Behavioral AI Baseline

Defend builds a behavioral baseline for every enrolled device, then uses AI-powered anomaly scoring to detect deviations that may indicate compromise. This catches threats that signature-based rules miss — novel malware, living-off-the-land attacks, and insider threats.

How the Baseline Is Built

Learning Phase (Days 1–7)

When a device first enrolls, it enters a 7-day learning period:

  • Defend collects 30 days of historical telemetry snapshots (if available from prior RMM data)
  • The AI builds a feature baseline across six dimensions
  • No anomaly alerts fire during learning mode — only informational notes
  • After 7 days, the device graduates to active scoring
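The learning/active gating described above can be sketched as a small helper. This is illustrative only — the field names and the server-side enrollment logic are assumptions; the 7-day window is the documented value.

```python
from datetime import datetime, timedelta

LEARNING_PERIOD_DAYS = 7  # documented learning window

def scoring_phase(enrolled_at: datetime, now: datetime) -> str:
    """Return 'learning' (informational notes only) or 'active' (anomaly scoring)."""
    if now - enrolled_at < timedelta(days=LEARNING_PERIOD_DAYS):
        return "learning"
    return "active"
```

A device enrolled on day 0 stays in `learning` through day 6 and graduates to `active` scoring on day 7.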

What Is Baselined

| Feature | What It Measures |
| --- | --- |
| Process frequency | Which processes run and how often (normalized distribution) |
| Network diversity | Average daily external IPs contacted, known domains whitelist |
| File write patterns | Which directories see file writes and at what rate |
| Login timing | 24-hour histogram of logon events (detects off-hours activity) |
| Application count | Typical number of unique processes per day |
| Registry modification rate | Daily registry change volume (Windows only) |
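Two of these features can be sketched in a few lines to make the shapes concrete. The event structure (`{"process": ...}`, hour-of-day integers) is a hypothetical telemetry format, not Defend's actual schema.

```python
from collections import Counter

def process_frequency(events: list) -> dict:
    """Normalized distribution of process launches (hypothetical event shape)."""
    counts = Counter(e["process"] for e in events)
    total = sum(counts.values())
    return {proc: n / total for proc, n in counts.items()}

def login_histogram(logon_hours: list) -> list:
    """24-bucket histogram of logon events by hour of day."""
    hist = [0] * 24
    for hour in logon_hours:
        hist[hour % 24] += 1
    return hist
```

The normalized distribution is what lets the model compare activity across devices with very different overall volumes, and the 24-bucket histogram is what makes a 3 AM logon stand out against an 8 AM–6 PM pattern.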

Active Phase (Day 7+)

Once graduated, every 30 minutes the system:

  1. Takes a snapshot of the device's last 24 hours of activity
  2. Extracts features matching the baseline dimensions
  3. Submits to the AI Gateway for anomaly scoring
  4. Receives a score from 0 (normal) to 100 (highly anomalous)
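The AI Gateway's scoring model is not described here, so the following is only a toy stand-in that illustrates the contract — baseline and observed features in, a 0–100 score out — using mean relative deviation. Do not read it as the real algorithm.

```python
def anomaly_score(baseline: dict, observed: dict) -> float:
    """Toy stand-in for the AI Gateway scoring call: mean relative
    deviation across baseline features, mapped to 0-100."""
    deviations = []
    for feature, base in baseline.items():
        obs = observed.get(feature, 0.0)
        denom = max(abs(base), 1e-9)           # avoid division by zero
        deviations.append(min(abs(obs - base) / denom, 1.0))  # cap each at 100%
    return round(100 * sum(deviations) / max(len(deviations), 1), 1)
```

A device that matches its baseline scores 0; a device contacting 23 external IPs against a baseline of 5 (as in the example below) maxes out that feature's contribution.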

Anomaly Scoring

| Score | Severity | Action |
| --- | --- | --- |
| 90–100 | High | Alert created, investigation recommended |
| 75–89 | Medium | Alert created |
| 60–74 | Informational | Logged, no alert |
| 0–59 | Normal | No action |

The default alert threshold is 75. You can adjust this per organization in Settings → Behavioral AI — lowering the threshold increases sensitivity (more alerts), raising it reduces noise.
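The band-to-severity mapping and the adjustable alert threshold compose like this (a sketch of the documented bands; the function name is illustrative):

```python
def classify(score: int, alert_threshold: int = 75) -> tuple:
    """Map a 0-100 anomaly score to (severity, alerts) using the
    documented bands and the per-org alert threshold."""
    if score >= 90:
        severity = "high"
    elif score >= 75:
        severity = "medium"
    elif score >= 60:
        severity = "informational"
    else:
        severity = "normal"
    return severity, score >= alert_threshold
```

Note how lowering the threshold to 60 would make informational-band scores alert as well — the severity label stays the same, only alerting behavior changes.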

AI Confidence Scores

Each anomaly alert includes a confidence breakdown showing which features contributed most to the score:

  • "Process frequency anomaly: 92 — 47 new processes not seen in baseline"
  • "Network diversity anomaly: 78 — contacted 23 external IPs vs. baseline of 5"
  • "Login timing anomaly: 85 — logon at 3:14 AM, typical hours are 8 AM–6 PM"

This helps analysts quickly determine whether the anomaly is a genuine threat or a change in legitimate behavior.
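Ranking per-feature contributions, as in the breakdown above, is straightforward to sketch (the feature names and scores here are taken from the examples; the selection logic is an assumption):

```python
def top_contributors(feature_scores: dict, n: int = 3) -> list:
    """Return the n features that contributed most to the anomaly score,
    highest first (illustrative ranking, not the gateway's internals)."""
    return sorted(feature_scores.items(), key=lambda kv: kv[1], reverse=True)[:n]
```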

Baseline Reset

After major software changes on an endpoint (OS upgrade, application deployment, role change), the baseline may produce false positives. To reset:

  1. Navigate to the device in the Defend console
  2. Click Reset Baseline
  3. The device re-enters a 7-day learning period
  4. Previous baseline history is archived (not deleted)
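The reset flow above — archive the old baseline, re-enter learning — can be modeled as a small state transition. The device dictionary shape is entirely hypothetical; the console's Reset Baseline button performs the equivalent server-side.

```python
def reset_baseline(device: dict, now: str) -> dict:
    """Archive the current baseline (never delete it) and restart the
    7-day learning clock. Field names are illustrative."""
    device.setdefault("baseline_archive", []).append(device.pop("baseline", None))
    device["phase"] = "learning"
    device["learning_started"] = now
    return device
```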

False Positive Management

When an anomaly alert is a false positive:

  1. Mark the detection as False Positive
  2. The AI incorporates this feedback into future scoring for that device
  3. Over time, the system learns what "normal" looks like for each endpoint

The ML pipeline runs weekly retraining (Sundays) that incorporates analyst labels — confirmed threats, false positives, and benign-but-unusual events — to continuously improve scoring accuracy.
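One simple way per-device feedback like this could work is damping: repeated false-positive labels on a device pull its future scores down, while confirmed threats keep them up. This is a speculative sketch, not Defend's actual feedback mechanism.

```python
def adjusted_score(raw: float, fp_count: int, tp_count: int) -> float:
    """Illustrative feedback damping: the more false positives analysts
    have labeled on this device (relative to true positives), the more
    the raw anomaly score is discounted, up to 50%."""
    damping = fp_count / (fp_count + tp_count + 1)
    return round(raw * (1 - 0.5 * damping), 1)
```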

ℹ️Behavioral AI works best when analysts consistently label detections. Each True Positive and False Positive label improves the model for all Defend customers through the anonymized ML training pipeline.

Insider Threat Detection

Defend's behavioral AI also feeds into insider threat scoring by detecting:

  • Unusual data access patterns (bulk file reads from sensitive directories)
  • Privilege escalation attempts outside normal admin workflows
  • Off-hours activity from user accounts flagged in People as departing

These signals are correlated with People product data when the integration is active.
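The third signal — off-hours activity from departing users — reduces to a join between endpoint events and the People integration's departing-user list. The event shape and field names below are assumptions for illustration.

```python
def insider_risk_flags(events: list, departing_users: set,
                       work_hours: tuple = (8, 18)) -> list:
    """Flag off-hours activity from accounts marked as departing in People.
    Each event is a hypothetical {"user": ..., "hour": ...} record."""
    flags = []
    for event in events:
        off_hours = not (work_hours[0] <= event["hour"] < work_hours[1])
        if off_hours and event["user"] in departing_users:
            flags.append(event["user"])
    return flags
```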

Next Steps