# Alert Routing
Alert routing determines how incoming signals — from RMM agents, Defend detections, Security events, Backups, and custom webhooks — become on-call incidents and which escalation policy fires.
## How Alerts Become Incidents
1. An alert arrives at the webhook ingestion endpoint (`POST /api/webhooks/ingest?key=<key>`).
2. On-Call extracts `title`, `severity`, and `dedup_key` from the payload.
3. If a matching open alert exists with the same `dedup_key` → the alert is deduplicated (merged, not a new incident).
4. Otherwise → a new `AlertDoc` is created with status `triggered`.
5. If alert grouping is configured → the alert is grouped into an existing open incident within the grouping window.
6. Otherwise → a new `IncidentDoc` is created referencing the alert.
7. The incident's service escalation policy begins firing.
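The decision flow above can be sketched as a small routing function. This is an illustrative sketch, not the product's server-side code; the function and parameter names are assumptions.

```python
def route_alert(alert, open_alerts, grouping_enabled, open_incident_in_window):
    """Illustrative routing decision for one incoming alert.

    `open_alerts` maps dedup_key -> existing open alert dict;
    the other flags stand in for the grouping-window lookup.
    """
    key = alert.get("dedup_key")
    if key is not None and key in open_alerts:
        # Same open condition: merge into the existing alert,
        # no new incident, no additional escalation.
        open_alerts[key].update(alert)
        return "deduplicated"
    if grouping_enabled and open_incident_in_window:
        # Attach the alert to the incident opened by an earlier alert.
        return "grouped"
    # Fresh condition: a new AlertDoc (status "triggered") and a new
    # IncidentDoc whose service escalation policy starts firing.
    return "new_incident"
```

The three return values mirror the three outcomes listed above: merge, group, or open a new incident.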
## Alert Sources

### RMM Integration
RMM critical device alerts route to On-Call via the bus event integration. When a critical threshold is breached on a monitored device, the RMM fires a bus event that On-Call receives and creates an incident.
- Source field: `rmm`
- Source ref: RMM alert/device ID
- Severity mapping: RMM severity maps directly to On-Call severity
### Defend Integration
Defend creates on-call incidents directly via the internal endpoint. See Defend Alert Integration for full details.
- Source field: `defend`
- Source ref: Defend alert ID (the dedup key prevents duplicate incidents per detection)
### Security Integration
The One Security events (M365 posture changes, dark web monitoring hits) arrive via the bus event listener. Only critical and high severity events trigger on-call incidents.
- Source field: `security`
- Severity threshold: only critical and high severity events create incidents; lower-severity events are dropped
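The threshold behavior amounts to a one-line check. This helper is illustrative only; the name and payload shape are assumptions.

```python
PAGING_SEVERITIES = {"critical", "high"}  # Security events below this are dropped

def should_create_incident(event: dict) -> bool:
    """Return True only for Security events severe enough to open an incident."""
    return event.get("severity") in PAGING_SEVERITIES
```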
### Backups Integration
Backup failure events arrive via the bus event listener. On-Call creates an incident for each backup failure event received.
- Source field: `backups`
- Common payloads: backup job failure, replication failure, storage quota exceeded
### PSA Integration
PSA SLA breach events can be routed to On-Call for after-hours escalation. Configure via a custom webhook from the PSA or through the bus integration.
### Custom Webhooks
Any monitoring tool can send alerts via the generic webhook endpoint:
- URL: `https://api.theoneoncall.app/api/webhooks/ingest?key=<integration_key>`
- Method: `POST`
- Content-Type: `application/json`
## Auto-Extraction Rules
On-Call extracts fields from the raw payload using these priority rules:
Severity extraction (first match wins):

1. `severity` field: maps `critical`/`high`/`medium`/`low`/`info` directly
2. `priority` field: maps `1` → critical, `2` → high, `3` → medium, `4` → low, `5` → info
3. `level` field: maps `error`/`critical` → high, `warning` → medium, `info` → low
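The severity priority chain can be sketched as follows. This is an illustrative helper, not the product's code; in particular, the `medium` fallback for unrecognized payloads is an assumption.

```python
SEVERITIES = {"critical", "high", "medium", "low", "info"}
PRIORITY_MAP = {"1": "critical", "2": "high", "3": "medium", "4": "low", "5": "info"}
LEVEL_MAP = {"error": "high", "critical": "high", "warning": "medium", "info": "low"}

def extract_severity(payload: dict, default: str = "medium") -> str:
    """First match wins: severity field, then priority, then level."""
    sev = payload.get("severity")
    if sev in SEVERITIES:
        return sev
    pri = str(payload.get("priority"))  # accept 1 or "1"
    if pri in PRIORITY_MAP:
        return PRIORITY_MAP[pri]
    lvl = payload.get("level")
    if lvl in LEVEL_MAP:
        return LEVEL_MAP[lvl]
    return default  # assumed fallback when no rule matches
```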
Title extraction (first match wins):

1. `title` field
2. `summary` field
3. `message` field
4. `name` field
5. Fallback: `"Alert from <integration_name>"`
Dedup key extraction (first match wins):

1. `dedup_key` field
2. `fingerprint` field
3. `id` field
4. If none found: no deduplication (every POST creates a new alert)
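The title and dedup-key fallback chains are simple field scans. These helpers are illustrative sketches of the rules listed above, not the product's code.

```python
def extract_title(payload: dict, integration_name: str) -> str:
    """Scan title-like fields in priority order; fall back to a generic title."""
    for field in ("title", "summary", "message", "name"):
        value = payload.get(field)
        if value:
            return str(value)
    return f"Alert from {integration_name}"

def extract_dedup_key(payload: dict):
    """Scan dedup-key fields in priority order; None means no deduplication."""
    for field in ("dedup_key", "fingerprint", "id"):
        value = payload.get(field)
        if value:
            return str(value)
    return None  # every POST will create a new alert
```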
### Example Payload
```json
{
  "title": "CPU usage critical on client-server-01",
  "severity": "critical",
  "dedup_key": "rmm-device-4821-cpu",
  "body": "CPU sustained at 98% for 5 minutes. Host: client-server-01.",
  "source_ref": "alert-4821"
}
```
## Alert Grouping
Alert grouping reduces incident noise by combining related alerts into a single incident rather than creating one incident per alert.
### Time-Based Grouping

Enabled by setting `alert_grouping: "time"` on the service. All alerts arriving within `alert_grouping_timeout_minutes` of an existing open alert with the same service and severity are grouped into the same incident.
Example: Five disk-full alerts arrive within 2 minutes from different devices. With 5-minute time grouping, all five become one incident — one page, one acknowledgment, one resolution.
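The grouping test described above can be sketched as a predicate over two alerts. Field names here are assumptions for illustration.

```python
from datetime import datetime, timedelta

def groups_with(existing: dict, incoming: dict, timeout_minutes: int) -> bool:
    """True if `incoming` should join `existing`'s incident under time grouping.

    Both alerts are dicts with service, severity, and received_at (datetime).
    """
    return (
        existing["service"] == incoming["service"]
        and existing["severity"] == incoming["severity"]
        and incoming["received_at"] - existing["received_at"]
            <= timedelta(minutes=timeout_minutes)
    )
```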
### No Grouping

Setting `alert_grouping: "none"` creates a separate incident for every alert. Use this for high-severity services where each alert needs independent tracking.
## Deduplication

Deduplication prevents alert storms from creating duplicate incidents. If an alert arrives with a `dedup_key` that matches an existing open alert, On-Call merges the new alert's data into the existing record without creating a new incident or triggering additional escalations.
Best practice: always set `dedup_key` to a stable identifier for the underlying condition (e.g., device ID + check type). If the dedup key changes on every alert, deduplication will not work.
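A stable key is one built only from identifiers that do not change between firings of the same condition. This hypothetical helper illustrates the device-id + check-type pattern:

```python
def make_dedup_key(device_id: str, check_type: str) -> str:
    """Stable per-condition key: the same device and check always yield the
    same key, so repeat alerts merge instead of paging again.
    Never include a timestamp or counter here -- that would make every
    alert unique and defeat deduplication entirely."""
    return f"{device_id}-{check_type}"
```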
## Routing Rules
Each webhook integration supports routing rules that filter or transform incoming alerts:
- Severity filter: Only pass through alerts above a minimum severity
- Title match: Only pass through alerts whose title contains a specific string
- Custom field match: Route based on any field in the raw payload
Routing rules are configured on the integration (Alert Sources → edit integration). Alerts that don't match the rules are dropped silently — no incident, no notification.
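The three rule types could be evaluated along these lines. This is a sketch only; the rule and field names are assumptions, not the product's schema.

```python
SEVERITY_RANK = {"info": 0, "low": 1, "medium": 2, "high": 3, "critical": 4}

def passes_rules(alert: dict, rules: dict) -> bool:
    """Illustrative rule evaluation; alerts that fail any rule are dropped."""
    min_sev = rules.get("min_severity")
    if min_sev and SEVERITY_RANK[alert.get("severity", "info")] < SEVERITY_RANK[min_sev]:
        return False  # severity filter: below the minimum
    needle = rules.get("title_contains")
    if needle and needle not in alert.get("title", ""):
        return False  # title match: required substring missing
    for field, expected in rules.get("field_match", {}).items():
        if alert.get(field) != expected:
            return False  # custom field match failed
    return True
```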
Rules can also be updated via the API (`PUT /api/integrations/:id`). A visual rule builder in the UI is on the roadmap.

## Alert States
| State | Description |
|---|---|
| `triggered` | Alert received; incident created; escalation active |
| `acknowledged` | On-call technician acknowledged; escalation paused |
| `grouped` | Alert merged into an existing incident |
| `resolved` | Alert and incident closed |
| `suppressed` | Alert received during a maintenance window or after dedup |
## Viewing Raw Alerts

1. Click Incidents in the sidebar.
2. Open an incident.
3. Scroll to the Alerts section — it shows all alerts grouped into this incident.
4. Click any alert to view the raw JSON payload from the source.
This is useful for debugging misconfigured integrations or unexpected severity mappings.