# AI Governance

AI governance in The One AI Platform helps MSPs enforce responsible AI use across their organization through usage policies, compliance frameworks, risk management, and shadow AI detection.
## Usage Policies
Define what your team can and cannot do with AI:
- Navigate to Settings → Governance → Policies
- Create policies for data handling, model usage, and output restrictions
- Assign policies to teams or the entire organization
- The platform enforces policies automatically during AI interactions
| Policy Type | What It Controls |
|---|---|
| Data Classification | Which data sensitivity levels can be sent to AI models |
| Model Restrictions | Which models teams can access |
| Output Review | Whether AI-generated content requires human review before use |
| Prompt Guardrails | Blocked topics or restricted prompt patterns |
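The four policy types above can be thought of as checks that run before a prompt reaches a model. The following is a minimal, hypothetical sketch of that evaluation; the rule names, thresholds, and function are assumptions for illustration, not the platform's actual enforcement engine.

```python
# Illustrative sketch of how the four policy types might combine.
# All names and rules are hypothetical; the platform enforces
# policies internally during AI interactions.

BLOCKED_PATTERNS = {"credentials", "export customer list"}  # Prompt Guardrails
APPROVED_MODELS = {"gpt-4o", "claude-sonnet"}               # Model Restrictions
MAX_SENSITIVITY = 2  # Data Classification: 1=public, 2=internal, 3=restricted

def check_policy(prompt: str, model: str, sensitivity: int) -> list[str]:
    """Return a list of policy violations; an empty list means allowed."""
    violations = []
    if sensitivity > MAX_SENSITIVITY:
        violations.append("data-classification: sensitivity level too high")
    if model not in APPROVED_MODELS:
        violations.append(f"model-restriction: {model} is not approved")
    for pattern in BLOCKED_PATTERNS:
        if pattern in prompt.lower():
            violations.append(f"prompt-guardrail: blocked pattern '{pattern}'")
    return violations
```

An Output Review policy would sit after generation rather than before it, holding AI-generated content for human sign-off instead of rejecting the prompt.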
## Compliance Frameworks
Map your AI usage to compliance requirements:
- SOC 2 — Audit trail of all AI interactions
- HIPAA — PHI detection and blocking in prompts
- NIST AI RMF — Risk assessment for AI usage patterns
- Internal policies — Custom compliance checks
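As a concrete illustration of the HIPAA control above, PHI detection typically means screening prompts for identifier patterns before they leave the organization. This sketch is hypothetical: real PHI detection covers many more identifiers, and the MRN format shown is an assumption.

```python
import re

# Illustrative PHI screen in the spirit of the HIPAA control above.
# Real PHI detection covers far more identifiers; these two patterns
# (SSN-like and an assumed MRN format) are for demonstration only.
PHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),         # SSN-like number
    re.compile(r"\bMRN[:\s-]*\d{6,10}\b", re.I),  # medical record number (assumed format)
]

def contains_phi(prompt: str) -> bool:
    """Return True if any PHI-like pattern appears, so the prompt can be blocked."""
    return any(p.search(prompt) for p in PHI_PATTERNS)
```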
## Risk Dashboard
The risk dashboard shows:
- AI usage patterns across your organization
- Policy violations and near-misses
- Data classification of prompts sent to models
- Model usage distribution
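Conceptually, the dashboard aggregates per-interaction events into counts like these. The event shape below is an assumption for illustration; the platform collects and aggregates this data for you.

```python
from collections import Counter

# Hypothetical event records; the real schema is internal to the platform.
events = [
    {"team": "helpdesk", "model": "gpt-4o",        "violation": None},
    {"team": "helpdesk", "model": "gpt-4o",        "violation": "prompt-guardrail"},
    {"team": "noc",      "model": "claude-sonnet", "violation": None},
]

# Model usage distribution across the organization
model_usage = Counter(e["model"] for e in events)

# Policy violations grouped by type
violations = Counter(e["violation"] for e in events if e["violation"])
```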
## Shadow AI Detection
Identify unsanctioned AI tool usage:
- Monitor for external AI service access
- Flag usage of non-approved models
- Provide approved alternatives to redirect shadow AI usage
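The monitoring step above amounts to comparing observed AI service traffic against an allowlist. This is a simplified sketch under stated assumptions: the hostnames and allowlist are illustrative, and an MSP would maintain its own lists.

```python
# Illustrative shadow-AI screen: flag AI service hosts reached outside
# sanctioned channels. Hostnames here are assumptions for demonstration.
APPROVED_HOSTS = {"api.openai.com"}  # sanctioned through the platform
KNOWN_AI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def flag_shadow_ai(requested_hosts: list[str]) -> list[str]:
    """Return known AI hosts that were accessed but are not approved."""
    return sorted(
        h for h in set(requested_hosts)
        if h in KNOWN_AI_HOSTS and h not in APPROVED_HOSTS
    )
```

Flagged hosts are the redirect opportunities: users reaching them can be pointed at the approved alternatives mentioned above.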
ℹ️ Governance policies apply across all AI Platform features — Jarvis chat, custom agents, prompt templates, and the Studio app builder.
## Next Steps
- Usage Analytics — Track AI consumption
- Usage Quotas — Set spending limits
- Custom Agents — Build governed AI agents