AI Governance

AI governance in The One AI Platform helps MSPs enforce responsible AI usage across their organization, covering usage policies, compliance frameworks, risk management, and shadow AI detection.

Usage Policies

Define what your team can and cannot do with AI:

  1. Navigate to Settings → Governance → Policies
  2. Create policies for data handling, model usage, and output restrictions
  3. Assign policies to teams or the entire organization
  4. The platform enforces policies automatically during AI interactions

| Policy Type | What It Controls |
| --- | --- |
| Data Classification | Which data sensitivity levels can be sent to AI models |
| Model Restrictions | Which models teams can access |
| Output Review | Whether AI-generated content requires human review before use |
| Prompt Guardrails | Blocked topics or restricted prompt patterns |
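
To make the four policy types concrete, here is a minimal sketch of how a policy engine might evaluate an AI interaction. All names (`Policy`, `evaluate_prompt`, the model and topic values) are illustrative assumptions, not the platform's actual API:

```python
# Hypothetical policy-evaluation sketch; names and schema are assumptions,
# not The One AI Platform's real interface.
from dataclasses import dataclass

@dataclass
class Policy:
    allowed_classifications: set  # Data Classification
    allowed_models: set           # Model Restrictions
    blocked_topics: set           # Prompt Guardrails
    require_review: bool = False  # Output Review

def evaluate_prompt(policy: Policy, model: str, classification: str, prompt: str) -> list:
    """Return a list of policy violations; an empty list means the request is allowed."""
    violations = []
    if classification not in policy.allowed_classifications:
        violations.append(f"data classification '{classification}' not permitted")
    if model not in policy.allowed_models:
        violations.append(f"model '{model}' is restricted")
    for topic in policy.blocked_topics:
        if topic.lower() in prompt.lower():
            violations.append(f"blocked topic: '{topic}'")
    return violations

policy = Policy(
    allowed_classifications={"public", "internal"},
    allowed_models={"approved-model"},
    blocked_topics={"credentials"},
)
print(evaluate_prompt(policy, "approved-model", "internal", "Summarize this ticket"))
# → []
```

Assigning the `Policy` object per team or organization-wide mirrors step 3 above; enforcement at every interaction mirrors step 4.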

Compliance Frameworks

Map your AI usage to compliance requirements:

  • SOC 2 — Audit trail of all AI interactions
  • HIPAA — PHI detection and blocking in prompts
  • NIST AI RMF — Risk assessment for AI usage patterns
  • Internal policies — Custom compliance checks
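
As an illustration of the HIPAA item, PHI detection and blocking in prompts can be sketched as a pattern scan before the prompt reaches a model. The patterns below are deliberately simplified assumptions; production PHI detection requires far more than two regexes:

```python
import re

# Simplified, illustrative PHI patterns -- real detection needs a much
# broader ruleset (names, dates, addresses, etc.).
PHI_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.IGNORECASE),
}

def find_phi(prompt: str) -> list:
    """Return the PHI categories detected in a prompt."""
    return [name for name, pattern in PHI_PATTERNS.items() if pattern.search(prompt)]

def check_prompt(prompt: str) -> str:
    """Block the prompt if any PHI pattern matches; otherwise pass it through."""
    hits = find_phi(prompt)
    if hits:
        raise ValueError(f"Prompt blocked: possible PHI detected ({', '.join(hits)})")
    return prompt

print(find_phi("Patient SSN is 123-45-6789"))  # → ['SSN']
```

The same gate pattern extends to the other frameworks: the SOC 2 item logs every interaction, and custom checks plug in as additional pattern sets.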

Risk Dashboard

The risk dashboard shows:

  • AI usage patterns across your organization
  • Policy violations and near-misses
  • Data classification of prompts sent to models
  • Model usage distribution
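
The dashboard metrics above amount to aggregations over an interaction log. A rough sketch, assuming a hypothetical log schema (the real platform's log format is not shown here):

```python
from collections import Counter

# Hypothetical interaction-log entries; the schema is an assumption for
# illustration, not the platform's actual audit format.
logs = [
    {"model": "model-a", "classification": "internal",   "violation": False},
    {"model": "model-a", "classification": "restricted", "violation": True},
    {"model": "model-b", "classification": "public",     "violation": False},
]

model_distribution = Counter(entry["model"] for entry in logs)        # model usage distribution
by_classification = Counter(entry["classification"] for entry in logs)  # data classification of prompts
violation_count = sum(entry["violation"] for entry in logs)           # policy violations

print(dict(model_distribution))  # → {'model-a': 2, 'model-b': 1}
print(violation_count)           # → 1
```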

Shadow AI Detection

Identify unsanctioned AI tool usage:

  • Monitor for external AI service access
  • Flag usage of non-approved models
  • Provide approved alternatives to redirect shadow AI usage
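
The monitoring step can be pictured as comparing observed egress hosts against an approved list. The host names below are placeholder assumptions; only the flagging logic is the point:

```python
# Hypothetical shadow-AI detection from egress logs; all host names are
# placeholders, not real endpoints.
APPROVED_AI_HOSTS = {"ai.sanctioned.example"}
KNOWN_AI_HOSTS = {"ai.sanctioned.example", "api.unsanctioned-ai.example"}

def detect_shadow_ai(egress_hosts: list) -> list:
    """Return known AI hosts seen in traffic that are not on the approved list."""
    return sorted(
        host for host in set(egress_hosts)
        if host in KNOWN_AI_HOSTS and host not in APPROVED_AI_HOSTS
    )

traffic = ["ai.sanctioned.example", "api.unsanctioned-ai.example", "docs.example.com"]
print(detect_shadow_ai(traffic))  # → ['api.unsanctioned-ai.example']
```

Flagged hosts then feed the third bullet: each hit is an opportunity to point the user at an approved alternative.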

ℹ️ Governance policies apply across all AI Platform features — Jarvis chat, custom agents, prompt templates, and the Studio app builder.

Next Steps