Why must governance come before deployment?
Most organisations treat governance as a compliance gate at the end of a deployment process. The team builds the system, integrates it, runs a pilot, and then asks the compliance team to approve it. This sequence fails for three reasons:
- Compliance teams raise blockers that require redesign. If the governance framework is not considered during design, the system may collect data it should not, make decisions it cannot explain, or lack the audit trail regulators expect.
- Retroactive governance is superficial. A governance layer added after deployment typically covers what the system does, not why it was chosen, how alternatives were evaluated, or what limits apply. Regulators expect the full decision chain.
- Accountability gaps become visible under scrutiny. When a regulator or auditor asks “who approved this AI application and on what basis?”, the organisation needs a documented answer — not a retroactive memo.
What does an AI governance framework include?
A governance framework for AI in regulated organisations has five components:
1. Use case approval
Before any AI system is built or bought, the use case must be approved through a structured process. This means (a minimal sketch of these artefacts follows the list):
- Risk tiering. Classify the use case by risk level. A chatbot answering FAQ questions is low risk; an AI system making credit decisions is high risk. The governance burden should be proportionate to the risk tier.
- Purpose definition. Document what the system will do, what data it will use, what decisions it will make or support, and who will be affected.
- Alternative assessment. Document why AI is the right approach and what alternatives were considered.
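The sketch below captures the three artefacts in a structured approval record. This is a minimal Python sketch; the `RiskTier` values and `UseCaseApproval` fields are illustrative assumptions, not a standard schema.

```python
# Minimal sketch of a use case approval record.
# All names and fields are illustrative assumptions, not a standard.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # advisory only, no sensitive data
    MEDIUM = "medium"  # customer-facing, supports decisions
    HIGH = "high"      # makes or influences regulated decisions


@dataclass
class UseCaseApproval:
    """Captures the three approval artefacts before build or buy."""
    use_case_id: str
    risk_tier: RiskTier
    # Purpose definition: what the system does and who is affected.
    purpose: str
    data_sources: list[str]
    affected_parties: list[str]
    # Alternative assessment: why AI, and what else was considered.
    rationale: str
    alternatives_considered: list[str] = field(default_factory=list)
    approved_by: str = ""  # empty until sign-off is recorded

    def is_approved(self) -> bool:
        return bool(self.approved_by)
```

Storing the record as structured data rather than free text makes the later audit trail and risk-tier checks straightforward to automate.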
2. Decision rights
Define who can approve, modify, override, and retire AI systems.

| Role | Responsibility |
|---|---|
| Business owner | Owns the use case, defines requirements, accountable for outcomes |
| AI/ML team | Builds or configures the system, responsible for technical performance |
| Risk and compliance | Evaluates regulatory implications, approves risk classification |
| Data protection officer | Assesses data handling, consent, and GDPR compliance |
| Executive sponsor | Approves high-risk deployments, escalation point for governance disputes |
3. Guardrail design
Guardrails are the operational limits that constrain AI behaviour. They fall into four categories, illustrated in the sketch after this list:
- Input guardrails. Filter or validate inputs before the AI processes them. Examples: reject prompts containing sensitive data categories, validate input data against expected schemas, rate-limit API calls.
- Output guardrails. Review or constrain outputs before they reach the user or downstream system. Examples: confidence thresholds below which the output is routed to human review, banned content filters, output format validation.
- Human-in-the-loop triggers. Define scenarios where the AI output must be reviewed by a human before action is taken. Examples: all decisions above a value threshold, all outputs where the model confidence is below a defined level, any output affecting a vulnerable customer.
- Drift monitors. Track whether the AI system’s behaviour is changing over time. Examples: statistical tests comparing current output distributions against baseline, alert thresholds for accuracy degradation, scheduled model revalidation.
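This sketch assumes a Python service where each output carries a confidence score, and uses `scipy.stats.ks_2samp` for the drift test; the thresholds, banned-term check, and function names are illustrative assumptions, not prescribed values.

```python
# Minimal sketch of the four guardrail types. Thresholds and names
# are illustrative assumptions, not prescribed values.
from scipy.stats import ks_2samp

CONFIDENCE_FLOOR = 0.85  # below this, route to human review (assumed)
DRIFT_P_VALUE = 0.01     # significance level for drift alerts (assumed)


def check_input(prompt: str, banned_terms: set[str]) -> bool:
    """Input guardrail: reject prompts containing banned terms."""
    lowered = prompt.lower()
    return not any(term in lowered for term in banned_terms)


def route_output(confidence: float) -> str:
    """Output guardrail plus human-in-the-loop trigger: low-confidence
    outputs go to a reviewer instead of the customer."""
    return "human_review" if confidence < CONFIDENCE_FLOOR else "auto_release"


def drift_alert(baseline: list[float], current: list[float]) -> bool:
    """Drift monitor: two-sample KS test comparing current output
    scores against the baseline distribution."""
    result = ks_2samp(baseline, current)
    return result.pvalue < DRIFT_P_VALUE
```

In practice each guardrail would also write to the audit trail described in the next component, so reviewers can see which checks fired and why.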
4. Audit trail
Every AI decision in a regulated environment must be traceable. The audit trail should document (see the logging sketch after this list):
- What input the system received
- What output it produced
- What model version produced it
- Whether a human reviewed or overrode the output
- What guardrails were active at the time
- When the decision was made
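One way to satisfy these requirements is an append-only log with one structured entry per decision. This sketch assumes JSON Lines storage and a hypothetical `log_decision` helper; the field names simply mirror the list above.

```python
# Minimal sketch of an audit trail entry, assuming append-only
# JSON Lines storage. The storage choice is an assumption.
import json
from datetime import datetime, timezone


def log_decision(path: str, *, input_ref: str, output: str,
                 model_version: str, human_reviewed: bool,
                 overridden: bool, active_guardrails: list[str]) -> None:
    """Append one traceable record per AI decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when decided
        "input_ref": input_ref,                  # what input was received
        "output": output,                        # what output was produced
        "model_version": model_version,          # which model produced it
        "human_reviewed": human_reviewed,        # was it reviewed?
        "overridden": overridden,                # was it overridden?
        "active_guardrails": active_guardrails,  # guardrails in force
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

Logging `input_ref` as a reference rather than the raw input keeps personal data out of the log itself, which matters where inputs contain customer information.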
5. Review and retirement
Governance does not end at deployment. Active AI systems require:
- Scheduled reviews. Quarterly or semi-annual assessments of performance, fairness, and continued relevance.
- Incident response. A defined process for handling AI failures, including root cause analysis, remediation, and regulatory notification where required.
- Retirement criteria. Conditions under which the system should be switched off. Examples: performance drops below an accuracy threshold, the regulatory environment changes, the use case is superseded. A minimal automated check is sketched below.
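The retirement criteria can be expressed as an automated check that flags a system for review. The accuracy floor and the `should_retire` helper are illustrative assumptions; a tripped criterion should trigger a governance review, not an automatic shutdown.

```python
# Minimal sketch of retirement criteria as an automated flag.
# The accuracy floor is an assumed, per-use-case value.
ACCURACY_FLOOR = 0.90


def should_retire(current_accuracy: float,
                  regulation_changed: bool,
                  superseded: bool) -> bool:
    """Flag the system for retirement review if any criterion trips."""
    return (current_accuracy < ACCURACY_FLOOR
            or regulation_changed
            or superseded)
```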
How should governance scale with risk?
Not every AI application requires the same governance intensity. A risk-tiered approach matches governance effort to potential impact; a configuration sketch follows the table.

| Risk tier | Characteristics | Governance requirements |
|---|---|---|
| Low | No customer-facing decisions, no sensitive data, advisory only | Documented purpose, basic guardrails, annual review |
| Medium | Customer-facing outputs, uses personal data, supports (not makes) decisions | Full use case approval, human-in-the-loop, quarterly review, audit trail |
| High | Makes or materially influences regulated decisions (credit, insurance, clinical) | Executive approval, comprehensive guardrails, continuous monitoring, external audit, regulatory notification |
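To operationalise the tiers rather than leave them on paper, the table can be encoded as configuration that workflow tooling enforces. This sketch is illustrative; the keys, role names, and cadences are assumptions, not prescribed values.

```python
# Minimal sketch of the risk-tier table as machine-readable config.
# Keys, roles, and cadences are illustrative assumptions.
GOVERNANCE_BY_TIER = {
    "low": {
        "approvals": ["business_owner"],
        "human_in_the_loop": False,
        "review_cadence_months": 12,  # annual review
        "external_audit": False,
    },
    "medium": {
        "approvals": ["business_owner", "risk_and_compliance"],
        "human_in_the_loop": True,
        "review_cadence_months": 3,   # quarterly review
        "external_audit": False,
    },
    "high": {
        "approvals": ["business_owner", "risk_and_compliance",
                      "data_protection_officer", "executive_sponsor"],
        "human_in_the_loop": True,
        "review_cadence_months": 1,   # stand-in for continuous monitoring
        "external_audit": True,
    },
}
```

A deployment pipeline can then refuse to release any system whose recorded approvals do not match its tier's requirements.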
Common mistakes
- Selecting high-risk use cases before governance exists. Starting with the highest-impact use case is tempting but dangerous. If the governance framework does not exist, the first failure will set back the entire AI programme.
- Treating compliance as a final gate. Compliance teams need to be involved from use case selection, not invited at the end. Late involvement means late blockers.
- Building governance on paper only. A governance policy document is necessary but not sufficient. Governance must be operationalised — embedded in workflows, automated where possible, and regularly tested.
- Ignoring model drift. AI systems degrade. Input data changes, user behaviour shifts, and model accuracy declines. Without drift monitoring, governance is a snapshot, not a system.
- Failing to document the decision path. Regulators do not just ask “what did the system do?” They ask “why was this system chosen, who approved it, and how was it monitored?” The full chain must be documented.
- Applying the same governance to every use case. Over-governing low-risk applications slows adoption. Under-governing high-risk applications creates compliance gaps. Risk tiering is essential.
Key takeaways
- AI governance must be designed before deployment, not added after. Regulators expect a documented decision chain from use case selection through to monitoring.
- Five components: use case approval, decision rights, guardrail design, audit trail, and review/retirement.
- Guardrails come in four types: input, output, human-in-the-loop, and drift monitoring. Match intensity to risk tier.
- Risk tiering ensures governance effort is proportionate. Low-risk applications need basic guardrails. High-risk applications need comprehensive monitoring and external audit.
- Governance is operational, not documentary. A policy without workflow integration is a compliance gap.
Related pages
- How to select AI use cases in regulated markets
- Security and deployment for AI in regulated environments
- AI in regulated markets knowledge
- AI Maturity Roadmap framework
- Evidence packs for procurement
- Vendor risk and assurance
- Business AI Alliance
- Consent and risk in open banking
- Identity and trust infrastructure in open banking