Definition
AI use case selection is the process of choosing which problems to solve with AI in environments where regulatory, compliance, and reputational constraints limit what can be deployed. In regulated markets — banking, insurance, healthcare, public sector — selecting the wrong use case wastes time and erodes trust. Selecting the right one creates a reference case that unlocks the next ten. The discipline is not about finding the most impressive AI capability. It is about finding the problem where AI delivers measurable value within the constraints the organisation will actually accept.

When it matters
Why does use case selection matter more in regulated markets?
In unregulated markets, a failed AI experiment costs time and money. In regulated markets, it can cost compliance standing, customer trust, or regulatory sanctions. The risk surface is higher, so the selection bar must be higher. Regulated organisations also move more slowly. A poor first use case does not just fail — it poisons the next conversation. Leaders who approved a failed AI project become reluctant to approve the next one. The organisation learns that AI is risky rather than useful. Colin Carmichael argues that Scotland’s tendency is to spread effort across many use cases rather than selecting a small number and executing well. He advocates picking one to three national AI use cases that are large enough to shift economic outcomes and narrow enough to deliver within 90 days.

When should a country or sector shortlist use cases nationally?
National use case shortlisting matters when public sector demand can be aggregated across multiple buyers. Scotland has 32 local councils plus central government, NHS boards, and agencies. Many of them face the same problems — planning approvals, document processing, fraud detection, service triage. Aggregating demand creates a market large enough to justify building a product, not just a pilot. Publishing a national list of AI-suitable problems creates transparency. Startups can build on real demand rather than guessing what buyers might want.

How it works
What criteria should guide AI use case selection?
Three dimensions determine whether a use case is worth pursuing:

1. Impact — does solving this problem move a metric that matters? In regulated markets, impact means measurable improvement in cost, speed, accuracy, or customer outcome. Not a demonstration. A measurable shift.
2. Risk appetite — will the organisation accept AI in this process? High-risk processes (credit decisions, clinical diagnosis, regulatory reporting) require more governance, more oversight, and longer approval timelines. Lower-risk processes (document classification, internal workflow, customer triage) can be deployed faster with less friction.
3. Execution readiness — does the data exist, is it accessible, and can the team deliver? The most impactful use case is worthless if the data is siloed, unstructured, or politically contested. Execution readiness also includes whether the organisation has the technical team, the deployment infrastructure, and the change management capacity to absorb the solution.

How to score and rank use cases
Use a simple scoring matrix:

| Criterion | Question | Scoring |
|---|---|---|
| Impact | What metric shifts, by how much? | High / Medium / Low |
| Risk appetite | Will the organisation accept AI here? | Ready / Cautious / Resistant |
| Data readiness | Is the data accessible and structured? | Available / Partial / Blocked |
| Delivery speed | Can a working version ship in 90 days? | Yes / Possibly / No |
| Reusability | Can the solution serve multiple buyers? | Yes / Adapted / No |
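The matrix above can be turned into a simple numeric score. A minimal sketch follows; the weights (2/1/0 per rating) and the `planning_triage` candidate are illustrative assumptions, not values prescribed by the framework.

```python
# Minimal sketch: scoring one candidate use case against the matrix above.
# The 2/1/0 weights per rating are an illustrative assumption.

SCALES = {
    "impact":        {"High": 2, "Medium": 1, "Low": 0},
    "risk_appetite": {"Ready": 2, "Cautious": 1, "Resistant": 0},
    "data":          {"Available": 2, "Partial": 1, "Blocked": 0},
    "speed":         {"Yes": 2, "Possibly": 1, "No": 0},
    "reusability":   {"Yes": 2, "Adapted": 1, "No": 0},
}

def score(candidate: dict) -> int:
    """Sum the mapped ratings; a higher total means a stronger candidate."""
    return sum(SCALES[criterion][rating] for criterion, rating in candidate.items())

# Hypothetical candidate: a planning-approval triage process.
planning_triage = {
    "impact": "High",
    "risk_appetite": "Cautious",
    "data": "Available",
    "speed": "Yes",
    "reusability": "Yes",
}
print(score(planning_triage))  # 9 out of a maximum of 10
```

Equal weighting is a starting point; an organisation with a hard 90-day constraint might weight delivery speed more heavily.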
Practical steps
How to run a use case selection process
- Map the problem landscape — list every process where AI could plausibly add value. In a bank, this might be 30 to 50 processes. In a public sector body, it might be 20 to 30.
- Filter by risk appetite — remove any process where the organisation’s risk, compliance, or legal teams will not accept AI in the current regulatory environment. Do not try to change their minds at this stage. Accept the constraint.
- Score remaining candidates — use the matrix above. Score each candidate on impact, data readiness, delivery speed, and reusability.
- Select one to three — pick the top-ranked candidates. Do not try to run five or ten in parallel. Regulated organisations have limited capacity to absorb change. One well-executed deployment is worth more than five stalled pilots.
- Define the 90-day prototype — for each selected use case, define what a working version looks like in 90 days. Not a proof of concept. A working version that handles real data and delivers measurable improvement.
- Set success criteria before starting — define what metrics will be tracked, what improvement is expected, and what happens if the criteria are met (production deployment, not another pilot).
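The filter, score, and select steps above can be sketched as a small pipeline. The candidate names and ratings below are hypothetical examples, and the numeric rating map is an illustrative assumption.

```python
# Illustrative sketch of the selection steps: filter by risk appetite,
# score the remainder, and keep at most three candidates.
# All candidates and ratings here are hypothetical.

RATING = {"High": 2, "Medium": 1, "Low": 0,
          "Ready": 2, "Cautious": 1, "Resistant": 0,
          "Available": 2, "Partial": 1, "Blocked": 0,
          "Yes": 2, "Possibly": 1, "Adapted": 1, "No": 0}

candidates = [
    {"name": "Document classification", "risk": "Ready",
     "impact": "Medium", "data": "Available", "speed": "Yes", "reuse": "Yes"},
    {"name": "Credit decisioning", "risk": "Resistant",
     "impact": "High", "data": "Partial", "speed": "No", "reuse": "Adapted"},
    {"name": "Service triage", "risk": "Cautious",
     "impact": "High", "data": "Available", "speed": "Possibly", "reuse": "Yes"},
    {"name": "Fraud detection", "risk": "Cautious",
     "impact": "High", "data": "Partial", "speed": "Possibly", "reuse": "Yes"},
]

# Step 2: drop anything risk, compliance, or legal will not accept.
accepted = [c for c in candidates if c["risk"] != "Resistant"]

# Step 3: score the remaining candidates on the matrix criteria.
def score(c: dict) -> int:
    return sum(RATING[c[k]] for k in ("impact", "data", "speed", "reuse"))

# Step 4: select at most three, highest score first.
shortlist = sorted(accepted, key=score, reverse=True)[:3]
for c in shortlist:
    print(c["name"], score(c))
```

Note that the risk filter runs before scoring: a high-impact but resistant process (credit decisioning here) never reaches the ranking, which mirrors the instruction to accept the constraint rather than argue with it.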
Common mistakes
- Selecting the most impressive use case instead of the most deliverable one. Executive demos do not create organisational learning. Production deployments do.
- Trying to solve too many problems at once — regulated organisations have limited change capacity. One successful deployment teaches the organisation more than five stalled pilots.
- Ignoring data readiness — the most impactful use case is useless if the data is inaccessible, unstructured, or politically contested. Check data readiness before committing.
- Skipping risk and compliance alignment — deploying AI in a regulated process without early risk team involvement creates organisational antibodies that block future projects.
- Measuring by technology capability rather than by business outcome — the question is not whether the model works, but whether the process improves measurably.
Key takeaways
In regulated markets, the right first AI use case is rarely the most ambitious one. It is the one that delivers measurable value within constraints the organisation will accept, using data that already exists, in a timeframe short enough to maintain executive attention. Selecting fewer use cases and executing them well builds the organisational trust that makes the next ten possible. Spreading effort across many candidates produces innovation theatre — activity without outcomes.

Related pages
- AI in regulated markets — parent knowledge area
- Governance and guardrails — risk and compliance frameworks for AI deployment
- Security and deployment — technical deployment considerations
- AI Maturity Roadmap — assessing organisational readiness for AI
- Pilot to production — converting successful pilots to production
- Building Scotland Conversations — source conversations
- Colin Carmichael conversation — national AI use case shortlisting
- Scotland in the AI age — policy whitepaper on five strategic foundations
- AI for banking leadership — AI adoption in financial services
- Consent and risk in open banking — consent requirements for data-dependent use cases