This is the self-assessment version. The facilitated diagnostic includes independent validation, layer-by-layer analysis, and a board-ready Go / Conditional Go / No-Go brief with explicit approval conditions.

Request a Facilitated Diagnostic
Layer 1 of 9: Decision Criticality & Scope
Establishes whether the AI system is operating in a decision domain where errors carry material clinical, financial, or reputational consequences.
High-stakes: errors are difficult to detect or reverse.
Question 1
Is the AI system's decision scope precisely defined — including what it is permitted to decide, recommend, flag, or withhold — and is this documented and agreed with clinical and operational leadership?
Satisfied — 3 points. The system's decision scope is formally documented, approved by relevant leadership, and operationally enforced through technical constraints.
Partial — 2 points. A general scope definition exists but lacks formal approval or complete technical enforcement boundaries.
Not Satisfied — 0 points. The system's scope is informal or poorly defined, and scope creep is a real risk.
Question 2
Has the system been explicitly classified by decision criticality — and are higher-criticality functions subject to stricter governance controls than lower-criticality ones?
Satisfied — 3 points. A formal criticality classification framework is in place. High-criticality functions have additional oversight, approval, and monitoring requirements.
Partial — 2 points. Criticality is informally understood but has not been translated into differentiated governance controls.
Not Satisfied — 0 points. No criticality classification exists. All AI functions are governed uniformly regardless of consequence.
Question 3
Is there a defined mechanism for detecting when the AI system is operating outside its intended decision scope — and a clear protocol for escalation when this occurs?
Satisfied — 3 points. Out-of-scope detection mechanisms exist (technical and/or procedural), with a tested escalation pathway and documented response protocol.
Partial — 2 points. Escalation protocols exist, but out-of-scope detection relies primarily on manual oversight rather than systematic monitoring.
Not Satisfied — 0 points. No systematic out-of-scope detection exists. The system could drift beyond its intended scope without triggering any structured response.
AI Deployment Governance Gate — Results

Your AI Decision Risk Profile

0 / 81
Layer-by-Layer Breakdown

  • Decision Criticality & Scope: 0/9
  • Explainability & Transparency: 0/9
  • Human-in-the-Loop Design: 0/9
  • Data Quality & Bias Risk: 0/9
  • Operational Readiness: 0/9
  • Governance & Accountability: 0/9
  • Clinical Safety & Error Tolerance: 0/9
  • Regulatory & Compliance Alignment: 0/9
  • Post-Deployment Monitoring: 0/9
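The arithmetic behind the profile score follows from the rubric above: nine layers, three questions per layer, each answered Satisfied (3 points), Partial (2 points), or Not Satisfied (0 points), for a maximum of 9 per layer and 81 overall. A minimal sketch of that aggregation, assuming a simple per-layer sum (the function name and response format here are illustrative, not the tool's actual implementation):

```python
# Illustrative scoring sketch: 9 layers x 3 questions, each scored
# Satisfied=3, Partial=2, Not Satisfied=0, for a maximum of 81.
# Layer names and point values come from the assessment itself;
# the input format is an assumption for this example.

LAYERS = [
    "Decision Criticality & Scope",
    "Explainability & Transparency",
    "Human-in-the-Loop Design",
    "Data Quality & Bias Risk",
    "Operational Readiness",
    "Governance & Accountability",
    "Clinical Safety & Error Tolerance",
    "Regulatory & Compliance Alignment",
    "Post-Deployment Monitoring",
]

POINTS = {"satisfied": 3, "partial": 2, "not_satisfied": 0}

def score(responses):
    """responses: {layer_name: [answer, answer, answer]} -> (total, per_layer)."""
    per_layer = {
        layer: sum(POINTS[a] for a in responses.get(layer, []))
        for layer in LAYERS
    }
    return sum(per_layer.values()), per_layer

# Answering every question "Satisfied" yields the 81-point maximum.
total, breakdown = score({layer: ["satisfied"] * 3 for layer in LAYERS})
print(total)  # 81
```

Unanswered layers score 0/9, which is why a fresh assessment shows 0 / 81 across the board.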

Indicative Findings


This self-assessment demonstrates the methodology. The full governance gate goes further.

A facilitated HealthElevate AI Governance Gate engagement includes independent validation, layer-by-layer written analysis, risk mitigation mapping, and a board-ready Go / Conditional Go / No-Go brief with explicit approval conditions.

Rapid Brief: £1,200
Full Diagnostic: £4,000
Request a Facilitated Governance Gate