What an AI Risk Disposition actually contains
AI Committees keep approving systems they can't defend later. The Risk Disposition memo is the artifact that fixes this. Here is what goes into one, section by section, with examples from a real assessed system.
Most AI Committees we talk to are stuck in the same loop. A product team brings an AI feature for review. Slides happen. The committee asks reasonable questions: what data does it use, what could go wrong, who is responsible. Reasonable answers come back. The minutes record “approved with conditions”. Six months later, the model has been swapped, two tools have been added, and the conditions are nowhere in the sign-off document because the sign-off document does not exist as a structured artifact. It exists as a paragraph in a meeting note.
When the regulator, the customer's procurement team, or your own auditor asks “why was this approved, and what controls were promised”, the answer is a slide deck and three emails. That is not a defensible record. It is a memory aid.
A Risk Disposition is the decision, the rationale, the controls that were promised, the evidence that was missing, the risk that was accepted, and the conditions under which the decision must be revisited — written down once, in one place, in a form a stranger can read.
Drel ships this artifact as the primary output of every assessment. This piece walks through what it contains, section by section, and what each section should look like when it is doing its job. Examples are taken from the public demo disposition for an enterprise procurement agent.
Why this matters now
The category of “AI system you are responsible for in production” is growing faster than committees can review it. Copilot Studio agents, RAG pipelines over regulated data, MCP servers connecting external tools to internal models, third-party AI features inside SaaS your company has already procured. Each of these is a decision your committee will be asked to defend.
Several frameworks now expect this defensibility to be written down. ISO/IEC 42001 requires an AI management system with documented risk treatment plans. EU AI Act Article 9 requires a documented risk management system for high-risk systems, with the identified risks, the controls applied, and the residual risk justification. OWASP Agentic Top 10 expects you to know which agentic risks apply to your system and which controls close them. NIST AI RMF's Map and Manage functions effectively describe a disposition memo without using the word.
None of these frameworks tell you what the artifact looks like. They tell you it must exist. The point of the Risk Disposition is to be that artifact, in a shape that all of them can read.
The seven sections of a disposition memo
A disposition that holds up under review has seven sections. They are not interchangeable, and skipping any of them tends to be the moment a procurement reviewer asks “and what about…”.
Anatomy of an AI Risk Disposition memo
Section 1 (Decision) is the artifact. Sections 2–6 are its evidence base. Section 7 is the record of who accepted it.
- Decision — proceed / conditional / restricted pilot / hold / decline.
- Rationale — why this decision, in two or three sentences.
- Required controls — what must be in place, grouped by lifecycle gate.
- Residual risk — what is being accepted, by whom, under what condition.
- Evidence gaps — what we don't yet know, and how we will close it.
- Re-assessment triggers — what changes invalidate this decision.
- Sign-off log — who approved, in what role, when.
Each of these has a wrong version that is far more common than the right one. The rest of this piece walks through them one by one.
1. The decision
A decision is one of a small, fixed set of choices. We use: proceed, conditional, restricted pilot only, hold, or decline. The point of having a fixed enum is that “approved with conditions” cannot be confused with “approved” later. They are different states.
Restricted pilot, in particular, is the state most AI features should live in for longer than teams want to admit. It says: this system may operate, but inside a stated boundary — a user group, a data class, a workflow — and the path to full production is itself a list of conditions.
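Because the decision is a fixed enum and not free text, it can be modeled directly. The sketch below is illustrative Python, not Drel's actual schema; the class and function names are our own for this post.

```python
from enum import Enum

class Decision(Enum):
    """The fixed set of disposition outcomes. With a fixed enum,
    'approved with conditions' can never be read back as 'approved'."""
    PROCEED = "proceed"
    CONDITIONAL = "conditional"
    RESTRICTED_PILOT = "restricted pilot only"
    HOLD = "hold"
    DECLINE = "decline"

def parse_decision(text: str) -> Decision:
    """Fails loudly on anything outside the enum, instead of
    silently recording a state that does not exist."""
    return Decision(text.strip().lower())
```

The point of the parser failing on "approved with conditions" is the point of the enum itself: a state that is not in the list cannot be recorded, so it cannot be quietly conflated with one that is.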
2. The rationale
The rationale is two or three sentences. It says why, in terms a reader who was not in the meeting can follow. It names the headline risk, the headline mitigation, and the headline residual exposure.
Bad rationale: “The system was reviewed and found acceptable for restricted pilot.” This says nothing.
Good rationale, from the demo case: “The procurement agent may operate in a restricted pilot scoped to non-binding supplier analysis (no contract approvals, no email send). The agent's retrieval surface is limited to non-confidential policy excerpts, and high-value decisions require a human approval boundary. Residual exposure to prompt injection from supplier-submitted documents is accepted on the condition that the boundary is enforced at the model gateway, not at the UI.”
That paragraph names the scope, the dominant mitigation (boundary at the gateway), and the residual exposure that everyone has consciously agreed to live with.
3. Required controls, by lifecycle gate
Controls without a deadline are aspirational. Controls without an owner are orphaned. Controls without a lifecycle gate get re-litigated every meeting because no one remembers whether they were supposed to be in place by pilot or by production.
The Drel disposition splits controls into three gates:
- Before pilot expansion — must be in place before the pilot widens beyond its initial scope.
- Before full production — must be in place before the system serves the unrestricted user population.
- Ongoing — must remain in place for as long as the system operates.
Each control row carries an identifier (so it can be cited from a ticket, a code review, or a regulator letter), an owner role (not a person — people change roles), a deadline, a status, a framework tag, and a verification method that says how we will know it is true.
Example — required controls, procurement agent (restricted pilot disposition)
| ID | Control | Owner role | Gate | Verification method |
|---|---|---|---|---|
| C-01 | Human approval boundary enforced at model gateway before any supplier communication is sent | Security Architecture | Before pilot expansion | Architecture review + boundary test with mock send attempt |
| C-02 | Retrieval scoped to non-confidential policy excerpts only — no access to contract data or pricing | Data Engineering | Before pilot expansion | Access control test: query for contract data returns empty |
| C-03 | Prompt injection red-team exercise covering supplier-submitted document inputs | Security Architecture | Before full production | Red-team report with findings and mitigations attached |
| C-04 | Tool call log retained for 90 days with full invocation detail (tool id, parameters, user session) | Platform Engineering | Before full production | Log sample showing all required fields populated |
| C-05 | Quarterly review of retrieval scope against approved data classification | AI Governance | Ongoing | Review schedule + last-run report |
| C-06 | Re-assessment triggered if human approval boundary is removed or scope expanded beyond supplier analysis | AI Governance | Ongoing | Trigger registered in disposition; re-review process documented |
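A control row is structured data, and it pays to treat it that way: a row that cannot be filtered by gate cannot block a gate. A hypothetical sketch, with field names assumed rather than taken from Drel:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Control:
    id: str            # citable from a ticket, a code review, or a regulator letter
    text: str          # what must be true
    owner_role: str    # a role, not a person; people change roles
    gate: str          # "before pilot expansion" | "before full production" | "ongoing"
    verification: str  # how we will know it is true
    status: str = "open"

def controls_blocking(gate: str, controls: list[Control]) -> list[Control]:
    """Controls that must still be verified before the named gate may open."""
    return [c for c in controls if c.gate == gate and c.status != "verified"]

controls = [
    Control("C-01", "Human approval boundary enforced at model gateway",
            "Security Architecture", "before pilot expansion",
            "Architecture review + boundary test with mock send attempt"),
    Control("C-03", "Prompt injection red-team exercise",
            "Security Architecture", "before full production",
            "Red-team report with findings and mitigations attached"),
]
```

With this shape, "can the pilot expand?" stops being a meeting question and becomes a query: the gate opens when `controls_blocking` returns an empty list.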
4. Residual risk acceptance
Every disposition has residual risk. The question is whether it has been named, by whom, and on what condition. “Risk accepted by the AI Committee” is not a residual risk record; it is a sentence about a meeting.
A residual risk record names the risk in plain language, the role accepting it (Business Owner, CISO delegate, DPO depending on the risk class), and the acceptance condition — the qualifier without which the acceptance falls away. Example: “Risk of policy drift between vendor email drafts and the approved tone-of-voice guide is accepted by the Head of Procurement on the condition that drafts continue to require human review before send. If the human review boundary is removed, this acceptance is invalidated and triggers re-review.”
The acceptance condition is the bridge between the residual-risk section and the re-assessment trigger section. They quote each other.
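That bridge can be made mechanical. In the illustrative sketch below (field names are our own, not Drel's), the residual-risk record carries the id of the trigger that invalidates it, so the acceptance falls away the moment the trigger fires:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ResidualRisk:
    risk: str                      # named in plain language
    accepted_by_role: str          # a role, e.g. "Head of Procurement"
    condition: str                 # the qualifier without which acceptance falls away
    invalidating_trigger_id: str   # cross-reference into the trigger register

def acceptance_holds(risk: ResidualRisk, fired_trigger_ids: set[str]) -> bool:
    """An acceptance is only valid while its condition's trigger has not fired."""
    return risk.invalidating_trigger_id not in fired_trigger_ids

drift_risk = ResidualRisk(
    risk="Policy drift between vendor email drafts and approved tone-of-voice guide",
    accepted_by_role="Head of Procurement",
    condition="Drafts continue to require human review before send",
    invalidating_trigger_id="T-01",  # the human-review-boundary-removed trigger
)
```

The cross-reference is the whole point: remove the human review boundary, trigger T-01 fires, and the acceptance is no longer something anyone can cite.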
5. Evidence gaps and closure plans
A good disposition is honest about what it does not know. Every assessment has evidence the reviewers wanted but did not get — a third-party model card with the training-data caveats we wanted, a red-team exercise we ran ourselves that didn't cover the full input surface, telemetry that doesn't yet exist.
Hiding these gaps creates the worst failure mode of all: the disposition reads as more confident than the team actually is. The honest move is to surface the gap, attach a closure plan, and put a date on it.
Evidence-gap rows live alongside required-control rows for a reason: a gap is just a control whose verification method is “still being captured”. They graduate into controls as the evidence arrives.
6. Re-assessment triggers
AI systems are not static. The model changes, the tools change, the data changes, the user population changes. Any of these can invalidate the disposition that was signed off. The re-assessment trigger section says, in advance: here is the list of changes for which this disposition no longer holds, and a re-review is required before continuing.
A trigger has a type — model change, tool added, data source change, autonomy increase, user group expansion, vendor change, production rollout, scope change — and a condition in plain language. When a trigger fires, it does not stop the system; it forces a re-disposition. The committee's job moves from “approve once” to “maintain a register of triggers”, which is exactly what an AI management system under ISO/IEC 42001 expects of you.
How a re-assessment trigger works
1. Trigger fires during the pilot. Example: “A new tool (email send) was added to the agent's tool manifest.” This matches a registered trigger condition.
2. Re-review opens. The team returns to the Threat model and Control plan steps for the new tool.
3. A new disposition is issued. The old one is archived with a “superseded by trigger” note.
The firing trigger does not delete the old disposition — it supersedes it. The old record stays in the audit pack, and the new one references it.
A disposition without triggers is a disposition that decides one thing once. A disposition with triggers is a disposition that knows when to wake up.
7. The sign-off log
Sign-offs are role-based, not person-based. The roles we surface by default are Security Architecture, AI Governance, DPO, Business Owner, and CISO delegate. A disposition can be approved by some of them, declined by another, and held by a fourth. The log shows each one explicitly: status, signer, date, optional comment.
The reason for role-based sign-off is rotation. People leave; roles persist. Two years later, “DPO approved on 2026-04-12” is more useful than “S. Vermeer approved on 2026-04-12” because the next DPO inherits the record.
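Role-based sign-off also makes "is this disposition fully approved?" a mechanical check. A sketch under the same caveat as the others (names assumed, not Drel's schema):

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class SignOff:
    role: str        # e.g. "DPO" — roles persist, people rotate
    status: str      # "approved" | "declined" | "held"
    signer: str      # recorded, but not what the check keys on
    signed_on: date
    comment: str = ""

def fully_approved(required_roles: set[str], log: list[SignOff]) -> bool:
    """True only when every required role has an explicit approval on record."""
    approved = {s.role for s in log if s.status == "approved"}
    return required_roles <= approved

log = [
    SignOff("DPO", "approved", "S. Vermeer", date(2026, 4, 12)),
    SignOff("Security Architecture", "approved", "M. Osei", date(2026, 4, 12)),
    SignOff("CISO delegate", "held", "J. Laan", date(2026, 4, 14),
            comment="Awaiting red-team report"),
]
```

Note that the check keys on `role`, not `signer`: two years later, the next DPO inherits the record without the function changing.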
The audit pack: how a disposition is read later
A disposition is not a one-time artifact. It lives in an audit pack — the bundle a procurement reviewer, an internal auditor, or a regulator will read months after the decision was made. The audit pack is the disposition memo plus the source artifacts that fed it (functional spec, technical design, vendor proposal, architecture diagram, MCP manifest, prompt spec, ADRs), the change sets between versions, and the evidence items each control points to.
The disposition memo is the front matter of the pack. It is what gets read. The rest is what gets cited from it.
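One way to keep the pack a single versioned bundle is a manifest that the disposition's control IDs resolve against. The file names and structure below are hypothetical, purely to show the shape:

```python
# Hypothetical audit-pack manifest: the disposition memo plus everything
# it cites, versioned together so the pieces cannot drift apart.
audit_pack = {
    "disposition": "disposition-v2.md",
    "supersedes": "disposition-v1.md",   # the old record stays in the pack
    "source_artifacts": [
        "functional-spec.pdf", "technical-design.md",
        "architecture-diagram.png", "mcp-manifest.json", "prompt-spec.md",
    ],
    "evidence": {
        "C-01": ["boundary-test-report.pdf"],
        "C-04": ["tool-call-log-sample.json"],
    },
}

def evidence_for(pack: dict, control_id: str) -> list[str]:
    """Resolve a control ID cited in the disposition to its evidence items."""
    return pack["evidence"].get(control_id, [])
```

A reviewer reading the memo cites C-01; the pack answers with the artifact. A control whose lookup returns nothing is, by definition, one of the evidence gaps from section 5.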
Five mistakes we keep seeing
- One disposition per project. Wrong unit. The unit is the AI system, not the project. A project may ship many systems; one system may span projects.
- Controls without verification methods. “Implemented” is not a state; it is a claim. The state is “verified via <named artifact>”.
- Residual risk that names no acceptor. “The committee accepted” is not an acceptor. A named role is.
- Re-assessment triggers omitted. The system will change. The disposition that does not anticipate change is a one-shot decision.
- The audit pack lives nowhere. The disposition is in Confluence, the controls are in Jira, the evidence is in SharePoint. None of them know about each other. The audit pack must be a single bundle, versioned together.
The decision the disposition makes possible
The point of writing a Risk Disposition is not the document. The point is the conversation it makes possible — between the AI Committee and the product team today, and between your future self and a regulator who has never seen this system before.
If you can hand them one artifact that names the decision, the conditions, the accepted risk, and the triggers, you have done the work. If the answer is a deck and three emails, you have not.
See a real disposition
The procurement-agent disposition referenced throughout this piece is public, ungated, and downloadable as Markdown. Read it, share the URL with your committee, or start a free assessment of your own system.
A note on scope: Drel reviews assessed systems against documented architecture, configuration and intent. It does not ingest live telemetry from production environments. Dispositions reflect the assessed system at the time of review and the re-assessment triggers that govern when the disposition must be revisited.