Business Logic Document
If the work doesn't have clear logic and defined success criteria, don't build an AI system around it. You end up designing the business logic and the AI simultaneously — the most reliable path to a failed transformation.
The Business Logic Document is the prerequisite. It distills what a senior expert knows — the rules, the patterns, the exceptions, the non-negotiables — into explicit logic an AI agent can follow. Not a process map. Not a policy document. The actual decision logic: when this situation occurs, apply this rule, unless this exception applies, in which case do this instead.
Every AI system that performs at senior quality has this document underneath it. Every AI system that underperforms is operating without it.
0. Framing
| Question | Answer |
|---|---|
| Which workflow is this logic for? | [specific workflow from Constraint Map] |
| Who is the primary knowledge source? | [name — the senior expert whose logic gets encoded] |
| Who validates the output? | [name — who signs off that the logic is correct] |
| What is the success standard? | [measurable — what does "correct" look like for this workflow] |
| What does a wrong output cost? | [error consequence — sets the quality floor] |
| When was this workflow last reviewed by someone with authority to change it? | [date] |
1. Workflow Definition
Define the workflow before defining the logic within it. Be precise about inputs, outputs, and boundaries.
| Element | Description |
|---|---|
| Trigger | What starts this workflow? What event or request initiates it? |
| Inputs | What information is required before the workflow can begin? What format? |
| Output | What does the workflow produce? What is delivered, to whom, in what form? |
| Time constraint | How quickly must the output be produced? |
| Quality standard | What does good output look like? What does a failure look like? |
| Volume | How often does this workflow run? |
| Frequency of exceptions | What % of runs encounter something unusual? |
2. Standard Logic
Document the rules that govern the majority of cases (the ~80% that are not exceptions).
Use this structure for each rule: WHEN [situation] → THEN [action] → BECAUSE [reason / policy]
| # | Rule | WHEN | THEN | BECAUSE |
|---|---|---|---|---|
| 1 | ||||
| 2 | ||||
| 3 | ||||
| 4 | ||||
| 5 | ||||
| 6 | ||||
| 7 | ||||
| 8 | ||||
Capture the reason. Rules without reasons are brittle. When the reason changes, a rule without a reason cannot be updated correctly. When the AI encounters a novel situation, the reason tells it which direction to default.
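The WHEN → THEN → BECAUSE structure maps directly onto a small data structure, which is one way to hand the rules to a system rather than leave them as prose. This is an illustrative sketch only; the `Rule` class, `apply_rules`, and the invoice thresholds are hypothetical examples, not prescribed tooling:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Rule:
    when: Callable[[dict], bool]   # WHEN: does this situation match?
    then: str                      # THEN: the action to take
    because: str                   # BECAUSE: the reason / policy behind the rule

def apply_rules(case: dict, rules: list) -> Optional[Rule]:
    """Return the first rule whose WHEN condition matches, or None (escalate)."""
    for rule in rules:
        if rule.when(case):
            return rule
    return None  # no standard rule applies: treat as an exception

# Hypothetical example rules for an invoice-approval workflow.
rules = [
    Rule(when=lambda c: c["invoice_total"] <= 5000,
         then="auto-approve",
         because="Policy delegates approval under the $5k threshold"),
    Rule(when=lambda c: c["invoice_total"] > 5000,
         then="route to finance lead",
         because="Amounts above threshold require named sign-off"),
]

matched = apply_rules({"invoice_total": 1200}, rules)
```

Keeping `because` on the object means the reason travels with the rule: when a policy changes, you can find every rule that cites it.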
3. Decision Points
Map the forks where logic branches. These are the moments where input determines which path is taken.
| Decision Point | Question Being Asked | Option A | Option B | Option C | Deciding Factor |
|---|---|---|---|---|---|
For each decision point, identify whether the deciding factor is:
- Data-based (a measurable threshold, a field value, a calculation result) → AI can handle
- Judgment-based (pattern recognition, relationship history, qualitative read) → human in the loop
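The data-based / judgment-based split above can be made executable as a routing function. A minimal sketch, assuming hypothetical factor labels; the key design choice is that anything unrecognised defaults to the human path:

```python
def route_decision(deciding_factor: str) -> str:
    """Route a decision point based on the kind of deciding factor."""
    data_based = {"threshold", "field_value", "calculation"}
    judgment_based = {"pattern_recognition", "relationship_history",
                      "qualitative_read"}
    if deciding_factor in data_based:
        return "ai"                    # measurable: AI can handle
    if deciding_factor in judgment_based:
        return "human_in_the_loop"     # qualitative: human decides
    return "human_in_the_loop"         # unknown factor: default to the safe side
```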
4. Exception Library
Document the cases that fall outside standard logic. Each exception is a rule waiting to be written.
| # | Exception Description | Frequency | What Standard Logic Would Do | Correct Handling | Source |
|---|---|---|---|---|---|
| 1 | | Rare / Occasional / Common | | | [initials of person who knows this] |
| 2 | |||||
| 3 | |||||
| 4 | |||||
| 5 | |||||
Completeness target: Review the last [X months] of this workflow's outputs. Every case that required a supervisor's intervention or produced a rework loop is a candidate exception. Capture all of them before build begins.
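The exception library can be expressed as a lookup that overrides standard logic, with the essential property that an unrecognised exception escalates rather than falling through to the standard path. A sketch with hypothetical exception tags and handlings:

```python
# Hypothetical exception library: known exceptions override standard logic.
EXCEPTION_LIBRARY = {
    "duplicate_po_number": "hold and request corrected PO",
    "client_on_credit_hold": "pause fulfilment, notify account manager",
}

def handle(case_tags: set) -> str:
    """Apply documented exception handling, escalate novel exceptions,
    and fall through to standard logic for the ~80% of clean cases."""
    for tag in case_tags:
        if tag in EXCEPTION_LIBRARY:
            return EXCEPTION_LIBRARY[tag]      # documented correct handling
    if case_tags:
        return "escalate: novel exception"     # not in library: human judgment
    return "standard logic"                    # nothing unusual flagged
```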
5. Non-Negotiables
The hard boundaries. Rules that cannot be violated regardless of any other logic.
| # | Non-Negotiable | Why It Is Fixed | Who Owns It |
|---|---|---|---|
| 1 | | [regulatory / legal / brand / relationship] | |
| 2 | |||
| 3 | |||
Non-negotiables become the AI system's hard stops. They are not judgment calls. They are constraints. The AI must escalate to a human any time a non-negotiable is at risk.
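Because non-negotiables are hard stops rather than weighted factors, they are checked first, before any other logic runs, and a breach always escalates. A sketch, with hypothetical checks standing in for the rows of the table above:

```python
from typing import Optional

def check_non_negotiables(case: dict, checks: list) -> Optional[str]:
    """Return a hard-stop message if any non-negotiable is at risk,
    else None (safe to continue with standard logic)."""
    for name, at_risk in checks:
        if at_risk(case):
            return f"HARD STOP: {name} at risk"
    return None

# Hypothetical non-negotiables for illustration.
checks = [
    ("regulatory filing deadline", lambda c: c.get("days_to_deadline", 99) < 2),
    ("client contract exclusivity", lambda c: c.get("competitor_mention", False)),
]
```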
6. Escalation Logic
Define exactly when the AI system must stop and pass to a human. Be specific — vague escalation triggers produce systems that either escalate too often (useless) or too rarely (dangerous).
| Trigger | Escalation Reason | Who Receives It | Response Time Required |
|---|---|---|---|
| Confidence below [X%] on output | Insufficient pattern match | [role] | [time] |
| Exception type not in library | Novel case requiring judgment | [role] | [time] |
| Non-negotiable at risk | Hard stop — no AI decision | [role] | Immediate |
| Output value above $[X] threshold | High-stakes decision | [role] | [time] |
| Client flag / sensitivity marker | Relationship risk | [role] | [time] |
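The escalation table above reads naturally as an ordered sequence of trigger checks, with the non-negotiable check first because it is the only immediate hard stop. A sketch; the 0.8 confidence and $10,000 thresholds are placeholders for the bracketed values:

```python
from typing import Optional

def escalation(output: dict) -> Optional[str]:
    """Return the escalation reason for the first trigger that fires,
    or None if the AI may proceed."""
    if output.get("non_negotiable_at_risk"):
        return "hard stop (immediate)"          # checked first, no AI decision
    if output.get("confidence", 1.0) < 0.8:
        return "insufficient pattern match"
    if output.get("exception_type") == "novel":
        return "novel case requiring judgment"
    if output.get("value", 0) > 10_000:
        return "high-stakes decision"
    if output.get("client_flag"):
        return "relationship risk"
    return None  # no trigger fired: AI proceeds
```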
7. Validation Protocol
How do you know the logic is correct before the system goes live?
| Step | Action | Who | Pass Criteria |
|---|---|---|---|
| 1. Senior expert review | Knowledge source reads every rule and confirms accuracy | [name] | Zero corrections needed on re-read |
| 2. Historical case test | Run the logic against [X] past cases; compare AI output to actual outcomes | [name] | [X]% match rate |
| 3. Exception handling test | Feed the top 10 exceptions from Section 4; verify correct escalation | [name] | 10/10 escalate correctly |
| 4. Non-negotiable breach test | Construct cases where each non-negotiable is at risk; verify hard stop | [name] | 100% hard stop, 0% false pass |
| 5. Edge case stress test | Construct ambiguous inputs; verify escalation rather than confident error | [name] | Escalates on ambiguity |
The logic is not complete until the validation protocol passes. Deploying before validation is designing the business logic and the AI simultaneously — which is the failure mode this document exists to prevent.
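Step 2 (the historical case test) reduces to a replay and a match rate. A sketch, assuming a 95% pass threshold in place of the bracketed [X]%:

```python
def match_rate(logic, past_cases: list) -> float:
    """Replay past cases through the encoded logic and return the
    fraction whose output matches the actual historical outcome."""
    matches = sum(1 for inputs, actual in past_cases if logic(inputs) == actual)
    return matches / len(past_cases)

def passes_validation(rate: float, threshold: float = 0.95) -> bool:
    return rate >= threshold

# Hypothetical replay: a toy logic function against three past cases.
logic = lambda c: "approve" if c["total"] <= 5000 else "escalate"
past_cases = [
    ({"total": 100}, "approve"),
    ({"total": 9000}, "escalate"),
    ({"total": 4000}, "escalate"),   # historical mismatch: logic says approve
]
rate = match_rate(logic, past_cases)
```

The mismatched case is the useful output: each one is either a bug in the encoded logic or a candidate for the exception library in Section 4.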
8. Maintenance Protocol
Logic changes when the business changes. Define how updates are captured and deployed.
| Update Type | Trigger | Process | Owner | Cadence |
|---|---|---|---|---|
| New exception discovered | AI output reviewed and found incorrect | Post-mortem → exception added → re-validated | [role] | Ongoing |
| Policy change | Regulation, client contract, or business rule changes | Policy update → rules reviewed → re-validated | [role] | On change |
| Quality threshold change | Business standard revised | Threshold update → historical test re-run | [role] | Quarterly |
| New workflow variant added | Business expands into new territory | New rules section added → full validation | [role] | On change |
Every update to this document must go through Step 4 (non-negotiable breach test) at minimum before deployment. Business logic that passes validation once is not validated forever.
Context
- Constraint Map — Identifies the workflow this document governs
- Context Architecture — The data and institutional knowledge the logic operates on
- AI-Native Future State — The redesigned workflow this logic enables
- Process Modelling — DDPI: Standardisation step — this document is the output
- Quality Assurance — Deming's principle: cease dependence on inspection. Quality built in, not bolted on.
Links
- Business rules — Formal definition of the logic this document captures
- Decision table — Structured format for complex rule sets with multiple conditions
- Knowledge engineering — The discipline of encoding expert knowledge into machine-usable form
Questions
- If your senior expert left tomorrow, how much of what they know exists in a form an AI system — or a new hire — could follow?
- Which rule in this workflow is followed because "that's how we've always done it" with no documented reason?
- Where does your standard logic fail — and is that failure mode currently in your exception library?
- What is the cost of a wrong output in this workflow — and does your escalation logic catch it before the cost materialises?