
Business Logic Document

If the work doesn't have clear logic and defined success criteria, don't build an AI system around it. You end up designing the business logic and the AI simultaneously — that is the most reliable path to failed transformation.

The Business Logic Document is the prerequisite. It distills what a senior expert knows — the rules, the patterns, the exceptions, the non-negotiables — into explicit logic an AI agent can follow. Not a process map. Not a policy document. The actual decision logic: when this situation occurs, apply this rule, unless this exception applies, in which case do this instead.

Every AI system that performs at senior quality has this document underneath it. Every AI system that underperforms is operating without it.


0. Framing

| Question | Answer |
| --- | --- |
| Which workflow is this logic for? | [specific workflow from Constraint Map] |
| Who is the primary knowledge source? | [name — the senior expert whose logic gets encoded] |
| Who validates the output? | [name — who signs off that the logic is correct] |
| What is the success standard? | [measurable — what does "correct" look like for this workflow] |
| What does a wrong output cost? | [error consequence — sets the quality floor] |
| When was this workflow last reviewed by someone with authority to change it? | [date] |

1. Workflow Definition

Define the workflow before defining the logic within it. Be precise about inputs, outputs, and boundaries.

| Element | Description |
| --- | --- |
| Trigger | What starts this workflow? What event or request initiates it? |
| Inputs | What information is required before the workflow can begin? What format? |
| Output | What does the workflow produce? What is delivered, to whom, in what form? |
| Time constraint | How quickly must the output be produced? |
| Quality standard | What does good output look like? What does a failure look like? |
| Volume | How often does this workflow run? |
| Frequency of exceptions | What % of runs encounter something unusual? |

2. Standard Logic

Document the rules that govern the majority of cases (the ~80% that are not exceptions).

Use this structure for each rule: WHEN [situation] → THEN [action] → BECAUSE [reason / policy]

| # | Rule | WHEN | THEN | BECAUSE |
| --- | --- | --- | --- | --- |
| 1 | | | | |
| 2 | | | | |
| 3 | | | | |
| 4 | | | | |
| 5 | | | | |
| 6 | | | | |
| 7 | | | | |
| 8 | | | | |

Capture the reason. Rules without reasons are brittle. When the reason changes, a rule without a reason cannot be updated correctly. When the AI encounters a novel situation, the reason tells it which direction to default.
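The WHEN → THEN → BECAUSE structure can be sketched as data, with the reason stored alongside the rule rather than lost in a comment. This is a minimal illustration, not a prescribed implementation; the rule contents, field names, and the 10k threshold are invented examples.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    when: Callable[[dict], bool]  # WHEN: predicate over the case
    then: str                     # THEN: action to apply
    because: str                  # BECAUSE: reason / policy, kept with the rule

# Hypothetical rules for an invoice-approval workflow.
RULES = [
    Rule(
        when=lambda case: case.get("invoice_total", 0) > 10_000,
        then="route_to_manual_approval",
        because="Policy: spend above 10k requires a second signature",
    ),
    Rule(
        when=lambda case: case.get("invoice_total", 0) <= 10_000,
        then="auto_approve",
        because="Policy: low-value spend is pre-approved",
    ),
]

def apply_rules(case: dict) -> tuple[str, str]:
    """Return the first matching action and its reason; escalate if none match."""
    for rule in RULES:
        if rule.when(case):
            return rule.then, rule.because
    return "escalate_to_human", "No standard rule matched this case"
```

Because every decision carries its `because`, an incorrect output can be traced back to the specific policy that produced it.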


3. Decision Points

Map the forks where logic branches. These are the moments where input determines which path is taken.

| Decision Point | Question Being Asked | Option A | Option B | Option C | Deciding Factor |
| --- | --- | --- | --- | --- | --- |
| | | | | | |

For each decision point, identify whether the deciding factor is:

  • Data-based (a measurable threshold, a field value, a calculation result) → AI can handle
  • Judgment-based (pattern recognition, relationship history, qualitative read) → human in the loop
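The data/judgment split above maps directly to a routing table: data-based factors go to the AI, judgment-based factors go to a human. A small sketch, with made-up decision-point names; the only substantive choice is that anything unclassified defaults to the human side.

```python
# Illustrative classification of deciding factors (names are hypothetical).
DECISION_POINTS = {
    "discount_level": "data",                # measurable threshold → AI can handle
    "client_relationship_risk": "judgment",  # qualitative read → human in the loop
}

def route(decision_point: str) -> str:
    """Route a decision point to 'ai' or 'human' based on its deciding factor."""
    kind = DECISION_POINTS.get(decision_point)
    if kind == "data":
        return "ai"
    # Judgment-based or unmapped decision points default to the safe side.
    return "human"
```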

4. Exception Library

Document the cases that fall outside standard logic. Each exception is a rule waiting to be written.

| # | Exception Description | Frequency | What Standard Logic Would Do | Correct Handling | Source |
| --- | --- | --- | --- | --- | --- |
| 1 | | Rare / Occasional / Common | | | [initials of person who knows this] |
| 2 | | | | | |
| 3 | | | | | |
| 4 | | | | | |
| 5 | | | | | |

Completeness target: Review the last [X months] of this workflow's outputs. Every case that required a supervisor's intervention or produced a rework loop is a candidate exception. Capture all of them before build begins.


5. Non-Negotiables

The hard boundaries. Rules that cannot be violated regardless of any other logic.

| # | Non-Negotiable | Why It Is Fixed | Who Owns It |
| --- | --- | --- | --- |
| 1 | | [regulatory / legal / brand / relationship] | |
| 2 | | | |
| 3 | | | |

Non-negotiables become the AI system's hard stops. They are not judgment calls. They are constraints. The AI must escalate to a human any time a non-negotiable is at risk.


6. Escalation Logic

Define exactly when the AI system must stop and pass to a human. Be specific — vague escalation triggers produce systems that either escalate too often (useless) or too rarely (dangerous).

| Trigger | Escalation Reason | Who Receives It | Response Time Required |
| --- | --- | --- | --- |
| Confidence below [X%] on output | Insufficient pattern match | [role] | [time] |
| Exception type not in library | Novel case requiring judgment | [role] | [time] |
| Non-negotiable at risk | Hard stop — no AI decision | [role] | Immediate |
| Output value above $[X] threshold | High-stakes decision | [role] | [time] |
| Client flag / sensitivity marker | Relationship risk | [role] | [time] |

7. Validation Protocol

How do you know the logic is correct before the system goes live?

| Step | Action | Who | Pass Criteria |
| --- | --- | --- | --- |
| 1. Senior expert review | Knowledge source reads every rule and confirms accuracy | [name] | Zero corrections needed on re-read |
| 2. Historical case test | Run the logic against [X] past cases; compare AI output to actual outcomes | [name] | [X]% match rate |
| 3. Exception handling test | Feed the top 10 exceptions from Section 4; verify correct escalation | [name] | 10/10 escalate correctly |
| 4. Non-negotiable breach test | Construct cases where each non-negotiable is at risk; verify hard stop | [name] | 100% hard stop, 0% false pass |
| 5. Edge case stress test | Construct ambiguous inputs; verify escalation rather than confident error | [name] | Escalates on ambiguity |
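Step 2 (the historical case test) reduces to replaying past cases through the logic and measuring the match rate against what actually happened. A minimal sketch; the case-record shape and the pass threshold are assumptions you would set per workflow.

```python
from typing import Callable

def match_rate(cases: list[dict], decide: Callable) -> float:
    """Fraction of historical cases where the logic reproduces the actual outcome.

    Each case is assumed to look like {"inputs": ..., "actual_outcome": ...}.
    """
    if not cases:
        return 0.0
    hits = sum(1 for c in cases if decide(c["inputs"]) == c["actual_outcome"])
    return hits / len(cases)
```

Usage would be a simple gate before deployment, e.g. `match_rate(last_quarter_cases, apply_rules) >= 0.95`, where 0.95 stands in for the [X]% pass criterion in the table.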

The logic is not complete until the validation protocol passes. Deploying before validation is designing the business logic and the AI simultaneously — which is the failure mode this document exists to prevent.


8. Maintenance Protocol

Logic changes when the business changes. Define how updates are captured and deployed.

| Update Type | Trigger | Process | Owner | Cadence |
| --- | --- | --- | --- | --- |
| New exception discovered | AI output reviewed and found incorrect | Post-mortem → exception added → re-validated | [role] | Ongoing |
| Policy change | Regulation, client contract, or business rule changes | Policy update → rules reviewed → re-validated | [role] | On change |
| Quality threshold change | Business standard revised | Threshold update → historical test re-run | [role] | Quarterly |
| New workflow variant added | Business expands into new territory | New rules section added → full validation | [role] | On change |

Every update to this document must go through Step 4 (non-negotiable breach test) at minimum before deployment. Business logic that passes validation once is not validated forever.


Context

  • Business rules — Formal definition of the logic this document captures
  • Decision table — Structured format for complex rule sets with multiple conditions
  • Knowledge engineering — The discipline of encoding expert knowledge into machine-usable form

Questions

  • If your senior expert left tomorrow, how much of what they know exists in a form an AI system — or a new hire — could follow?

  • Which rule in this workflow is followed because "that's how we've always done it" with no documented reason?
  • Where does your standard logic fail — and is that failure mode currently in your exception library?
  • What is the cost of a wrong output in this workflow — and does your escalation logic catch it before the cost materialises?