Context Architecture
AI without organizational context performs like an entry-level hire with no experience. Context is the infrastructure.
The difference between an AI system that performs at junior quality and one that performs at senior expert quality is not the model. The models are largely equivalent. The difference is context: thirty-five years of transaction history, institutional patterns, policy documents, exception handling logic, and the tacit knowledge that a senior person carries in their head. Loading that context is the core work of AI transformation — not model selection.
This blueprint maps what needs to be loaded, where it lives, what quality it is in, and what work is required before any AI system can operate at the quality standard the business requires.
0. Framing
| Question | Answer |
|---|---|
| Which workflow or agent is this context architecture for? | [specific workflow from Constraint Map] |
| Who is the "senior expert" this AI must approximate? | [name or role — their judgment is what gets encoded] |
| How long has this person been doing this work? | [years — this determines the depth of context needed] |
| What quality standard defines "good enough"? | [measurable — error rate, client satisfaction, speed] |
| What is the acceptable quality floor? (60% AI efficacy may cost more than it saves) | [threshold below which the system should escalate to human] |
1. Context Inventory
Map every type of context the AI system needs to perform the workflow at the required quality level.
Institutional Knowledge
| Context Type | Description | Where It Lives | Format | Quality | Volume |
|---|---|---|---|---|---|
| Transaction history | Past deals, clients, outcomes, patterns | [CRM / files / memory] | [structured / unstructured] | [clean / messy / partial] | [rows / documents] |
| Policy documents | Rules governing decisions, exceptions, non-negotiables | [wiki / drive / email] | [doc / email / informal] | [current / outdated / missing] | [count] |
| Institutional logic | How decisions get made — the unwritten rules | [in people's heads] | [tacit] | [not yet encoded] | — |
| Exception library | Edge cases and how they were handled | [tribal knowledge / email threads] | [unstructured] | [partial] | — |
| Terminology | Domain-specific language, acronyms, internal shorthand | [glossary or none] | [doc / tacit] | [partial / missing] | — |
Operational Context
| Context Type | Where It Lives | Format | Quality | Action Needed |
|---|---|---|---|---|
| Client/customer profiles | ||||
| Product or service catalogue | ||||
| Pricing rules and discount logic | ||||
| Approval thresholds and escalation rules | ||||
| Regulatory and compliance constraints | ||||
| Competitor and market reference data |
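The inventory tables above can be captured as a small data structure so that the quality assessment, loading plan, and maintenance protocol in later sections all operate on the same records. A minimal sketch — the field names and example rows are illustrative, not prescribed by this blueprint:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContextSource:
    """One row of the context inventory."""
    name: str                   # e.g. "Transaction history"
    description: str
    location: str               # where it lives: CRM, drive, "in people's heads"
    fmt: str                    # structured / unstructured / tacit
    quality: str                # clean / messy / partial / not yet encoded
    completeness: float = 0.0   # % of what's needed that exists (Section 2)
    reliability: float = 0.0    # % accurate / current (Section 2)
    last_verified: Optional[str] = None  # ISO date; feeds the staleness check

# Example inventory rows mirroring the tables above
inventory = [
    ContextSource("Transaction history", "Past deals, clients, outcomes",
                  "CRM", "structured", "partial"),
    ContextSource("Institutional logic", "How decisions get made",
                  "in people's heads", "tacit", "not yet encoded"),
]
```

Keeping the inventory as structured records rather than a static table makes the later checks (threshold, staleness, gates) trivial to automate.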
2. Quality Assessment
Rate each context source on two dimensions: completeness and reliability.
| Context Source | Completeness (% of what's needed exists) | Reliability (% accurate / current) | Gap |
|---|---|---|---|
| [source] | [%] | [%] | [what's missing or stale] |
| [source] | [%] | [%] | |
| [source] | [%] | [%] | |
Minimum viable context threshold: What is the minimum context completeness at which the AI system produces acceptable quality? Below this threshold, the system should not be deployed — it performs like a junior hire with no experience, which is worse than having a human do the work.
Minimum threshold: [X]% completeness, [X]% reliability
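The threshold rule can be made executable: deployment is blocked unless every required source clears both bars. A sketch, assuming each source is scored 0–100 on the two dimensions from the table above; the threshold values are placeholders to be set per workflow:

```python
MIN_COMPLETENESS = 70  # placeholder for [X]% completeness — set per workflow
MIN_RELIABILITY = 90   # placeholder for [X]% reliability

def meets_minimum_viable_context(sources):
    """Return (ok, failures): any source below either bar blocks deployment."""
    failures = [
        s["name"] for s in sources
        if s["completeness"] < MIN_COMPLETENESS or s["reliability"] < MIN_RELIABILITY
    ]
    return (len(failures) == 0, failures)

sources = [
    {"name": "Transaction history", "completeness": 85, "reliability": 95},
    {"name": "Exception library",   "completeness": 40, "reliability": 90},
]
ok, failures = meets_minimum_viable_context(sources)
# ok is False here: the exception library is below the completeness bar
```

The check is deliberately conservative: one weak source fails the whole gate, because the AI's output quality is bounded by its weakest context input.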
3. Context Loading Plan
For each gap identified above, define the work required to load it.
| Context Gap | Method to Close | Owner | Effort | Priority |
|---|---|---|---|---|
| Tacit knowledge in senior expert's head | Structured interview + business logic document | [name] | [days] | HIGH / MED / LOW |
| Transaction history in unstructured format | Data extraction + cleaning + structuring | [name] | [days] | |
| Policy documents scattered across drives | Audit + consolidation + versioning | [name] | [days] | |
| Exception library — tribal knowledge | Workshop + documentation sprint | [name] | [days] | |
| Outdated records | Verification sprint + update cycle | [name] | [days] | |
Total context loading effort: [X days / weeks]
This effort is part of the build cost in the AI ROI Model. It is commonly underestimated by 50–80% because context loading is invisible work — leaders rarely see it happening, so they rarely budget for it.
4. Context Maintenance Protocol
Context that is loaded once and never revisited degrades. Institutional logic changes. Policies update. New exceptions emerge. Define how context stays current.
| Context Type | Update Trigger | Update Method | Owner | Cadence |
|---|---|---|---|---|
| Policy documents | Policy change | Version-controlled update | [role] | On change |
| Transaction history | New deal / client | Automated or manual logging | [role] | Continuous |
| Institutional logic | New exception handled | Post-mortem → exception library update | [role] | After each edge case |
| Pricing and product | Change in offering | Catalogue update | [role] | On change |
| Regulatory constraints | Regulation change | Compliance audit → update | [role] | Quarterly |
Staleness indicator: When was the context last verified? Any source not verified within [X months] should be flagged as potentially unreliable.
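The staleness indicator is a date comparison. A sketch, assuming each source records an ISO last-verified date (or `None` if never verified) and the window from the paragraph above is expressed in days:

```python
from datetime import date, timedelta

STALENESS_WINDOW = timedelta(days=180)  # placeholder for [X months]

def stale_sources(last_verified_by_source, today=None):
    """Flag any source not verified within the window as potentially unreliable."""
    today = today or date.today()
    flagged = []
    for name, last_verified in last_verified_by_source.items():
        if last_verified is None:
            flagged.append(name)  # never verified: always stale
        elif today - date.fromisoformat(last_verified) > STALENESS_WINDOW:
            flagged.append(name)
    return flagged

flags = stale_sources(
    {"Policy documents": "2024-01-10", "Pricing and product": None},
    today=date(2024, 3, 1),
)
# "Pricing and product" has never been verified, so it is flagged
```

Running this check on the maintenance cadence (e.g. quarterly) turns the staleness indicator from a question into an alert.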
5. Human Escalation Boundaries
Context architecture defines where the AI operates. It also defines where it must stop and escalate to a human.
| Scenario | Why AI cannot handle it | Escalation Path |
|---|---|---|
| Novel situation with no historical precedent | No context exists to pattern-match against | → Senior expert |
| Exception outside defined rules | Business logic document has no rule for this case | → Exception handler |
| Decision above approval threshold | Threshold defined in policy; human must sign off | → Approver |
| Emotionally sensitive client interaction | Relationship context requires human judgment | → Relationship owner |
| Regulatory edge case | Compliance risk requires human accountability | → Legal / compliance |
The escalation boundaries are not a failure of the AI system. They are the architecture. They define the job of every senior person post-transformation.
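The boundary table can be encoded as a routing function that runs before the AI acts. A sketch — the case fields, predicates, and threshold value are illustrative placeholders, not the blueprint's prescribed schema:

```python
APPROVAL_THRESHOLD = 50_000  # placeholder — the real value is defined in policy

def escalation_path(case):
    """Return the human path for a case the AI must not handle, else None."""
    if case.get("no_precedent"):       # novel situation: nothing to pattern-match
        return "senior expert"
    if case.get("outside_rules"):      # business logic document has no rule
        return "exception handler"
    if case.get("value", 0) > APPROVAL_THRESHOLD:
        return "approver"              # above threshold: human must sign off
    if case.get("sensitive"):          # relationship context needs human judgment
        return "relationship owner"
    if case.get("regulatory_edge"):    # compliance risk needs human accountability
        return "legal / compliance"
    return None  # inside the AI's defined operating area

path = escalation_path({"value": 75_000})
# routes to the approver, because the value exceeds the threshold
```

Note the ordering: precedent and rule checks come before value checks, so a novel high-value case reaches the senior expert, not merely the approver.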
6. Context Quality Gates
Before the AI system goes live, verify:
- Transaction history covers at least [X months / X transactions] — enough for pattern recognition
- Policy documents are current — verified within the last [X months]
- Business logic document exists and has been validated by the senior expert who will sign off on AI output
- Exception library covers the top [X] edge cases — defined by reviewing last [X] months of exceptions
- Escalation paths are named — every boundary has a named human
- Context completeness ≥ minimum viable threshold defined in Section 2
- Maintenance protocol is assigned — every context type has an owner and update trigger
Any unchecked box is a deployment blocker, not a nice-to-have.
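The checklist above can run as an automated pre-deployment gate. A sketch — the gate names mirror the bullets, and any unmet gate is returned as a blocker:

```python
def deployment_blockers(gates):
    """Gates map checklist item -> bool. Return unmet items; any result blocks go-live."""
    return [name for name, passed in gates.items() if not passed]

gates = {
    "transaction history covers minimum window": True,
    "policy documents verified recently": True,
    "business logic document validated by senior expert": False,
    "exception library covers top edge cases": True,
    "every escalation boundary has a named human": True,
    "context completeness >= minimum viable threshold": True,
    "every context type has a maintenance owner": True,
}
blockers = deployment_blockers(gates)
# one blocker: the business logic document has not been validated
```

Wiring this into the deployment pipeline makes "deployment blocker, not a nice-to-have" literal: the release fails until the list is empty.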
Context
- Constraint Map — Identifies which workflow this context architecture serves
- Business Logic Document — The structured encoding of tacit institutional knowledge
- AI-Native Future State — The workflow the context enables
- Process Mapping — Map the workflow before architecting the context it needs
- AI ROI Model — Context loading effort feeds the cost model
Links
- Knowledge management — Systematic approach to capturing, storing, and applying organizational knowledge
- Tacit knowledge — The know-how that lives in expert heads, not documents
- Retrieval-augmented generation — The technical pattern for loading context into AI systems at query time
Questions
- What context does your most senior expert carry in their head that does not exist in any document — and what happens to your AI system if they leave before it is loaded?
- Which of your context sources is the most out of date — and what decision is currently being made against stale information?
- What is the minimum viable context threshold below which your AI system is worse than a junior hire?
- When the senior expert is no longer available to validate AI output, how do you know the context is still accurate?