
Context Architecture

AI without organizational context performs like an entry-level hire with no experience. Context is the infrastructure.

The difference between an AI system that performs at junior quality and one that performs at senior expert quality is not the model. The models are largely equivalent. The difference is context: thirty-five years of transaction history, institutional patterns, policy documents, exception handling logic, and the tacit knowledge that a senior person carries in their head. Loading that context is the core work of AI transformation — not model selection.

This blueprint maps what needs to be loaded, where it lives, what quality it is in, and what work is required before any AI system can operate at the quality standard the business requires.


0. Framing

| Question | Answer |
| --- | --- |
| Which workflow or agent is this context architecture for? | [specific workflow from Constraint Map] |
| Who is the "senior expert" this AI must approximate? | [name or role — their judgment is what gets encoded] |
| How long has this person been doing this work? | [years — this determines the depth of context needed] |
| What quality standard defines "good enough"? | [measurable — error rate, client satisfaction, speed] |
| What is the acceptable quality floor? (60% AI efficacy may cost more than it saves) | [threshold below which the system should escalate to a human] |

1. Context Inventory

Map every type of context the AI system needs to perform the workflow at the required quality level.

Institutional Knowledge

| Context Type | Description | Where It Lives | Format | Quality | Volume |
| --- | --- | --- | --- | --- | --- |
| Transaction history | Past deals, clients, outcomes, patterns | [CRM / files / memory] | [structured / unstructured] | [clean / messy / partial] | [rows / documents] |
| Policy documents | Rules governing decisions, exceptions, non-negotiables | [wiki / drive / email] | [doc / email / informal] | [current / outdated / missing] | [count] |
| Institutional logic | How decisions get made — the unwritten rules | [in people's heads] | [tacit] | [not yet encoded] | |
| Exception library | Edge cases and how they were handled | [tribal knowledge / email threads] | [unstructured] | [partial] | |
| Terminology | Domain-specific language, acronyms, internal shorthand | [glossary or none] | | | |

Operational Context

| Context Type | Where It Lives | Format | Quality | Action Needed |
| --- | --- | --- | --- | --- |
| Client/customer profiles | | | | |
| Product or service catalogue | | | | |
| Pricing rules and discount logic | | | | |
| Approval thresholds and escalation rules | | | | |
| Regulatory and compliance constraints | | | | |
| Competitor and market reference data | | | | |

2. Quality Assessment

Rate each context source on two dimensions: completeness and reliability.

| Context Source | Completeness (% of what's needed exists) | Reliability (% accurate / current) | Gap |
| --- | --- | --- | --- |
| | [%] | [%] | [what's missing or stale] |
| | [%] | [%] | |
| | [%] | [%] | |

Minimum viable context threshold: What is the minimum context completeness at which the AI system produces acceptable quality? Below this threshold, the system should not be deployed — it performs like a junior hire with no experience, which is worse than having a human do the work.

Minimum threshold: [X]% completeness, [X]% reliability
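
A minimal sketch of how this gate can be made mechanical. The source names, scores, and the 0.80 / 0.90 minimums below are illustrative assumptions, not values from this blueprint; substitute the thresholds agreed above.

```python
from dataclasses import dataclass

@dataclass
class ContextSource:
    name: str
    completeness: float  # fraction of what's needed that exists (0.0-1.0)
    reliability: float   # fraction that is accurate and current (0.0-1.0)

# Illustrative minimums -- substitute the [X]% values agreed above.
MIN_COMPLETENESS = 0.80
MIN_RELIABILITY = 0.90

def deployment_ready(sources: list[ContextSource]) -> bool:
    """True only if every context source clears both minimum bars."""
    return all(
        s.completeness >= MIN_COMPLETENESS and s.reliability >= MIN_RELIABILITY
        for s in sources
    )

sources = [  # illustrative scores from the assessment table
    ContextSource("transaction_history", completeness=0.85, reliability=0.92),
    ContextSource("policy_documents", completeness=0.60, reliability=0.95),
]

if not deployment_ready(sources):
    print("Below minimum viable context threshold: do not deploy.")
```

This treats the threshold as a per-source bar rather than an average; one hollow source (here, policy documents at 60% completeness) blocks deployment even when the rest are strong.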


3. Context Loading Plan

For each gap identified above, define the work required to load it.

| Context Gap | Method to Close | Owner | Effort | Priority |
| --- | --- | --- | --- | --- |
| Tacit knowledge in senior expert's head | Structured interview + business logic document | [name] | [days] | HIGH / MED / LOW |
| Transaction history in unstructured format | Data extraction + cleaning + structuring | [name] | [days] | |
| Policy documents scattered across drives | Audit + consolidation + versioning | [name] | [days] | |
| Exception library — tribal knowledge | Workshop + documentation sprint | [name] | [days] | |
| Outdated records | Verification sprint + update cycle | [name] | [days] | |

Total context loading effort: [X days / weeks]

This effort is part of the build cost in the AI ROI Model. It is commonly underestimated by 50–80% because leaders do not see "loading context" as visible work: a plan that sums to ten days routinely takes fifteen to eighteen.


4. Context Maintenance Protocol

Context loaded once degrades. Institutional logic changes. Policies update. New exceptions emerge. Define how context stays current.

| Context Type | Update Trigger | Update Method | Owner | Cadence |
| --- | --- | --- | --- | --- |
| Policy documents | Policy change | Version-controlled update | [role] | On change |
| Transaction history | New deal / client | Automated or manual logging | [role] | Continuous |
| Institutional logic | New exception handled | Post-mortem → exception library update | [role] | After each edge case |
| Pricing and product | Change in offering | Catalogue update | [role] | On change |
| Regulatory constraints | Regulation change | Compliance audit → update | [role] | Quarterly |

Staleness indicator: When was the context last verified? Any source not verified within [X months] should be flagged as potentially unreliable.
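
A minimal sketch of that staleness check, assuming a six-month window; the source names and verification dates are illustrative.

```python
from datetime import date, timedelta

# Illustrative window -- substitute the agreed [X months].
STALENESS_WINDOW = timedelta(days=182)  # roughly six months

last_verified = {  # illustrative sources and verification dates
    "policy_documents": date(2025, 1, 15),
    "pricing_rules": date(2024, 3, 1),
    "exception_library": date(2025, 6, 10),
}

today = date.today()
for source, verified_on in last_verified.items():
    if today - verified_on > STALENESS_WINDOW:
        print(f"FLAG: {source} last verified {verified_on}; treat as potentially unreliable")
```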


5. Human Escalation Boundaries

Context architecture defines where the AI operates. It also defines where it must stop and escalate to a human.

| Scenario | Why AI cannot handle it | Escalation Path |
| --- | --- | --- |
| Novel situation with no historical precedent | No context exists to pattern-match against | → Senior expert |
| Exception outside defined rules | Business logic document has no rule for this case | → Exception handler |
| Decision above approval threshold | Threshold defined in policy; human must sign off | → Approver |
| Emotionally sensitive client interaction | Relationship context requires human judgment | → Relationship owner |
| Regulatory edge case | Compliance risk requires human accountability | → Legal / compliance |

The escalation boundaries are not a failure of the AI system. They are the architecture. They define the job of every senior person post-transformation.
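
A minimal sketch of how these boundaries might be encoded as a pre-check in front of the AI system; the case flags, the 50,000 approval threshold, and the owner names are illustrative assumptions mapped from the table above.

```python
APPROVAL_THRESHOLD = 50_000  # illustrative; take the real value from policy

def escalation_path(case: dict) -> str | None:
    """Return the named human for out-of-boundary cases, else None."""
    if case.get("no_precedent"):            # novel situation, nothing to pattern-match
        return "senior expert"
    if case.get("outside_defined_rules"):   # business logic document has no rule
        return "exception handler"
    if case.get("amount", 0) > APPROVAL_THRESHOLD:
        return "approver"
    if case.get("emotionally_sensitive"):   # relationship needs human judgment
        return "relationship owner"
    if case.get("regulatory_edge_case"):    # compliance risk needs accountability
        return "legal / compliance"
    return None  # inside the boundary: AI proceeds

owner = escalation_path({"amount": 75_000})
print(owner or "AI handles this case")  # -> approver
```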


6. Context Quality Gates

Before the AI system goes live, verify:

  • Transaction history covers at least [X months / X transactions] — enough for pattern recognition
  • Policy documents are current — verified within the last [X months]
  • Business logic document exists and has been validated by the senior expert who will sign off on AI output
  • Exception library covers the top [X] edge cases — defined by reviewing last [X] months of exceptions
  • Escalation paths are named — every boundary has a named human
  • Context completeness ≥ minimum viable threshold defined in Section 2
  • Maintenance protocol is assigned — every context type has an owner and update trigger

Any unchecked box is a deployment blocker, not a nice-to-have.
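
A minimal sketch that treats the checklist as hard gates; the gate names mirror the boxes above, and the pass/fail values are illustrative.

```python
# Illustrative go-live check: every gate must pass, or deployment stops.
gates = {
    "transaction_history_covers_minimum": True,
    "policy_documents_verified_recently": True,
    "business_logic_validated_by_expert": False,  # illustrative failure
    "exception_library_covers_top_cases": True,
    "every_boundary_has_named_human": True,
    "completeness_meets_threshold": True,
    "maintenance_owner_per_context_type": True,
}

blockers = [gate for gate, passed in gates.items() if not passed]
if blockers:
    raise SystemExit(f"Deployment blocked by: {', '.join(blockers)}")
print("All context quality gates passed: clear to go live.")
```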



Questions

  • What context does your most senior expert carry in their head that does not exist in any document — and what happens to your AI system if they leave before it is loaded?
  • Which of your context sources is the most out of date — and what decision is currently being made against stale information?
  • What is the minimum viable context threshold below which your AI system is worse than a junior hire?
  • When the senior expert is no longer available to validate AI output, how do you know the context is still accurate?