Context Graphs
How do agents know WHY something was allowed, not just WHAT happened?
The Problem
Traditional systems store current state. Your CRM knows the deal closed at 20% discount. It doesn't know:
- Who approved the deviation
- What precedent justified it
- Which policy version applied
- What context informed the decision
Agents need this missing layer. Without it, every decision starts from zero.
See The Mycelium for the full argument.
Definition
A context graph is a queryable record of decision traces—the reasoning that connects inputs to outputs across systems and time.
| Term | What It Is |
|---|---|
| Decision Trace | One record: inputs + context + policy + exception + approver + outcome |
| Context Graph | Many traces linked by entities (accounts, policies, people, precedents) |
| Precedent Link | Connection between current decision and similar past decisions |
The graph grows with every decision. Agents query it for guidance.
Matrix Thinking Link
Matrix thinking makes invisible structure visible to humans.
Context graphs make invisible structure visible to machines.
| | Matrix Thinking | Context Graph |
|---|---|---|
| User | Human | Agent |
| Input | Mental models | Decision events |
| Output | Gaps to fill | Precedent to query |
| Format | Visual (2x2s, tables) | Data (nodes, edges) |
Same principle. Different substrate.
See Matrix Thinking for the full comparison.
Architecture
The Stack
┌─────────────────────────────────────┐
│ AI AGENT │ Queries context, makes decisions
├─────────────────────────────────────┤
│ CONTEXT GRAPH │ Stores decision traces, precedent
├─────────────────────────────────────┤
│ ORCHESTRATION LAYER │ Captures traces at commit time
├─────────────────────────────────────┤
│ SYSTEMS OF RECORD │ Salesforce, Workday, etc.
└─────────────────────────────────────┘
Key insight: The orchestration layer sits in the write path. It sees full context at decision time. Systems of Record only see final state.
What Gets Captured
Each decision trace includes:
| Element | Question | Example |
|---|---|---|
| Inputs | What triggered this? | CAC exceeded threshold |
| Context | What else was considered? | Q4 push, new market |
| Policy | Which rule applied? | Marketing spend policy v2.1 |
| Exception | What deviation? | 15% overspend approved |
| Approver | Who signed off? | CFO, 2024-01-15 |
| Precedent | Similar past case? | Q2 market entry |
| Outcome | What happened? | CAC normalized by Q1 |
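The seven elements above can be sketched as a single record. A minimal sketch in Python; the field names and example values follow the table, but this is an illustrative schema, not a standard:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DecisionTrace:
    """One decision trace: the seven elements from the table above."""
    inputs: str                    # what triggered this
    context: list[str]             # what else was considered
    policy: str                    # which rule applied
    exception: Optional[str]       # deviation, if any
    approver: Optional[str]        # who signed off, and when
    precedent: Optional[str]       # id of a similar past trace
    outcome: Optional[str] = None  # filled in later, once known

trace = DecisionTrace(
    inputs="CAC exceeded threshold",
    context=["Q4 push", "new market"],
    policy="marketing-spend-v2.1",
    exception="15% overspend approved",
    approver="CFO, 2024-01-15",
    precedent="q2-market-entry",
)
```

Note that `outcome` is optional with a default: the outcome is usually only known later, so the trace is written at decision time and completed afterwards.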
Graph Structure
Entities (nodes):
- Accounts, deals, tickets, policies, people, agent runs
Relationships (edges):
triggeredBy, approvedBy, precedentFor, exceptionTo, resultedIn
Query examples:
- "What precedents exist for healthcare discounts > 15%?"
- "Who has approved exceptions to policy v2.1?"
- "What decisions affected this account in the last 90 days?"
Implementation Patterns
Pattern 1: Orchestration Capture
The agent orchestration layer emits decision traces automatically.
Agent receives request
→ Gathers context from multiple systems
→ Evaluates policies
→ Routes exceptions for approval
→ Executes action
→ Emits decision trace ← THIS IS THE CAPTURE POINT
→ Updates Systems of Record
Who does this: Agent platforms (custom or vendor) that instrument the orchestration layer.
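The flow above can be sketched as one orchestration function with the capture point inline. Each step is injected as a callable; the names are illustrative, not a vendor API:

```python
def orchestrate(request, gather, evaluate, execute, emit_trace, update_sor):
    """Run one agent decision and emit its trace before updating the SoR."""
    context = gather(request)              # from multiple systems
    decision = evaluate(request, context)  # policies + exception routing
    result = execute(decision)
    emit_trace({                           # <- the capture point
        "request": request,
        "context": context,
        "decision": decision,
        "result": result,
    })
    update_sor(result)                     # System of Record sees final state only
    return result

# Stubbed usage: the trace list stands in for the context graph write.
traces = []
out = orchestrate(
    request={"kind": "discount"},
    gather=lambda r: {"crm": "deal-42"},
    evaluate=lambda r, c: "approve",
    execute=lambda d: {"status": "done"},
    emit_trace=traces.append,
    update_sor=lambda res: None,
)
```

The point the key insight makes is visible here: `emit_trace` sees request, context, and decision together; `update_sor` sees only the result.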
Pattern 2: Workflow Instrumentation
Existing workflow tools emit traces at decision points.
Workflow step completes
→ Check: was this a decision point?
→ If yes: capture inputs, policy, approver, outcome
→ Write to context graph
Who does this: Teams adding instrumentation to existing automation.
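A hypothetical sketch of this pattern: a decorator that marks a workflow step as a decision point and appends a trace when the step completes. The in-memory `TRACE_LOG` stands in for the context-graph write:

```python
import functools

TRACE_LOG: list[dict] = []  # stand-in for the context graph write

def decision_point(policy: str):
    """Mark a workflow step as a decision point governed by a policy."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(**inputs):
            outcome = fn(**inputs)
            TRACE_LOG.append({       # capture inputs, policy, outcome
                "step": fn.__name__,
                "inputs": inputs,
                "policy": policy,
                "outcome": outcome,
            })
            return outcome
        return inner
    return wrap

@decision_point(policy="marketing-spend-v2.1")
def approve_overspend(amount_pct: float, approver: str) -> str:
    # Illustrative rule: within 15% is approvable, beyond that escalate.
    return "approved" if amount_pct <= 15 else "escalated"

approve_overspend(amount_pct=15, approver="CFO")
```

Only decorated steps emit traces, which enforces the "only capture decision points" rule from the anti-patterns table below.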
Pattern 3: Human-in-the-Loop Capture
Humans document decisions as they make them.
Human makes decision
→ Records reasoning in structured format
→ Links to precedent if applicable
→ Names exception if applicable
→ Context graph updated
Who does this: Teams before full automation—building the graph manually.
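One way to keep manual capture usable is to validate the structured format at entry time, so every trace written by a human is still queryable later. A small sketch, with required fields assumed from the capture table above:

```python
REQUIRED = {"inputs", "policy", "approver", "outcome"}

def record_decision(entry: dict) -> dict:
    """Reject entries missing required fields so the graph stays queryable."""
    missing = REQUIRED - entry.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    entry.setdefault("precedent", None)  # link to precedent if applicable
    entry.setdefault("exception", None)  # name the exception if applicable
    return entry
```

Defaulting `precedent` and `exception` to explicit nulls, rather than omitting them, makes "decisions with no precedent" a queryable set too.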
See Process Optimisation → Process ↔ Context for how this compounds.
The Feedback Loop
Context graphs create a virtuous cycle:
Decision made
↓
Trace captured
↓
Graph updated
↓
Agent queries graph for next decision
↓
Better decision (informed by precedent)
↓
Trace captured...
Each cycle adds to the graph. The system learns.
Without the graph: Same problem solved repeatedly. With the graph: Exceptions become precedent. Precedent becomes policy.
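The cycle can be sketched end to end: a decision function that queries the graph for precedent before deciding and emits its own trace afterwards. All names and outcomes are illustrative:

```python
graph: list[dict] = []  # accumulated decision traces

def decide(request: dict) -> dict:
    # 1. Query the graph for precedent before deciding.
    similar = [t for t in graph if t["kind"] == request["kind"] and t["outcome"]]
    if similar:
        outcome, precedent = similar[-1]["outcome"], similar[-1]["id"]
    else:
        outcome, precedent = "escalate-to-human", None
    # 2. Emit the trace so the next decision is better informed.
    trace = {"id": len(graph), "kind": request["kind"],
             "outcome": outcome, "precedent": precedent}
    graph.append(trace)
    return trace

first = decide({"kind": "discount>15%"})    # no precedent: escalate
graph[first["id"]]["outcome"] = "approved"  # a human resolves the escalation
second = decide({"kind": "discount>15%"})   # now informed by precedent
```

The second decision resolves without human involvement because the first one's resolution became precedent: the exception-to-precedent step of the cycle.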
Standards as Glue
Context graphs need standards to work:
| Layer | Standard Needed | Why |
|---|---|---|
| Schema | Common entity definitions | Agents must understand what "account" means |
| Events | Decision trace format | Consistent structure for capture and query |
| Policies | Machine-readable rules | Agents check policies programmatically |
| Identity | Approver verification | Who signed off must be verifiable |
See Standards for the P&C framework.
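As a sketch of the machine-readable-rules row: a policy expressed as data an agent can check programmatically. The schema here is invented for illustration, not a proposed standard:

```python
from typing import Optional

# A policy as data rather than prose.
POLICY = {
    "id": "marketing-spend-v2.1",
    "rule": {"field": "overspend_pct", "max": 10},
    "exception": {"max": 15, "requires_approver": "CFO"},
}

def check(policy: dict, value: float, approver: Optional[str] = None) -> str:
    """Three outcomes: within the rule, within an approved exception, or denied."""
    if value <= policy["rule"]["max"]:
        return "allowed"
    exc = policy["exception"]
    if value <= exc["max"] and approver == exc["requires_approver"]:
        return "allowed-as-exception"
    return "denied"
```

Because the exception band and its required approver are part of the policy itself, the trace that records an exception can point at the exact clause it invoked.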
Ownership Question
Current enterprise software captures decision traces in vendor platforms. Whoever runs the orchestration owns the graph.
| Model | Who Owns the Graph |
|---|---|
| Vendor SaaS | The platform you picked |
| Self-hosted | You (but you maintain it) |
| Decentralized | You, on verifiable infrastructure |
The ABCD stack suggests an alternative:
- AI runs the decision
- Blockchain anchors traces immutably
- Crypto aligns incentives for honest capture
- DePIN distributes storage
This is decision sovereignty: owning the reasoning, not just the data.
On-Chain Traces
When links carry on-chain metadata, every action through the pipe creates an immutable trace. The difference between a decision trace in a vendor's database and one anchored on-chain is falsifiability — the on-chain version cannot be altered after the fact.
| Substrate | Queryable | Auditable | Who Controls |
|---|---|---|---|
| Vendor database | Yes | No | The vendor |
| Self-hosted DB | Yes | Depends | You |
| On-chain | Yes | Yes | Nobody — verify yourself |
Agents learning from an on-chain context graph learn from what actually happened. Not a curated version. Not a log someone could edit. The gauge reads reality.
Verifiable Intent provides the authorization chain. On-chain settlement provides the permanent trace. Together they close the loop: every agent action is bounded by human-approved scope AND recorded as consequence to learn from.
Getting Started
Level 1: Manual Capture
Before you build anything:
- Pick one decision type that repeats (e.g., discount approvals)
- Create a simple log: inputs, policy, exception, approver, outcome
- Link to precedent when you reference past decisions
- Review monthly: what patterns emerge?
Level 2: Workflow Instrumentation
Add capture points to existing automation:
- Identify decision points in workflows
- Add structured logging at each point
- Store in queryable format (database, graph DB)
- Build simple queries: "show me similar past decisions"
Level 3: Agent Integration
Give agents access to the graph:
- Expose graph via API
- Agent queries precedent before deciding
- Agent emits trace after deciding
- Graph grows automatically
Anti-Patterns
| Anti-Pattern | Problem | Fix |
|---|---|---|
| Capturing everything | Noise drowns signal | Only capture decision points |
| No schema | Can't query consistently | Define entities and relationships upfront |
| No precedent links | Isolated traces, no learning | Require "similar to" field |
| Post-hoc capture | Context already lost | Capture at commit time, not after ETL |
Context
Conceptual:
- Matrix Thinking — Human UI for making invisible visible
- The Mycelium — The article selling this idea
- Process Optimisation — Process ↔ Context loop
- Situational Wisdom — What context graphs produce in a decision-maker over time
- Evolution — The biological template: variation + selection + retention. Context graphs are the retention mechanism for machine agents.
Technical:
- AI Agents — The actors that query the graph
- AI Frameworks — Building agent infrastructure
- Standards — The glue that makes graphs queryable
- Intelligent Hyperlinks — The third-generation pipe carries intent; context graphs are the memory layer that makes intent non-naive
Questions
- Which engineering decision related to this topic has the highest switching cost once made — and how do you make it well with incomplete information?
- At what scale or complexity level does the right answer to this topic change significantly?
- How does the introduction of AI-native workflows change the conventional wisdom about this technology?
- Which anti-pattern in this area is most commonly introduced by developers who know enough to be dangerous but not enough to know what they don't know?