Context Flow

Three densities, one vocabulary, one closed loop. This page explains the engineering — how meaning travels from a paragraph in /docs/ to a wire-format token between two agents over Convex, and back again as a receipt.

A P&ID is dense because the reader holds the legend sheet in working memory. Same contract here. CONTEXT.md is the legend at human density. The DML is the legend at wire density. Without the legend, the symbols are noise. With it, seven tokens replace a paragraph at ~40:1 compression.
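The decode direction can be sketched in a few lines. The legend map below is hypothetical; the real codes and expansions live in _llm-meta-language.md, and only the shape of the mapping is the point:

```typescript
// Hypothetical legend: illustrative codes and expansions, not the
// real DML legend from _llm-meta-language.md.
const legend: Record<string, (v: string) => string> = {
  AGENT: (v) => `agent ${v}`,
  FUNC: (v) => `is running function ${v}`,
  COND: (v) => `under condition ${v}`,
};

// Decode one wire line like "@AGENT:DA|@FUNC:G|@COND:[F]" into prose.
function decode(wire: string): string {
  return wire
    .split("|")
    .map((tok) => {
      const m = tok.match(/^@(\w+):(.+)$/);
      if (!m) throw new Error(`malformed token: ${tok}`);
      const [, key, value] = m;
      const expand = legend[key];
      if (!expand) throw new Error(`no legend entry for @${key}`);
      return expand(value);
    })
    .join(" ");
}

console.log(decode("@AGENT:DA|@FUNC:G|@COND:[F]"));
// "agent DA is running function G under condition [F]"
```

Without the legend map, the same string is opaque symbols; with it, seven wire tokens round-trip back to a sentence.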

The Three Densities

Each layer compresses the level above for a different reader, and assumes that level's context is already loaded.

1× density

Verbose narrative

/docs/ · /meta/ · /src/pages/

Paragraphs, examples, links. Error-corrected for human skim — redundancy IS the feature, not waste.

Reader: anyone learning the platform

~10× density

Human shorthand

/CONTEXT.md

One sentence per term, canonical name, avoid aliases. The shared mental model — read in 5 minutes, decoded for life.

Reader: Wik, planning sessions, grill-with-docs

~100× density

DML wire format

_llm-meta-language.md

@AGENT:DA|@FUNC:G|@COND:[F] — token-tight, decodable by any agent that holds the legend. Pays for itself on every Convex hop.

Reader: agent-to-agent over Convex

Below the wire: pure binary. The same meaning is all the silicon ever sees, but at that level no human reads it.

The Closed Loop

Compression flows downward at session boot. Receipts flow upward at session close. The full loop compounds — every receipt sharpens semantic memory; every doc edit re-ranks the next pack.

The loop has three feedback edges (dotted): semantic memory writes back to the seeds (pack composition), the canonical terms (CONTEXT.md), and the wire codes (DML). Drift between those three layers is the kill signal — the legend has stopped matching the territory.
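That drift check can be mechanized. A minimal sketch, with hardcoded stand-ins for the parsed term and code sets; a real check would read /CONTEXT.md and _llm-meta-language.md rather than literals:

```typescript
// Drift between two legend layers: terms the wire cannot carry,
// and wire codes no canonical term backs. Either list non-empty
// is the kill signal.
interface DriftReport {
  termsWithoutCodes: string[]; // in CONTEXT.md, missing from DML
  codesWithoutTerms: string[]; // in DML, missing from CONTEXT.md
}

function driftCheck(terms: Set<string>, codes: Map<string, string>): DriftReport {
  const coded = new Set(codes.values());
  return {
    termsWithoutCodes: [...terms].filter((t) => !coded.has(t)),
    codesWithoutTerms: [...coded].filter((t) => !terms.has(t)),
  };
}

// Stand-in data: "Receipt" has no wire code; "@LOOP" has no term.
const report = driftCheck(
  new Set(["Agent", "Memory", "Receipt"]),
  new Map([["@AGENT", "Agent"], ["@MEM", "Memory"], ["@LOOP", "Loop"]]),
);
```

Running this at session close, alongside the receipt write, turns "the legend has stopped matching the territory" from a judgment call into a diff.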

Cross-Domain Conflicts

The same word means different things in different contexts. CONTEXT.md picks a canonical reading; the cascade enforces it.

| Term | AI domain | Business domain | Crypto domain | Canonical here |
| --- | --- | --- | --- | --- |
| Agent | Autonomous LLM-driven process | A person acting on behalf of another | Key signer / wallet | AI sense; qualify person as Human Agent (HA) |
| Memory | Persistent agent state across sessions | Recall, knowledge | RAM / disk | AI sense; always qualify tier (working / episodic / semantic / procedural) |
| Token | Text unit consumed by the model | n/a | Digital asset (ERC-20, NFT, SPL) | Always qualify: LLM token vs crypto token |
| Loop | Generic feedback cycle | Customer journey loop | n/a | VVFL when unqualified; qualify code-loops as for/while |
| Pipeline | CI/CD or ETL | Sales funnel | n/a | Avoid bare; use Workchart for our spine+task system |

Without the canonical pick, a paragraph that mentions "agent" forces every reader to disambiguate. With it, the meaning resolves on first read.
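The canonical pick is mechanizable as a lookup. The entries below paraphrase the table above; the resolve helper itself is an illustrative sketch, not an existing tool:

```typescript
// Illustrative canonical-term table; entries paraphrase the
// conflicts table on this page.
type Canonical = { sense: string; qualifyWhen?: string };

const canon: Record<string, Canonical> = {
  agent: {
    sense: "autonomous LLM-driven process",
    qualifyWhen: "a person: say Human Agent (HA)",
  },
  token: {
    sense: "text unit consumed by the model",
    qualifyWhen: "a digital asset: say crypto token",
  },
  memory: {
    sense: "persistent agent state across sessions",
    qualifyWhen: "always name the tier",
  },
};

function resolve(word: string): string {
  const entry = canon[word.toLowerCase()];
  if (!entry) return `${word}: no canonical pick; resolve it via grill-with-docs`;
  return entry.qualifyWhen
    ? `${word} = ${entry.sense} (otherwise qualify: ${entry.qualifyWhen})`
    : `${word} = ${entry.sense}`;
}
```

A missing entry is itself a signal: the term belongs in the next grill-with-docs session, not in ad-hoc prose.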

When to Use Which Layer

| Need | Read this | Why |
| --- | --- | --- |
| Learn the platform from scratch | /docs/ | Verbose, navigable, error-corrected |
| Resolve a naming question mid-session | /CONTEXT.md | One sentence per term + avoid list |
| Boot an agent for a new session | Pack from context-graph pack --agent=<name> | PageRank-ranked top-N pages biased to the agent's profile |
| Send a status to another agent over the wire | DML token from _llm-meta-language.md | Token-efficient, decodable by any agent that holds the key |
| Audit what happened in a session | /.invisible/context/receipts/ | Structured proof of decision events |
| Decode a pack line or A&ID symbol | DML decoder | The legend sheet |

Two Audiences, One Vocabulary

Pocock's grill-with-docs ends at human shorthand because the loop is human ↔ LLM ↔ human. The Dreamineering loop adds agent ↔ agent over Convex, where token cost is real on every hop. That extra hop demands the wire format.

human ←→ agent                     Pocock's loop (CONTEXT.md sufficient)

human ←→ agent ←→ agent            Dreamineering (CONTEXT.md + DML)
               ←→ agent
               ←→ agent
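The hop arithmetic behind that claim is simple. The token counts below are assumed, chosen only to keep this page's ~40:1 figure:

```typescript
// Back-of-envelope hop cost. Every agent-to-agent hop over Convex
// pays the full message size, so total cost is linear in hops.
function totalTokens(tokensPerMessage: number, hops: number): number {
  return tokensPerMessage * hops;
}

const verbose = 280; // a paragraph-sized status (assumed)
const wire = 7;      // a seven-token DML line, 40:1 vs the paragraph

console.log(totalTokens(verbose, 1)); // 280: Pocock's single hop
console.log(totalTokens(verbose, 3)); // 840: verbose across three hops
console.log(totalTokens(wire, 3));    // 21: same meaning on the wire
```

At one hop the compression barely matters; at three or more it dominates, which is why Pocock's loop stops at CONTEXT.md and the multi-agent loop does not.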

The CONTEXT.md term and the DML code point at the same canonical concept. Add a term in one place, add the code in the other. Drift is the kill signal.

Where to Update What

| Change | File | Trigger |
| --- | --- | --- |
| New canonical term | /CONTEXT.md | grill-with-docs session resolves a term |
| New DML code | /.invisible/context/_llm-meta-language.md | A new term needs wire transmission |
| New domain dialect | /docs/{domain}/.../glossary.md | A domain's internal language emerges |
| Naming rule for a new artifact type | /docs/standards/naming-standards.md | New kind of file/folder appears |
| New A&ID symbol | Symbol library | Diagrammatic notation evolves |
| Re-rank pack seeds | /scripts/context-graph/profiles/*.json | Agent attention shifts |
| Receipt schema | /.claude/rules/agent-receipt-schema.md | Decision-event structure changes |

Why This Matters

A platform that compresses without a decoder produces gibberish at scale. A platform that decodes without compressing burns tokens on every wire hop. The cascade above is the engineering answer: redundancy where it earns trust, density where it earns bandwidth, one vocabulary across both.

The cost of the system is a pair of files you have to keep in sync — CONTEXT.md for humans, mirrored by _llm-meta-language.md on the wire. The benefit is that every agent session starts knowing what every other agent meant the last time it spoke.

Questions

How do you know the legend still matches the territory — and what fires when it doesn't?

  • If a docs paragraph and a CONTEXT.md term disagree, which wins — and how do you detect the disagreement before it ships?
  • When does adding redundancy (a longer sentence) compound trust faster than removing it (a shorter symbol)?
  • Which audience pays the highest token cost in your current loop — and is the compression you offer them earning its place?