# Standard Data Unit
What is the unit of organizational learning?
Not the dashboard. Not the metric. The atom — the smallest thing you could put a token on that still means something.
## The IntentTrace
Every feedback loop runs the same pattern: something set an intention, something acted, something measured the outcome, and the result either raised or lowered the floor. That sequence — sealed as a single record — is the unit.
`IntentTrace = { intent_ref → action_hash → outcome → delta → verified }`
| Field | What it holds | Why it matters |
|---|---|---|
| intent_ref | What authorized this: a goal, plan, or standard | Without intent, action has no frame |
| action_hash | What actually happened: a commit, completion, or artifact | Without action, intent is a wish |
| outcome.score | How well the action met the standard | Without measurement, improvement is guesswork |
| outcome.standard_ref | Which standard was the gauge | Without a named standard, scores are incomparable |
| outcome.delta | Change versus the prior run | Without delta, you can't prove the loop improved |
| verified | Did it pass the gauge? | The seal. Verified traces compound. Unverified ones don't. |
One IntentTrace = one iteration of the loop. Sealed.
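The sealed record can be sketched as a frozen dataclass. This is a minimal sketch of the shape only; the field values (`PRD-042`, `code-review-v3`, and so on) are illustrative assumptions, not names from any real system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)       # frozen: a trace is sealed once written
class Outcome:
    score: float              # how well the action met the standard
    standard_ref: str         # which standard was the gauge
    delta: float              # change versus the prior run

@dataclass(frozen=True)
class IntentTrace:
    intent_ref: str           # what authorized this (goal, plan, standard)
    action_hash: str          # what actually happened (commit, artifact)
    outcome: Outcome
    verified: bool            # did it pass the gauge?

# One iteration of the loop, sealed as one record.
trace = IntentTrace(
    intent_ref="PRD-042",
    action_hash="a1b2c3d",
    outcome=Outcome(score=0.92, standard_ref="code-review-v3", delta=0.05),
    verified=True,
)
```

Freezing the dataclass makes the "sealed" property mechanical: any attempt to mutate a written trace raises an error.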
## The Insight
Most organizations track outputs. Shipping frequency. Task count. Lines of code. These are action without intent, or measurement without a named standard. They can't compound.
The IntentTrace binds all five. When a run completes with all five fields populated, it contributes to a body of verified evidence. That body is the capability. The density of verified traces in a domain is demonstrable competence — something you could prove to an auditor, a partner, or an agent inheriting the work.
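"Density of verified traces" can be read as a simple ratio: the share of a domain's traces that carry the verified seal. A hedged sketch, using plain dicts for the trace records:

```python
def verified_density(traces):
    """Fraction of a domain's traces that are sealed as verified."""
    if not traces:
        return 0.0
    return sum(1 for t in traces if t["verified"]) / len(traces)

# Illustrative domain: 2 of 3 traces verified.
domain = [
    {"intent_ref": "PRD-1", "verified": True},
    {"intent_ref": "PRD-2", "verified": True},
    {"intent_ref": "PRD-3", "verified": False},
]
density = verified_density(domain)
```

The ratio is what you could show an auditor: not activity volume, but the proportion of activity that passed a named gauge.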
## What Maps to IntentTrace
Any event where an agent acted under authorization maps to this shape:
| Event Type | intent_ref | action_hash | Standard used |
|---|---|---|---|
| Skill execution | Skill name or PRD | Receipt file path | Skill quality gates |
| Plan task completion | Plan or project ID | Task ID | Plan quality criteria |
| Code merge | PRD ID in PR body | Commit SHA | Code review standard |
| Content publication | Brief or editorial standard | Article path | Content quality gates |
| Infrastructure check | Monitoring target | Health report | SLA threshold |
The event types are different. The shape is the same. That's what makes aggregation possible.
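That aggregation step can be sketched as a normalizer: each event type supplies its own source fields, but every event lands in the same trace shape. The event field names and standard names below are assumptions for illustration, following the table above (only three of the five event types are shown):

```python
def to_trace(event):
    """Map a raw event dict into the common IntentTrace shape."""
    mappers = {
        # event type -> (intent_ref, action_hash, standard_ref)
        "skill_execution": lambda e: (e["skill_name"], e["receipt_path"], "skill-quality-gates"),
        "plan_task":       lambda e: (e["plan_id"], e["task_id"], "plan-quality-criteria"),
        "code_merge":      lambda e: (e["prd_id"], e["commit_sha"], "code-review-standard"),
    }
    intent_ref, action_hash, standard_ref = mappers[event["type"]](event)
    return {
        "intent_ref": intent_ref,
        "action_hash": action_hash,
        "outcome": {"standard_ref": standard_ref},
        "verified": event.get("verified", False),
    }

merge = to_trace({
    "type": "code_merge",
    "prd_id": "PRD-042",
    "commit_sha": "a1b2c3d",
    "verified": True,
})
```

Once normalized, skill runs, task completions, and merges can sit in one store and be counted with the same density function.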
## Connection to Verification
A verification layer proves: "Did the agent do what was authorized?" That question can only be answered mechanically if the IntentTrace exists. Without intent_ref, there's no authorization record to compare against. Without action_hash, there's no proof of what ran. Without standard_ref, there's no gauge to verify against.
The IntentTrace IS the instrument. The verification layer is the engineer reading it.
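The mechanical check described above can be sketched as a single function: a trace verifies only if its binding fields are present and its score clears the threshold of the named standard. The threshold table is an assumption; real gauges would come from the standards themselves.

```python
# Assumed gauge per standard_ref (illustrative values).
THRESHOLDS = {"code-review-standard": 0.8}

def verify(trace):
    """Mechanically answer: did the agent do what was authorized?"""
    # No intent_ref: no authorization record. No action_hash: no proof of
    # what ran. No outcome: nothing to read off the instrument.
    if any(not trace.get(field) for field in ("intent_ref", "action_hash", "outcome")):
        return False
    outcome = trace["outcome"]
    standard = outcome.get("standard_ref")
    if standard not in THRESHOLDS:
        return False   # unnamed standard: the score is incomparable
    return outcome.get("score", 0.0) >= THRESHOLDS[standard]

sealed = verify({
    "intent_ref": "PRD-042",
    "action_hash": "a1b2c3d",
    "outcome": {"standard_ref": "code-review-standard", "score": 0.92},
})
unauthorized = verify({
    "intent_ref": "",   # no authorization record
    "action_hash": "a1b2c3d",
    "outcome": {"standard_ref": "code-review-standard", "score": 0.92},
})
```

Note that the function never exercises judgment; it only reads fields the trace already carries, which is the sense in which the trace is the instrument.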
## The Floor Principle
Verified traces don't disappear when a session ends. They become persisted state — the floor the next agent builds on. A domain with 50 verified IntentTraces has a compounding advantage over a domain with none. The agent inheriting the work can see what standards were applied, what delta was achieved, and what ran without manual intervention.
This is how loops compound. Not through documentation. Through sealed, verified iterations.
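A sketch of the floor arithmetic, under the assumption that each verified trace's delta accumulates while unverified traces contribute nothing:

```python
def floor_after(traces, baseline=0.0):
    """The floor the next agent inherits: verified deltas only."""
    floor = baseline
    for t in sorted(traces, key=lambda t: t["iteration"]):
        if t["verified"]:
            floor += t["outcome"]["delta"]
    return floor

history = [
    {"iteration": 1, "verified": True,  "outcome": {"delta": 0.10}},
    {"iteration": 2, "verified": False, "outcome": {"delta": 0.50}},  # claimed, never sealed
    {"iteration": 3, "verified": True,  "outcome": {"delta": 0.05}},
]
floor = floor_after(history)  # only the verified deltas raise the floor
```

The unverified iteration claims the largest delta but moves the floor not at all, which is the compounding asymmetry the section describes.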
## Context
- Naming Standards — IntentTrace as a formal ontology entry; naming as measurement infrastructure
- Verifiable Intent — The protocol IntentTrace instruments
- VVFL — The loop IntentTrace measures — one trace per iteration
- Standard Notation — Pack format and A&ID symbol language for encoding traces
- Scoreboard — Where density of verified traces becomes visible performance
## Questions
- What is the oldest loop in your system that runs with no standard_ref, no named gauge against which outcomes are compared?
- If you removed all unverified traces, what percentage of your claimed progress would remain?
- What would it take to promote a practice into a standard — and therefore make future traces comparable across agents and time?
- When the next agent inherits this domain, what evidence exists that the loop improved under the previous agent?
- Which gaps in your IntentTrace coverage are naming failures — and which are genuine unknowns?