Worked Examples
Phygital Loop
Three agent types, four instrument families, two feedback instruments, typed decisions at arbitration points.
Reading Example 1
| Code | Reads As |
|---|---|
| HA[L3] | Human Agent, authority level 3 (can override any gate) |
| DA[L1] | Digital Agent, authority level 1 (executes only) |
| ST-R01 | Settlement Token Recorder #1 (logs, no control) |
| GT-C01 | Governance Token Controller #1 (adjusts continuously) |
| IT-G01 | Incentive Token Gate #1 (token required to proceed) |
| VFL-DX01 | VFL Exclusive Decision #1 (one path chosen) |
| VV-G01 | Verifiable Value Gate #1 (pass/fail on value criteria) |
| LR-C01 | Learning Controller #1 (adjusts agent parameters) |
| [P] | Connection flows on Pass |
| [F] | Connection flows on Fail |
| [E:receipt] | Connection fires on receipt event |
| [T:cycle] | Connection fires on time cycle |
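The codes in the table follow a small, regular grammar, which means they can be parsed mechanically. Below is a minimal sketch of such a parser; the grammar (agent type plus `[L<n>]` authority level, instrument family plus role letter plus index) is an assumption inferred from the table, not a published specification.

```python
import re

# Assumed grammar, inferred from the symbol table above.
AGENT_RE = re.compile(r"^(HA|DA|OA|PA)\[L(\d)\]$")            # agent type + authority level
INSTRUMENT_RE = re.compile(r"^([A-Z]{2,3})-(R|C|G|T|DX)(\d+)$")  # family-role-index
ROLES = {"R": "recorder", "C": "controller", "G": "gate",
         "T": "transmitter", "DX": "exclusive decision"}

def parse(code: str) -> dict:
    """Turn a diagram code like 'HA[L3]' or 'VV-G01' into a structured record."""
    m = AGENT_RE.match(code)
    if m:
        return {"kind": "agent", "type": m.group(1), "level": int(m.group(2))}
    m = INSTRUMENT_RE.match(code)
    if m:
        return {"kind": "instrument", "family": m.group(1),
                "role": ROLES[m.group(2)], "index": int(m.group(3))}
    raise ValueError(f"unrecognised code: {code}")
```

A tool built on a parser like this could validate a diagram before anyone tries to execute it, which is one way to answer the readability question posed at the end of this section.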
Three Decision Points
The diagram exposes three critical choice nodes:
- Signals agree? (◇X) — Do physical and digital instruments produce consistent evidence?
- Who arbitrates? (◇X) — Protocol rules, human override, or market mechanism?
- Value sufficient? (VV-G01) — Does the outcome meet the gate threshold?
Human + Orchestrator
One human, one orchestrator agent (Nav as priorities-nav.md would express it), local bodies (laptop, phone), a DePIN sensor on the shared substrate (e.g. GEODNET RTK), and a verifier agent that only passes receipts when evidence clears the gate. Same symbol library as Example 1 — no new primitives.
Reading Example 2
| Code | Reads as |
|---|---|
| HA[L3] | Human Agent at L3 — can override any gate |
| OA[L2] | Nav as Orchestrator Agent at L2 — routes within scope |
| PA[L1] | Physical Agent at L1 — laptop, phone, DePIN sensor execute only |
| DA[L2] | Digital Verifier Agent at L2 — decides whether outputs pass |
| TS-T01 | Universal Machine Time Sync Transmitter — anchors readings (see Economy of Things) |
| VV-G01 | Verifiable Value Gate — receipt only if value clears threshold |
| ID-G01 | Identity Gate — each body proves identity before participating |
| Learning loop arc | Feedback crosses back to the human — same diagram, tighter behaviour next cycle |
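The verifier path in Example 2 chains two gates before a receipt exists: ID-G01 (is this body known?) then VV-G01 (does the value clear the threshold?), with a TS-T01-style timestamp anchoring the result. A minimal sketch, assuming a static identity roster and numeric values; every name here is illustrative:

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Receipt:
    body_id: str
    value: float
    timestamp: float          # TS-T01 anchor (wall clock stands in for the shared sync)

# Assumed ID-G01 roster: the bodies named in Example 2.
REGISTERED = {"laptop", "phone", "geodnet-sensor"}

def verify(body_id: str, value: float, threshold: float) -> Optional[Receipt]:
    """DA[L2]: emit a receipt only if both gates pass; otherwise nothing flows."""
    if body_id not in REGISTERED:   # ID-G01: identity gate
        return None
    if value < threshold:           # VV-G01: verifiable value gate
        return None
    return Receipt(body_id, value, time.time())
```

Note how the "key pattern" below falls out of this shape: adding a new DePIN-owned body means adding one entry to the roster, not changing the verification logic.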
The Key Pattern
Today the human owns the laptop and phone bodies directly. Tomorrow the same drawing absorbs more DePIN-owned instruments under the same ID-G01 + VV-G01 contract.
The topology does not change — the substrate underneath grows.
That is publishing the monomyth in operating form, not in prose.
Questions
Can you read a diagram built by someone in a different discipline and correctly execute it?
- In Example 1, what happens if physical and digital instruments disagree? Which decision node (the first ◇X or the second ◇X) catches that — and can a protocol-level rule ever override human authority (HA[L3])?
- In Example 2, the learning loop crosses back to the human. How would you encode "stop learning if the human has been overridden twice in a row"?
- If you needed to add a fourth agent type (Collective Agent?) or a new connection type (consensus flow?), where would you add it without breaking the existing notation?
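One possible answer to the learning-loop question above: attach a consecutive-override counter to LR-C01 and disable learning once the streak reaches two. This is a sketch of one encoding, not the notation's prescribed mechanism; the class and field names are hypothetical.

```python
class LearningController:
    """Hypothetical LR-C01 wrapper that halts after repeated human overrides."""

    def __init__(self, max_consecutive_overrides: int = 2):
        self.max = max_consecutive_overrides
        self.streak = 0               # consecutive cycles the human overrode
        self.learning_enabled = True

    def record_cycle(self, human_overrode: bool) -> None:
        """Called once per loop; a cycle without an override resets the streak."""
        self.streak = self.streak + 1 if human_overrode else 0
        if self.streak >= self.max:
            self.learning_enabled = False   # LR-C01 stops adjusting parameters
```

Encoding the rule as state on the controller keeps the diagram unchanged: the learning loop arc still crosses back to the human, it just carries one extra bit per cycle.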
Next: See Context — related concepts and connections.