Unixification
What happens when you scale complexity before you standardize interfaces?
Principle
Unixification is the operating discipline of designing small, testable units with stable contracts.
| Rule | Engineering Meaning |
|---|---|
| One job | Each unit has one clear responsibility |
| Small surface | Inputs and outputs are explicit and minimal |
| Stable contract | Interfaces change slowly and deliberately |
| Composable parts | Units can be assembled into larger systems |
| Test in isolation | Failures are attributable and diagnosable |
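The five rules can be shown in a few lines. A minimal sketch (the unit name and its contract are invented for illustration): one job, explicit inputs and outputs, and a test whose failure is attributable to this unit alone.

```python
def normalize_email(raw: str) -> str:
    """One job: canonicalize an email address.
    Contract: any string in; lowercase, stripped string out.
    Raises ValueError for anything that is not address-shaped."""
    candidate = raw.strip().lower()
    if "@" not in candidate or candidate.startswith("@") or candidate.endswith("@"):
        raise ValueError(f"not an email address: {raw!r}")
    return candidate

# Test in isolation: a failure here implicates this unit and nothing else.
assert normalize_email("  Alice@Example.COM ") == "alice@example.com"
```

Because the surface is small and the contract is explicit, this unit composes into larger flows without the caller needing to know how it works inside.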
Operating Use
Apply this sequence when variance and coupling increase:
Define Unit -> Freeze Contract -> Test Isolated -> Compose Incrementally -> Measure Variance
Decision Gate
| Condition | Action |
|---|---|
| High coupling | Break system into explicit units before integration |
| High failure ambiguity | Add contract tests and trace logging |
| Repeated integration rework | Stabilize interfaces before adding features |
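When the gate fires on failure ambiguity, the fix pairs two things: a contract test that pins the interface, and trace logging that makes inputs and outputs visible at the boundary. A sketch, with a hypothetical pricing unit standing in for any real interface:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(name)s %(message)s")
log = logging.getLogger("contract.pricing")

def apply_discount(price_cents: int, percent: int) -> int:
    """Contract: non-negative integer cents in, non-negative integer cents out."""
    log.info("in  price=%d percent=%d", price_cents, percent)
    result = price_cents * (100 - percent) // 100
    log.info("out result=%d", result)
    return result

def test_contract():
    # Contract tests pin the interface, not the internals:
    # the implementation may change freely as long as these hold.
    assert apply_discount(1000, 10) == 900
    assert apply_discount(0, 50) == 0
    assert apply_discount(999, 0) == 999

test_contract()
```

The trace lines turn "something broke in integration" into "this boundary received X and returned Y", which is what makes failures attributable.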
Benchmarks
| Benchmark | Signal |
|---|---|
| Contract breakage rate | Should decline each release |
| Mean time to root cause | Should decline over time |
| Integration rework | Should decline over time |
| Stable interface reuse | Should increase over time |
If these do not improve, unixification is being claimed but not practiced.
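The signals above are all monotone trends, so the claim "unixification is being practiced" is checkable. A minimal sketch with invented per-release numbers:

```python
def improving(series: list[float], direction: str = "down") -> bool:
    """True when successive release measurements move the right way."""
    pairs = list(zip(series, series[1:]))
    if direction == "down":
        return all(b < a for a, b in pairs)
    return all(b > a for a, b in pairs)

# Hypothetical measurements, one value per release.
breakage_rate = [0.12, 0.09, 0.05]  # contract breakages per interface
reuse_count = [3, 7, 12]            # stable interfaces reused elsewhere

assert improving(breakage_rate, "down")
assert improving(reuse_count, "up")
```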
Questions
- What is the smallest unit in your domain that could have a stable contract — and what would that unlock?
- Where does failure ambiguity slow you down most, and which interface is the source?
- What would you need to freeze now so innovation could move one layer up?
- If your system cannot describe its capabilities, who bears the cost of that ambiguity?
Files
Every file has one job. That job is legible from the file's name and first line — no further context required.
A file with one job has:
- One reader (who reaches for it and why)
- One purpose (what it tells that reader)
- One update trigger (what event makes it go stale)
When any of these three splits, the file splits.
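The three-way rule is mechanical enough to write down. A sketch, with the field names chosen to mirror the list above: a file carries a set of (reader, purpose, update trigger) jobs, and more than one distinct job means it splits.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FileJob:
    reader: str          # who reaches for it and why
    purpose: str         # what it tells that reader
    update_trigger: str  # what event makes it go stale

def needs_split(jobs: list[FileJob]) -> bool:
    """One distinct job per file; any second job is a second file."""
    return len(set(jobs)) > 1

status = [FileJob("operator", "current deploy state", "every deploy")]
mixed = status + [FileJob("auditor", "deploy history", "append per deploy")]

assert not needs_split(status)
assert needs_split(mixed)
```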
Decision Table
| Symptom | Diagnosis | Fix |
|---|---|---|
| Two different readers open it for different reasons | Mixed purpose | Split by reader |
| Some sections update on event A, others on event B | Mixed cadence | Split by update trigger |
| You need to edit the file to understand the file | Instructions mixed with data | Move instructions to an agent config file |
| Historical entries bury current state | Log mixed with dashboard | Separate live state from append-only history |
| Removing any one section would break two different workflows | Coupling | Each workflow gets its own file |
The Three File Types
Most operational files are one of three types. Mixing them is the most common source of confusion.
| Type | One job | Update trigger | Anti-pattern |
|---|---|---|---|
| Dashboard | Show current state | Every time state changes | Adding historical entries that bury live state |
| Log | Record what happened | Append-only — never edit past entries | Editing log entries to "fix" the record |
| Instructions | Tell an agent how to operate | When the process changes | Mixing instructions with the data they describe |
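The dashboard/log distinction is a write-discipline distinction, which a few lines make concrete. A sketch (both classes are illustrative, not a prescribed implementation): the dashboard overwrites on every state change, the log only ever appends.

```python
class Dashboard:
    """One job: show current state. Overwritten on every state change."""
    def __init__(self):
        self.state = {}

    def update(self, key, value):
        self.state[key] = value  # new state replaces old state

class Log:
    """One job: record what happened. Append-only; past entries never change."""
    def __init__(self):
        self._entries = []

    def append(self, event):
        self._entries.append(event)

    @property
    def entries(self):
        return tuple(self._entries)  # read-only view discourages "fixing" the record

board = Dashboard()
board.update("build", "green")
board.update("build", "red")    # the dashboard keeps only the latest state

history = Log()
history.append("build green")
history.append("build red")     # the log keeps both

assert board.state == {"build": "red"}
assert history.entries == ("build green", "build red")
```

Mixing the two — appending history into a dashboard, or editing a log in place — is exactly the anti-pattern column above.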
The Test
Read the file's first line. If it does not answer "what is the one job of this file?" — the file has already failed.
If removing one section would only break one thing, you have the boundary right. If it would break two things, you have one file doing two jobs.
Reduce: The file is the contract between the writer and the next reader. One job means the contract is clear — and breakable in exactly one way when it drifts.
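The first-line test can be partially automated. A rough heuristic sketch — the checks (line length, a joining "and") are assumptions about what a two-job first line looks like, not a complete semantic test:

```python
def first_line(text: str) -> str:
    """Return the file's first non-empty line, which should state its one job."""
    for line in text.splitlines():
        if line.strip():
            return line.strip()
    return ""

def states_one_job(line: str) -> bool:
    # Heuristic only: short, non-empty, and not two jobs joined by "and".
    return bool(line) and len(line) <= 80 and " and " not in line.lower()

assert states_one_job(first_line("Deploy runbook: current rollout steps.\n..."))
assert not states_one_job(first_line("Notes, history, TODOs and ideas\n..."))
```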
Agent Files
An agent profile has one job: state identity, trigger, and contract. That is all.
| Belongs in the profile | Belongs in a linked file |
|---|---|
| Role (one sentence) | Delegation tables |
| Trigger conditions | Method and procedure steps |
| Tool list | Reference knowledge |
| Links to SSOT files | Autonomy protocols |
| Hard boundaries | Test cases |
The agent reads context when needed. It does not carry context at startup. A profile that answers "how" instead of "what" has already failed this test.
Reduce: The profile is the label on the tin — not the recipe inside it.
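The left column of the table is small enough to be a schema. A sketch of a profile as data (field names and the example agent are invented for illustration): identity, trigger, and contract are carried; everything "how"-shaped is a link.

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """Identity, trigger, contract. The "how" lives behind ssot_links."""
    role: str             # one sentence
    triggers: list[str]   # conditions that activate the agent
    tools: list[str]      # what it may call
    ssot_links: list[str] # context it reads when needed, not at startup
    boundaries: list[str] # hard limits

reviewer = AgentProfile(
    role="Reviews pull requests for contract breakage.",
    triggers=["pull_request.opened"],
    tools=["git", "contract-test-runner"],
    ssot_links=["docs/interfaces.md"],
    boundaries=["never merges", "never edits source"],
)

# The label on the tin: one sentence of "what", zero lines of "how".
assert reviewer.role.count(".") == 1
```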
Meta View
For the broader thesis on unixification of the phygital world:
- After Hierarchy — How open standards reshape coordination from hierarchy to meaning
- The Mycelium — The invisible infrastructure underneath
- The Thousand Faces — Monomyth, MEV-E, and the phygital stack as one argument
Phygital beings
A phygital being is any composite of biological humans, agents, and robots that shares a common language, protocol set, and operating standard. Today that often looks like one person + one laptop + a handful of named agents. Tomorrow it is many humans + many agents + many devices + DePIN infrastructure — all speaking the same symbolic language and protocols, drawn as a single agent and instrument diagram.
| Layer | Role |
|---|---|
| Robots | Phygital limbs — any machine with I/O (sensors, actuators, UI) that software can drive and tie to human intention |
| DePIN | Shared phygital infrastructure — many owners, coordinated by crypto + protocols, not one central operator |
| Unixification | The composability rule — small parts, stable contracts, addressable and scriptable interfaces (this page) |
| Dreamineering Meta-Language | Symbolic layer — names agents, instruments, archetypes, workflows, and how they connect |
| Intercognitive Protocol | Substrate negotiation — how human, agent, robot, and DePIN node coordinate authority and time |
| Agent & Instrument Diagrams | Visual syntax — the circuit diagram of a phygital being |
Robots are the limbs. DePIN is the nervous system at scale. Unixification + protocols + DML + diagrams are the grammar that lets humans, agents, and devices think and act together without a mess of one-off APIs. See The Thousand Faces for the economic spine (Maximum Enabler of Value vs extraction) that makes publishing this substrate rational.
Context
- Standards — Standardization as the industrial floor that compounds
- Process Optimisation — Document, measure, improve, standardize
- Performance — Benchmarks required to judge quality
- A2A Protocol — Inter-agent contract layer
- Composability — Reuse through stable interfaces
- Interoperability — Coordination across boundaries