
Unixification

What happens when you scale complexity before you standardize interfaces?

Principle

Unixification is the operating discipline of designing small, testable units with stable contracts.

| Rule | Engineering Meaning |
| --- | --- |
| One job | Each unit has one clear responsibility |
| Small surface | Inputs and outputs are explicit and minimal |
| Stable contract | Interfaces change slowly and deliberately |
| Composable parts | Units can be assembled into larger systems |
| Test in isolation | Failures are attributable and diagnosable |

Operating Use

Apply this sequence when variance and coupling increase:

Define Unit -> Freeze Contract -> Test Isolated -> Compose Incrementally -> Measure Variance
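The first three steps can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical `slugify` unit; the function name and contract are invented for the example:

```python
import re

def slugify(title: str) -> str:
    """One job: turn a title into a URL slug.
    Contract: str in, lowercase hyphen-separated str out.
    No I/O, no shared state, so it can be tested alone."""
    words = re.findall(r"[a-z0-9]+", title.lower())
    return "-".join(words)

# Test in isolation: a failure here is attributable to this unit alone.
assert slugify("Hello, World!") == "hello-world"
assert slugify("  Stable   Contracts  ") == "stable-contracts"
```

Composition then becomes safe because callers depend only on the frozen contract, not on the implementation.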

Decision Gate

| Condition | Action |
| --- | --- |
| High coupling | Break system into explicit units before integration |
| High failure ambiguity | Add contract tests and trace logging |
| Repeated integration rework | Stabilize interfaces before adding features |
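A contract test for the "high failure ambiguity" row might look like this sketch. `fetch_user`, its parameter, and the required keys are all hypothetical; the point is that interface drift fails loudly before any integration test runs:

```python
import inspect

def fetch_user(user_id: int) -> dict:
    """Hypothetical unit under contract."""
    return {"id": user_id, "name": "example"}

def test_fetch_user_contract():
    # Freeze the signature: renaming or reordering parameters breaks this test.
    sig = inspect.signature(fetch_user)
    assert list(sig.parameters) == ["user_id"]
    # Freeze the output shape: required keys must be present, extras allowed.
    result = fetch_user(1)
    assert set(result) >= {"id", "name"}

test_fetch_user_contract()
```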

Benchmarks

| Benchmark | Signal |
| --- | --- |
| Contract breakage rate | Should decline each release |
| Mean time to root cause | Should decline over time |
| Integration rework | Should decline over time |
| Stable interface reuse | Should increase over time |
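One crude way to turn these signals into a check, sketched with an illustrative `trend_ok` helper that only compares the first and last samples of a per-release metric (real tracking would smooth noise across releases):

```python
def trend_ok(samples, should="decline"):
    """Return True if a benchmark series moves in the required direction."""
    if len(samples) < 2:
        return True  # not enough data to judge
    if should == "decline":
        return samples[-1] < samples[0]
    return samples[-1] > samples[0]

# Contract breakage rate per release: should decline.
print(trend_ok([9, 6, 4, 2], should="decline"))   # True
# Stable interface reuse: should increase.
print(trend_ok([3, 5, 8], should="increase"))     # True
```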

If these do not improve, unixification is being claimed but not practiced.

Questions

  • What is the smallest unit in your domain that could have a stable contract — and what would that unlock?
  • Where does failure ambiguity slow you down most, and which interface is the source?
  • What would you need to freeze now so innovation could move one layer up?
  • If your system cannot describe its capabilities, who bears the cost of that ambiguity?

Files

Every file has one job. That job is legible from the file's name and first line — no further context required.

A file with one job has:

  • One reader (who reaches for it and why)
  • One purpose (what it tells that reader)
  • One update trigger (what event makes it go stale)

When any of these three splits, the file splits.

Decision Table

| Symptom | Diagnosis | Fix |
| --- | --- | --- |
| Two different readers open it for different reasons | Mixed purpose | Split by reader |
| Some sections update on event A, others on event B | Mixed cadence | Split by update trigger |
| You need to edit the file to understand the file | Instructions mixed with data | Move instructions to an agent config file |
| Historical entries bury current state | Log mixed with dashboard | Separate live state from append-only history |
| Removing any one section would break two different workflows | Coupling | Each workflow gets its own file |

The Three File Types

Most operational files are one of three types. Mixing them is the most common source of confusion.

| Type | One job | Update trigger | Anti-pattern |
| --- | --- | --- | --- |
| Dashboard | Show current state | Every time state changes | Adding historical entries that bury live state |
| Log | Record what happened | Append-only — never edit past entries | Editing log entries to "fix" the record |
| Instructions | Tell an agent how to operate | When the process changes | Mixing instructions with the data they describe |
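The dashboard/log split can be made concrete in a small sketch. Class and event names are illustrative, not a prescribed implementation:

```python
class Dashboard:
    """One job: show current state. Overwritten on every state change."""
    def __init__(self):
        self.state = {}
    def update(self, key, value):
        self.state[key] = value  # old value is discarded; history lives in the log

class Log:
    """One job: record what happened. Append-only; past entries never change."""
    def __init__(self):
        self._entries = []
    def append(self, event):
        self._entries.append(event)
    def entries(self):
        return tuple(self._entries)  # read-only view, so nobody "fixes" the record

dash, log = Dashboard(), Log()
dash.update("status", "green"); log.append("status set to green")
dash.update("status", "red");   log.append("status set to red")
print(dash.state)     # live state only: {'status': 'red'}
print(log.entries())  # both events, in order
```

Mixing the two is the anti-pattern in the table: a dashboard that accumulates history, or a log that gets edited in place.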

The Test

Read the file's first line. If it does not answer "what is the one job of this file?" — the file has already failed.

If removing one section would only break one thing, you have the boundary right. If it would break two things, you have one file doing two jobs.
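The first-line test is easy to mechanize. `one_job_line` below is an illustrative helper, not a prescribed tool; the file name in the usage sketch is hypothetical:

```python
from pathlib import Path

def one_job_line(path: str) -> str:
    """Return the file's first non-blank line, the place where
    its one job should be legible with no further context."""
    for line in Path(path).read_text().splitlines():
        if line.strip():
            return line.strip()
    return ""

# Usage sketch (hypothetical file):
# one_job_line("tasks.md")  ->  "# Tasks - current state of open work"
```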

Reduce: The file is the contract between the writer and the next reader. One job means the contract is clear — and breakable in exactly one way when it drifts.

Agent Files

An agent profile has one job: state identity, trigger, and contract. That is all.

| Belongs in the profile | Belongs in a linked file |
| --- | --- |
| Role (one sentence) | Delegation tables |
| Trigger conditions | Method and procedure steps |
| Tool list | Reference knowledge |
| Links to SSOT files | Autonomy protocols |
| Hard boundaries | Test cases |

The agent reads context when needed. It does not carry context at startup. A profile that answers "how" instead of "what" has already failed this test.
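One way to hold a profile to this shape is to make it pure data, so it cannot smuggle in procedure. Every name below is illustrative, a sketch under the assumption that profiles are loaded by some agent runtime:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentProfile:
    """The label on the tin: identity, trigger, contract.
    Method, reference knowledge, and test cases live in the
    linked files, read by the agent when needed."""
    role: str              # one sentence
    triggers: tuple = ()   # conditions that wake the agent
    tools: tuple = ()      # what it may use
    links: tuple = ()      # SSOT files it reads on demand
    boundaries: tuple = () # hard limits

reviewer = AgentProfile(
    role="Reviews pull requests for contract breakage.",
    triggers=("pull_request.opened",),
    tools=("git", "contract-tests"),
    links=("docs/contracts.md",),
    boundaries=("never merges",),
)
```

A profile that needs a method section no longer fits this structure, which is exactly the failure the test above is meant to catch.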

Reduce: The profile is the label on the tin — not the recipe inside it.

Meta View

For the broader thesis on unixification of the phygital world:

Phygital beings

A phygital being is any composite of biological humans, agents, and robots that shares a common language, protocol set, and operating standard. Today that often looks like one person + one laptop + a handful of named agents. Tomorrow it is many humans + many agents + many devices + DePIN infrastructure — all speaking the same symbolic language and protocols, drawn as a single agent and instrument diagram.

| Layer | Role |
| --- | --- |
| Robots | Phygital limbs — any machine with I/O (sensors, actuators, UI) that software can drive and tie to human intention |
| DePIN | Shared phygital infrastructure — many owners, coordinated by crypto + protocols, not one central operator |
| Unixification | The composability rule — small parts, stable contracts, addressable and scriptable interfaces (this page) |
| Dreamineering Meta-Language | Symbolic layer — names agents, instruments, archetypes, workflows, and how they connect |
| Intercognitive Protocol | Substrate negotiation — how human, agent, robot, and DePIN node coordinate authority and time |
| Agent & Instrument Diagrams | Visual syntax — the circuit diagram of a phygital being |

Robots are the limbs. DePIN is the nervous system at scale. Unixification + protocols + DML + diagrams are the grammar that lets humans, agents, and devices think and act together without a mess of one-off APIs. See The Thousand Faces for the economic spine (Maximum Enabler of Value vs extraction) that makes publishing this substrate rational.

Context