
Flow Engineering

How do you turn a picture into a product?

OUTCOME → VALUE STREAM → DEPENDENCIES → CAPABILITIES → A&ID
   │           │              │              │           │
   ▼           ▼              ▼              ▼           ▼
Contracts  Processes     Sequencing      Readiness Orchestration

The same way factories get built — draw it first. P&IDs became steel and concrete. Flow maps become working systems. The drawing IS the engineering.

The Maps

Five maps. Five questions. In sequence. Each produces inputs for the next.

| Map | Question | Produces |
| --- | --- | --- |
| Outcome Map | What does success look like? | Domain contracts, success measures |
| Value Stream Map | Where's the waste? | Use cases, repositories, adapters |
| Dependency Map | What must happen first? | Composition, task ordering |
| Capability Map | What can we do? | Generators, skills, work charts |
| A&ID | How do agents orchestrate? | Agent configs, feedback loops |
4 Key Maps = WHAT to build
A&ID = HOW agents work together to build it

The Capstone

The Agent & Instrument Diagram extends P&ID discipline to AI and Crypto systems.

| Element | Role | Domain |
| --- | --- | --- |
| Agents (AG-XXX) | Actors that take action | Claude, humans, DePIN |
| Instruments (QC/VC/FC) | Sensors that measure | Smart contracts, oracles |
| Feedback Loops | Data improving agents | VVFL, tokenomics, governance |

Products Loop

Flow engineering connects to every dimension of product development:

| Dimension | Connection | How Maps Help |
| --- | --- | --- |
| Jobs To Be Done | Outcome Map IS a job analysis | "What does success look like?" = "What job are we hired for?" |
| AI Products | A&ID IS agent orchestration | Define evals, build loops, measure distributions |
| Product Design | Value Stream maps the design audit | Rendering, visual, responsive, interaction — in sequence |
| Software | Capability Map reveals build vs buy | Core capabilities build, generic capabilities buy |

The Outcome Map starts where JTBD starts — what progress is the customer trying to make? The A&ID ends where AI Products begins — how do agents deliver outcomes in a feedback loop?

From PRDs: Every Feature/Function/Outcome row in a PRD story table IS an Outcome Map input. The story defines the job; the Outcome Map turns it into domain contracts and success measures. Stories → maps → types → tests → code → commission.

Maps to Execution

Maps don't produce documentation. They produce the inputs for plan templates and generators.

| Map | Plan Phase | Generator Input | What It Produces |
| --- | --- | --- | --- |
| Outcome Map | Explore | Domain contracts | Ports, DTOs, entities, acceptance criteria |
| Value Stream Map | Define Types | Schema definitions | Repository interfaces, test expectations |
| Dependency Map | Write Test Specs | Ordering constraints | Failing tests that define "done" |
| Capability Map | Build | Generator selection | Scaffolded code in correct layer order |
| A&ID | Orchestrate | Agent configs | Plan templates, feedback loops |
Map the flow → Encode as types → Generate test specs → Scaffold implementation → Validate outcomes

  • Map the flow: exploration produces contracts
  • Encode as types: contracts that the compiler enforces
  • Generate test specs: failing tests define what "done" means
  • Scaffold implementation: generators enforce correct layer order automatically
  • Validate outcomes: did outcomes match what exploration predicted?

Each map iteration improves the generators. The Capability Map tracks which patterns are codified (generator exists) versus manual (hand-coded). When a manual pattern appears twice, it becomes a generator. When a generator exists, using it is mandatory. The map IS the generator improvement tracker.
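The rule-of-two promotion above is mechanical enough to sketch. A minimal in-memory model, assuming illustrative names (`Pattern`, `recordUse`, `promotionCandidates`), not the actual tooling:

```typescript
// Capability Map as generator-improvement tracker: codified patterns have a
// generator; manual patterns accumulate a use count until they qualify.
type Pattern = {
  name: string;
  codified: boolean;   // true once a generator exists for this pattern
  manualUses: number;  // hand-coded occurrences observed so far
};

const patterns = new Map<string, Pattern>();

function recordUse(name: string, viaGenerator: boolean): void {
  const p = patterns.get(name) ?? { name, codified: false, manualUses: 0 };
  if (viaGenerator) p.codified = true;
  else p.manualUses += 1;
  patterns.set(name, p);
}

// "When a manual pattern appears twice, it becomes a generator."
function promotionCandidates(): string[] {
  return Array.from(patterns.values())
    .filter((p) => !p.codified && p.manualUses >= 2)
    .map((p) => p.name);
}
```

Two manual sightings of the same pattern put it on the promotion list; patterns already backed by a generator never appear there.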

Plan templates compose from multiple sources — a single feature plan might derive tasks from entity commissioning, UI component, and e2e testing templates simultaneously. Each template contributes its gates (TDD enforcement, CDD file limits, security triads, proof commands). The plan inherits all gates from all templates. Cross-team task routing (meta, intelligence, UI, platform engineering) emerges from which templates contribute tasks.

Failure Anatomy

plan-cli.ts accepted phase data from stdin. The payload was missing phaseSlug. PostgreSQL caught it as error 23502: NOT NULL violation. Debugging took 10+ minutes.

MAP        Boundary validation designed?    NO — skipped
 |
 v
TYPE       Types used at boundary?          NO — `as` cast
 |
 v
TEST       Test for invalid input?          NO — none existed
 |
 v
IMPLEMENT  Code trusts stdin blindly?       YES
 |
 v
ERROR      Where did it surface?            PostgreSQL (most expensive)

Three stages skipped. The error fell through to the most expensive layer — and the one where agents have the least signal to self-correct.

// Before: trusts stdin
const phases = JSON.parse(stdin) as Record<string, unknown>[];
await db.insert(planningPhase).values(phases.map((p) => ({ ...p, planId })));

// After: validates at boundary
const phases = phasesInputSchema.parse(JSON.parse(stdin));
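The fix above assumes a Zod schema named phasesInputSchema. As a dependency-free sketch of the same boundary check (`parsePhases`, `PhaseInput`, and the field set are illustrative; the real fix uses Zod):

```typescript
// Validate at the boundary: reject malformed stdin before it reaches the
// database, where the error is most expensive to debug.
type PhaseInput = { phaseSlug: string; title: string };

function isRecord(v: unknown): v is Record<string, unknown> {
  return typeof v === "object" && v !== null;
}

function parsePhases(raw: string): PhaseInput[] {
  const data: unknown = JSON.parse(raw);
  if (!Array.isArray(data)) throw new Error("expected an array of phases");
  return data.map((item, i) => {
    if (!isRecord(item)) throw new Error(`phases[${i}]: expected an object`);
    const phaseSlug = item["phaseSlug"];
    const title = item["title"];
    if (typeof phaseSlug !== "string" || phaseSlug.length === 0) {
      throw new Error(`phases[${i}]: missing phaseSlug`); // caught here, not as 23502
    }
    if (typeof title !== "string") {
      throw new Error(`phases[${i}]: missing title`);
    }
    return { phaseSlug, title };
  });
}
```

The missing-phaseSlug payload now fails in milliseconds at the CLI boundary with a message naming the field, instead of surfacing as a NOT NULL violation in PostgreSQL.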

This fixes one function. The structural fix prevents the class:

| Level | Fix | Mechanism | Scope |
| --- | --- | --- | --- |
| Instance | Add Zod to plan-cli.ts | Import schema, call .parse() | This function |
| Rule | boundary-validation.md in .claude/rules/ | Auto-loaded every session | Every developer, every session |
| Hook | post-edit detecting `as` casts on JSON.parse | Fires on every .ts edit | Every edit, zero effort |
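The hook's detection step can be sketched as a text scan. A minimal sketch, assuming the hook runner hands it the edited file's contents (`findUnsafeParses` and the regex are illustrative, not the actual hook):

```typescript
// Flag lines that cast a JSON.parse result with `as` instead of validating it.
// Returns 1-indexed line numbers so the hook can point at the violation.
function findUnsafeParses(source: string): number[] {
  const unsafeCast = /JSON\.parse\([^)]*\)\s+as\s+/;
  const hits: number[] = [];
  source.split("\n").forEach((line, i) => {
    if (unsafeCast.test(line)) hits.push(i + 1);
  });
  return hits;
}
```

A real hook would run this against every edited .ts file and block or warn on a non-empty result; validated parses like `schema.parse(JSON.parse(stdin))` pass untouched.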

Enforcement Hierarchy

Three levels of response to any failure:

| Level | Response | Scope |
| --- | --- | --- |
| Instance | Fix the bug | One function, one file |
| Class | Prevent the category | Every file handling external data, every session |
| Structure | Engineer it away | Every edit, zero effort, zero memory required |

Most teams stop at level 1. The question after every incident: at what level did we fix it?

| Tier | Mechanism | Effort | Failure Mode |
| --- | --- | --- | --- |
| Generator | Code IS correct by construction | None | Cannot produce wrong pattern |
| Template | Phase ordering prevents skipping | Follow the template | Skip a phase |
| Hook | Auto-fires on edit | None | Developer ignores warning |
| Rule | Auto-loaded context | Read and follow | Developer skims |
| Skill | On-trigger procedure | Invoke the skill | Developer forgets to invoke |
| Expertise | Developer memory | Remember and apply | Developer forgets |

Push enforcement UP. A hook detecting `JSON.parse(x) as` at edit time prevents the entire class. A memory of "validate stdin" prevents one instance, and only if you remember.

Template Health

Templates sit at Tier 2 — they frame plan creation but don't finish plans. A healthy template reduces decisions. A broken template produces the same gap type repeatedly.

Three health signals:

| Signal | Healthy | Broken |
| --- | --- | --- |
| Completion rate | Plans using this template reach end-jtbd-validation without rework | Plans stall or get restructured mid-build |
| Retrospective recurrence | Findings addressed — same gap type does not appear in next plan | Same gap type appears 3+ times across plans using this template |
| Generator coverage | Mechanical tasks in the template have generators; plan slots reserved for decisions | Template contains manual steps that a generator could produce |

The feedback loop:

Plan runs → Retrospective → INSIGHTS file → Template updated → Next plan benefits

Each template accumulates an INSIGHTS file from retrospectives. When a finding maps to a template phase (wrong ordering, missing gate, unclear framing), the template is updated — not just the INSIGHTS file. The INSIGHTS file is the evidence trail; the template is the actuator.

Kill signal: If a template produces the same gap type in 3+ consecutive plans, the template is broken. Fix or retire. A template that frames poorly is worse than no template — it gives false confidence that the plan is structured when it isn't.
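The kill signal is mechanical enough to encode. A minimal sketch, assuming each plan's retrospective yields a list of gap types (`isTemplateBroken` and `GapType` are illustrative names):

```typescript
// Detect the kill signal: the same gap type appearing in N consecutive plans
// that used this template. Returns the offending gap type, or null if healthy.
type GapType =
  | "gate-bypass"
  | "template-bloat"
  | "sequence-violation"
  | "interface-drift"
  | "demand-absence";

function isTemplateBroken(gapsPerPlan: GapType[][], threshold = 3): GapType | null {
  const streaks = new Map<GapType, number>();
  for (const gaps of gapsPerPlan) {
    const seen = new Set(gaps);
    // A plan without the gap breaks that gap's streak.
    for (const gap of Array.from(streaks.keys())) {
      if (!seen.has(gap)) streaks.set(gap, 0);
    }
    for (const gap of seen) {
      const run = (streaks.get(gap) ?? 0) + 1;
      streaks.set(gap, run);
      if (run >= threshold) return gap;
    }
  }
  return null;
}
```

Consecutiveness matters: a gap that recurs with clean plans in between is a rule or hook problem, not necessarily a broken template.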

Framing principle: Templates provide framing, not finished plans. A template that tries to finish the plan produces cargo-cult phases where the agent fills slots mechanically instead of reasoning about the work. The template's job is to ensure the right questions get asked in the right order — not to answer them.

Generator Output Gate

The Generator tier claims "code IS correct by construction." That claim requires proof. Generated code must pass the same gates as hand-written code — lint, type checks, pre-commit hooks. If generated code fails a hook, the generator template is broken. Fix the template, not the hook.

Five failure classes from production retros:

| Class | Symptom | Template Fix |
| --- | --- | --- |
| Missing lint suppression | Pre-commit rejects generated file | Add eslint-disable header to template |
| Aggregation type cast | `as` cast on group-by result | Emit typed query result from generator |
| Non-FK UUID test value | String where UUID expected in test | Handle standalone UUIDs in test scaffolding |
| Barrel re-export style | `export *` rejected by lint | Use named re-exports in template |
| JSDoc runtime code | `console.log` in JSDoc example | Strip executable patterns from template docs |

Each failure looks like a code bug. Each is a template bug. The instance fix (edit the generated file) recurs on every scaffold. The class fix (edit the template) prevents recurrence permanently.

Validation rule: Run the generated output through the pre-commit pipeline before declaring the generator working. If it fails, the generator is at Process Maturity D (draft) regardless of how correct the runtime behavior appears.

INCIDENT
|
v
Fix the instance (necessary, not sufficient)
|
v
What CLASS of error? (specific → general)
|
v
Prevent the class (rule: advisory)
|
v
Can this be STRUCTURAL? (advisory → enforcement)
|
v
Engineer the structure (hook/generator: automatic)
|
v
CANNOT RECUR

Each bug can only happen once — because the structure that allowed it is replaced by a structure that prevents it.

Cost of Quality

The enforcement hierarchy describes six tiers. Cost tracking measures whether they work.

Every incident produces a cost annotation:

| Field | What It Records |
| --- | --- |
| Where caught | Which tier actually caught it (generator / template / hook / rule / skill / expertise) |
| Where it should have been caught | Which tier SHOULD have caught it |
| Time to resolve | Clock time from detection to fix merged |
| Layer | TypeScript / Zod / PostgreSQL / Production |

Three metrics compound from these annotations:

| Metric | What It Measures | Signal |
| --- | --- | --- |
| Catch rate by tier | % of incidents caught at each enforcement level | Hooks catching most = healthy. Expertise catching most = fragile. |
| Escalation rate | % of incidents that fell past their intended tier | Rising = enforcement gaps. Falling = tiers are wired correctly. |
| Cost per miss | Time-to-resolve when an incident escapes its tier | Validates the 10x multiplier from cost escalation |

The connection to the hierarchy:

| Tier | What To Track | Healthy State |
| --- | --- | --- |
| Generator | Incidents in generated code | Zero — if a generator produces bugs, fix the generator |
| Template | Phases skipped or reordered | Zero — template gates should prevent this |
| Hook | Hook fire count vs violations shipped | High fire count, zero violations in commit |
| Rule | Incidents in rule-covered areas | Low — rules without hooks are suggestions under load |
| Skill | Incidents in skill-covered areas where skill wasn't invoked | Decreasing — skill invocation should become habit |
| Expertise | Incidents with no structural prevention | Decreasing — every expertise-caught incident should produce a hook or generator |

The cost tracking loop: incident → annotate → identify tier gap → push enforcement up → measure whether that class recurs.
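The loop's first two metrics compute directly from annotations. A minimal sketch, assuming a simplified annotation shape (`CostAnnotation` and its field names are illustrative, not the actual schema):

```typescript
// Compute catch rate by tier and escalation rate from incident annotations.
type Tier = "generator" | "template" | "hook" | "rule" | "skill" | "expertise";

type CostAnnotation = {
  caughtAt: Tier;       // where it was actually caught
  intendedTier: Tier;   // where it should have been caught
  minutesToResolve: number;
};

function catchRateByTier(incidents: CostAnnotation[]): Map<Tier, number> {
  const rates = new Map<Tier, number>();
  for (const i of incidents) {
    rates.set(i.caughtAt, (rates.get(i.caughtAt) ?? 0) + 1 / incidents.length);
  }
  return rates;
}

function escalationRate(incidents: CostAnnotation[]): number {
  if (incidents.length === 0) return 0;
  const escaped = incidents.filter((i) => i.caughtAt !== i.intendedTier).length;
  return escaped / incidents.length;
}
```

An incident caught by expertise when a hook should have caught it counts toward both a fragile catch-rate profile and a rising escalation rate.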

Retrospective Protocol

Cost of quality measures whether enforcement works. This section bridges measurement to structural prevention — turning engineering lessons into artifacts that prevent recurrence.

Five Gap Types

Every engineering failure maps to one of five gap types. Each type has a broken VVFL station and a target enforcement tier.

| Gap Type | Symptom | Broken Station | Target Tier | Structural Fix |
| --- | --- | --- | --- | --- |
| Gate bypass | Required fields empty, bookends skipped | Standards (gauge reads zero) | Template | CLI validates required fields before plan creation |
| Template bloat | Mechanical tasks consume plan slots | Attention (wasted on boilerplate) | Generator | Generator produces boilerplate, plan tracks decisions |
| Sequence violation | E2E before UI, retrofitted testids | Systems (order dependency ignored) | Generator | Generator pre-populates testids at scaffold time |
| Interface drift | pgEnum 18, TS union 21, seed 16 | Standards (single source violated) | Generator | Single-source type definition generates all variants |
| Demand absence | Work started without Tight Five or prdRef | Priorities (no demand validation) | Rule | prdRef required at plan creation |

Retrospective Template

Every retrospective produces four sections. No variation.

| Section | What It Contains |
| --- | --- |
| What happened | Expected vs actual, with file paths and evidence |
| Which gap type | One of five (gate bypass, template bloat, sequence violation, interface drift, demand absence) |
| Enforcement response | Instance fix (this bug) + class prevention (which tier absorbs it) |
| Artifacts updated | Exact file paths: template, hook, generator, or rule that changed |

Routing Logic

| Finding Type | Target Artifact | Owner |
| --- | --- | --- |
| Missing field | Plan template (template.json) | Template maintainer |
| Boilerplate task | Generator | Platform engineer |
| Wrong ordering | Generator + template phase gates | Platform engineer |
| Type mismatch | Single-source type definition | Schema owner |
| No demand signal | .claude/rules/ + plan-cli guard | Rule maintainer |

Single Source Rule

Every enumerable set has ONE definition. Two files defining the same set = guaranteed drift.

  • Enum in schema → generated into TypeScript union, seed file, and validation
  • If you find two files defining the same set, that IS the bug — fix the duplication before fixing the symptom
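In TypeScript the single source rule can be enforced by deriving everything from one const array. A minimal sketch with illustrative names (`PLAN_PHASES`, `isPlanPhase`; the real sets live in the schema):

```typescript
// ONE definition; the union type, runtime guard, and seed rows all derive
// from it. Editing this array updates every variant at once — no drift.
const PLAN_PHASES = ["explore", "define-types", "write-test-specs", "build", "orchestrate"] as const;

// Union type derived from the array, never hand-written.
type PlanPhase = (typeof PLAN_PHASES)[number];

// Runtime guard derived from the same source.
function isPlanPhase(value: string): value is PlanPhase {
  return (PLAN_PHASES as readonly string[]).includes(value);
}

// A seed file or pgEnum would be generated from PLAN_PHASES, not re-typed:
const seedRows = PLAN_PHASES.map((phase, i) => ({ phase, sortOrder: i }));
```

The interface drift symptom above (pgEnum 18, TS union 21, seed 16) is impossible here: there is nothing to count except the one array.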

VVFL Connection

The five gap types map to broken stations in the VVFL: Standards (gauge), Attention (focus), Systems (sequence), Priorities (demand). The retrospective template maps to the Reflect station in the 9-station model — the controller that converts measurement into structural change. Cost of quality (above) is the sensor. This protocol is the actuator.

Two Dimensions

Every map has two layers, Dream and Engineering, plus the gap between them:

| Layer | What It Captures |
| --- | --- |
| Dream | Future state — what we're building |
| Engineering | Current state — what exists |
| Gap | What we must build to close the distance |

Fill maps with REALITY (evidence, not hopes). Keep them FRESH (stale maps are worse than no maps).

PLANS ARE WORTHLESS, PLANNING IS ESSENTIAL.
GOOD PLANNING ALWAYS STARTS WITH MAPPING REALITY.

Picture the dream. Map reality. Close the gap.

Context

Questions

If the enforcement hierarchy claims generators produce correct code by construction, what validates that claim — and who notices when it stops being true?

  • When generated code passes runtime tests but fails pre-commit hooks, which enforcement tier actually caught the bug?
  • What is the cost multiplier when a template bug ships to 73 repositories before anyone runs lint-staged on the output?
  • If the Generator Output Gate had existed before the Data Foundation plan, which of the five failure classes would it have prevented?