# Dev Workflow

How does the engineering team ship new features and fix existing issues without either stream blocking the other?
## The Sequence

Three stages, strict order. Each stage's output feeds the next.

```
Stories → Maps (Flow Engineering) → Types (Type-First) → Code → Verify (Outer Loop)
```
| Stage | Input | Output | Page |
|---|---|---|---|
| Maps | Story Contract rows from PRD | Outcome Maps, domain contracts, success measures | Flow Engineering |
| Types | Domain contracts from maps | Type definitions that pull implementation | Type-First Development |
| Verify | Deployed code | Reality measurements against PRD spec | Outer-Loop Validation |
Story Contract rows define WHAT (user flow, outside-in). Maps and types define HOW (type flow, inside-out). The Story Contract bridges both — each row names a user's pain AND the data source and threshold that prove it's fixed.
## Overview
Two parallel streams, two worktrees, one repo. New PRD features build forward. Existing issues fix backward. Neither blocks the other.
```
main
├── worktree: feature/{prd-name}    ← new capabilities from PRD specs
└── worktree: fix/{issue-number}    ← existing issues from issues log
```
## Worktrees Workflow
Git worktrees let you check out multiple branches of the same repo simultaneously in separate directories. No stashing, no context switching, no lost work.
### Setup

```shell
# From the main repo directory
git worktree add ../sm-feature feature/prd-sales-crm
git worktree add ../sm-fix fix/issue-14-deal-detail
```
Each worktree is a full working copy with its own branch, sharing the same `.git` history.
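The mechanics are easy to try in a throwaway repo — a minimal sketch; the paths and branch names below are demo values, not this project's:

```shell
# Throwaway repo showing two branches checked out at once.
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q main-repo && cd main-repo
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"
# -b creates the branch and the worktree in one step
git worktree add -q -b feature/demo ../wt-feature
git worktree list   # lists the main checkout and ../wt-feature
```

Commits made in `../wt-feature` land on `feature/demo` and are immediately visible from the main checkout, since both share one object store.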
### Two Streams
| Stream | Worktree | Input | Branch Pattern | Output |
|---|---|---|---|---|
| Build | `../sm-feature` | PRD spec (`src/pages/priorities/prd-*/spec/`) | `feature/{prd-name}` | New capabilities at L1–L2 |
| Fix | `../sm-fix` | Issues Log | `fix/{issue-number}-{slug}` | Issues resolved, capabilities promoted L2 → L3 → L4 |
### Build Stream
Engineering reads the PRD spec and builds new features.
- Read PRD: `src/pages/priorities/prd-{name}/index.md` and `spec/index.md`
- Create worktree: `git worktree add ../sm-feature feature/prd-{name}`
- Populate the SPEC-MAP Test File column — one test per Story Contract row. Select the test layer per story row using the Testing Strategy selection rule. The file naming convention routes to Nx targets (`*.schema.spec.ts` → `test-schema`, `*.integration.spec.ts` → `test-integration`)
- Build against the feature table in the spec (RED → GREEN). Nx `dependsOn` enforces the L1 → L2 → L3 cascade — schema tests must pass before integration tests run
- Update the SPEC-MAP Test Status column (GREEN for passing tests)
- PR to main when the feature is deployable
- Commissioner reads the SPEC-MAP — verifies zero empty cells, then verifies on production → capability promoted
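The `dependsOn` cascade can be expressed in an Nx `project.json`. This is a hypothetical fragment, not this repo's actual config — the target names, executor, and `testFiles` option are illustrative assumptions:

```json
{
  "targets": {
    "test-schema": {
      "executor": "@nx/vite:test",
      "options": { "testFiles": ["**/*.schema.spec.ts"] }
    },
    "test-integration": {
      "executor": "@nx/vite:test",
      "options": { "testFiles": ["**/*.integration.spec.ts"] },
      "dependsOn": ["test-schema"]
    }
  }
}
```

A bare string in `dependsOn` refers to a target in the same project, so running `test-integration` first runs `test-schema` and aborts the cascade if schema tests fail.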
### Fix Stream
Engineering reads the issues log and fixes bugs.
- Read issues: `src/pages/priorities/issues-log.md`
- Pick the highest-severity issue
- Create worktree: `git worktree add ../sm-fix fix/{issue-number}-{slug}`
- Fix, test, PR to main
- Commissioner verifies on production → issue moved to Resolved
### Lifecycle

```
Commissioner dogfoods app
├── finds issue → issues-log.md → fix worktree → PR → verify → resolved
└── finds gap → PRD spec update → build worktree → PR → verify → promoted
```
### Cleanup

```shell
# After the PR is merged
git worktree remove ../sm-fix
git branch -d fix/issue-14-deal-detail
```
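If a worktree directory is deleted by hand instead of via `git worktree remove`, its metadata lingers; `git worktree prune` clears it. A disposable sketch (repo and branch names are demo values):

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q repo && cd repo
git -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "init"
git worktree add -q -b fix/demo ../wt-fix
rm -rf ../wt-fix     # simulate deleting the directory by hand
git worktree prune   # drop the now-stale worktree record
git worktree list    # only the main checkout remains
```

Note that pruning removes only the worktree record — the `fix/demo` branch survives, which is why the cleanup above deletes the branch separately with `git branch -d`.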
## Agent Workflow
AI agents follow the same two-stream pattern. The agent reads either a PRD spec (build) or the issues log (fix), never both in the same session.
| Agent Mode | Reads | Produces | Commissioning |
|---|---|---|---|
| `project-from-prd` | PRD spec + feature table | New routes, components, API endpoints | L1 → L2 |
| `fix-from-issues` | Issues log + reproduction steps | Bug fixes, missing routes, redirects | L2 → L3 |
## Dig Deeper
- Flow Engineering — Stage 1: stories become Outcome Maps, maps become domain contracts. Start here after receiving a PRD
- Type-First Development — Stage 2: domain contracts pull implementation through four layers (Domain → Infrastructure → Application → Presentation)
- Outer-Loop Validation — Stage 3: instruments read production reality. Browser tools, performance gauges, error dashboards
- AI Coding — Agent configuration for both build and fix streams
## Context
- Issues Log — The fix queue
- Business Factory Requirements — The build queue (table order = build order)
- Commissioning Dashboard — Capability status
- Commissioning Protocol — L0-L4 maturity
- Testing Platform — Trophy strategy, layer selection, Nx target structure
- Vitest — Primary test runner setup and examples
## Questions
- What's the cost of context-switching between building and fixing in the same worktree?
- If the fix stream consistently outpaces the build stream, what does that signal about spec quality?
- When should an issue graduate from "fix" to "new PRD feature"?
- How do you prevent the fix worktree from becoming a graveyard of LOW-severity cosmetic issues?