Nav Continuity Layer: Value Stream Map
VALUE STREAM MAP
════════════════════════════════════════════════════════════
[SESSION START]
│
▼
┌───────────────────────┐
│ 1. Start AI Agent │ C/T: 10s Wait: 0s %C&A: 100%
└────┬──────────────────┘
│ ░░░░░░░ (wait for human to gather context)
▼
┌───────────────────────┐
│ 2. Find/Read Docs │ C/T: 120s Wait: 30s %C&A: 80%
└────┬──────────────────┘
│ ░░░░░░░░░░░░░░░░ (bottleneck: human synthesis)
▼
┌───────────────────────┐
│ 3. Write/Paste Brief │ C/T: 60s Wait: 60s %C&A: 60%
└────┬──────────────────┘
│
▼
[PRODUCTIVE MESSAGE 1]
Flow Efficiency = Cycle Time (190s) / Lead Time (280s) x 100 = ~68%
*Note: Actual wait/context-gathering time varies widely; when context is lost, efficiency drops well below this figure.*
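As a check, the efficiency figure can be reproduced directly from the C/T and Wait values in the boxes above:

```python
# Flow efficiency for the three-step session start mapped above.
cycle_times = {"start_agent": 10, "find_read_docs": 120, "write_brief": 60}  # C/T, seconds
wait_times = {"start_agent": 0, "find_read_docs": 30, "write_brief": 60}     # Wait, seconds

cycle_time = sum(cycle_times.values())               # 190 s of value-adding work
lead_time = cycle_time + sum(wait_times.values())    # 280 s, terminal open to productive message 1
flow_efficiency = cycle_time / lead_time * 100
print(f"Cycle: {cycle_time}s  Lead: {lead_time}s  Flow efficiency: {flow_efficiency:.0f}%")
# → Cycle: 190s  Lead: 280s  Flow efficiency: 68%
```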
════════════════════════════════════════════════════════════
Metrics
- Cycle Time: 3-5 mins currently for a human to gather context. Target: under 5s via scripted context compilation.
- Wait Time: Human context switching. Target: 0s.
- Lead Time: Time from terminal open to first productive action. Target: under 30s.
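The under-5s compilation target above implies replacing step 2 (Find/Read Docs) with a script that assembles the brief mechanically. A minimal sketch, assuming a hypothetical `docs/STATE.md` state file and a git history as the two context sources; neither path reflects the project's actual layout:

```python
import pathlib
import subprocess
import time


def compile_context(repo: str = ".") -> str:
    """Assemble a session brief from repo state, replacing manual doc-gathering.

    docs/STATE.md is a hypothetical placeholder for wherever current
    state actually lives in the repo.
    """
    t0 = time.monotonic()
    parts = []
    state = pathlib.Path(repo, "docs", "STATE.md")
    if state.exists():
        parts.append(state.read_text())
    try:
        # Recent commits summarize "current state" without human synthesis.
        log = subprocess.run(
            ["git", "-C", repo, "log", "--oneline", "-5"],
            capture_output=True, text=True,
        ).stdout
    except FileNotFoundError:
        log = ""  # git not installed; the brief still compiles
    if log:
        parts.append("Recent commits:\n" + log)
    brief = "\n\n".join(parts)
    elapsed = time.monotonic() - t0
    assert elapsed < 5, f"compile took {elapsed:.1f}s, target is under 5s"
    return brief
```

Because the script only reads files and shells out to git, its runtime is effectively I/O-bound and comfortably under the 5s target, which collapses both the 120s C/T and the 30s wait of step 2.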
Waste Categorization
- Motion: Context switching between repos and issue logs to find current state.
- Over-processing: Rewriting the same context for the agent every single session.
- Rework: Agent hallucinations caused by missing context, leading to corrective loops.
Questions
- Which single bottleneck, if removed first, creates the biggest lead-time drop?
- What metric drift would signal this map is no longer accurate?