Pipeline Nowcast
One number. Five signals. No surprises.
1. Principles
Variance is the measurement.
- Five signal systems, each working, none connected
- Variance = (actual - forecast) / forecast. No forecast = no measurement.
- Pure function. No side effects. Testable.
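A minimal sketch of the variance rule as a pure function (names are illustrative, not from the codebase):

```typescript
// Variance = (actual - forecast) / forecast — a pure function with no
// side effects, so it is trivially testable in isolation.
// A forecast of 0 (or none) yields null: no forecast, no measurement.
function variance(actual: number, forecast: number): number | null {
  if (forecast === 0) return null;
  return (actual - forecast) / forecast;
}
```

Returning `null` instead of throwing keeps the function total: callers must handle the "no measurement" case explicitly rather than catching exceptions.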
| Problem | Question | Decision |
|---|---|---|
| Five dashboards, one mental model | Can an algorithm synthesize faster than a human? | Build composite nowcast |
*Visual: gauge dashboard — five needles converging to one composite reading, green/amber/red zones*
2. Performance
45 min to 2 seconds.
| Metric | Target | Now |
|---|---|---|
| Synthesis time | <2s | 45 min |
| Classification accuracy | Correct 5 consecutive days | N/A |
| Signal coverage | 5/5 signals | 0/5 wired |
| Confidence threshold | >0.5 for action | N/A |
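The confidence threshold in the table can be sketched as a simple action gate (the `Nowcast` shape and `shouldAct` name are hypothetical, not from the codebase):

```typescript
// Hypothetical action gate: a composite score only drives a
// recommendation when confidence clears the threshold from the table.
interface Nowcast {
  score: number;      // composite 0-1 reading
  confidence: number; // how much signal coverage backs the score
}

function shouldAct(n: Nowcast, threshold = 0.5): boolean {
  return n.confidence > threshold; // strictly above, per ">0.5 for action"
}
```

Below the threshold the nowcast still reports a score; it just withholds the recommendation.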
| Problem | Question | Decision |
|---|---|---|
| Drift invisible between reviews | Can sub-second nowcast catch what weekly meetings miss? | Wire all 5 signals |
*Visual: split clock — left side shows the 45-minute manual review, right side the 2-second algorithm output*
3. Platform
70% composed, 30% new.
| Layer | Status | Notes |
|---|---|---|
| Algorithm framework | Done | Proven patterns |
| CRM data access | Done | 5 deals, 10 activities |
| Agent comms (Convex) | Done | HTTP client working |
| Signal normalization | Build | 0-1 per type |
| Composite scoring | Build | Weighted sum |
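The two "Build" layers can be sketched together, assuming min/max scaling per signal and equal placeholder weights (both assumptions to be replaced by real signal definitions):

```typescript
// A raw signal reading plus the range it is scaled against.
interface Signal {
  value: number;
  min: number;
  max: number;
}

// Clamp-and-scale a raw reading into [0, 1] — one normalizer per type.
function normalize(s: Signal): number {
  if (s.max === s.min) return 0; // degenerate range: no information
  const x = (s.value - s.min) / (s.max - s.min);
  return Math.min(1, Math.max(0, x));
}

// Composite scoring: weighted sum of normalized signals.
// Weights are expected to sum to 1 so the composite stays in [0, 1].
function composite(normed: number[], weights: number[]): number {
  return normed.reduce((acc, v, i) => acc + v * weights[i], 0);
}
```

Clamping means out-of-range readings saturate at 0 or 1 instead of distorting the composite.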
| Problem | Question | Decision |
|---|---|---|
| Signals exist, synthesis doesn't | What's the minimum new code to connect them? | 30% new, 70% compose |
*Visual: architecture — algorithm framework box with five signal pipes feeding into the nowcast engine*
4. Protocols
Constants first, code second.
| Sprint | What | Effort | Acceptance |
|---|---|---|---|
| S0 | Constants + baselines | 0.5d | Config in file |
| S1 | Collectors + normalization | 2d | 0-1 per signal |
| S2 | Composite + classification | 2d | Valid NowcastResult |
| S3 | Recommendations | 1d | topRisk populated |
| Problem | Question | Decision |
|---|---|---|
| Thresholds are opinions until tested | Which defaults let us start learning? | Ship conservative, tune with data |
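A sketch of what "constants first" and "ship conservative" could look like in the S0 config file — every name and value below is a placeholder assumption, to be tuned against historical signals:

```typescript
// S0: thresholds live in one config object so tuning is a data change,
// not a code change. All values are conservative placeholders.
const NOWCAST_CONFIG = {
  // Equal weights until historical data says otherwise.
  signalWeights: {
    pipeline: 0.2,
    activity: 0.2,
    velocity: 0.2,
    coverage: 0.2,
    variance: 0.2,
  },
  // Composite < green → green zone; < amber → amber; else red.
  classification: { green: 0.33, amber: 0.66 },
  // Below this confidence, report the score but never recommend action.
  minConfidence: 0.5,
} as const;
```

Keeping the defaults in one object makes the inevitable threshold arguments a diff on data, not a code review.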
*Visual: build sequence — S0 constants, S1 collectors, S2 composite, S3 recommendations, S4 wire*
5. Players
One score beats five dashboards.
| Job | Struggling Moment | Hidden Objection |
|---|---|---|
| Know if on track | 45 min across 5 dashboards | "A single number can't capture this" |
| Detect drift early | Problem compounds silently | "False alarms worse than none" |
| Problem | Question | Decision |
|---|---|---|
| A missed regression was visible in the data 3 days earlier | Would a nowcast have caught it? | Test on historical signals |
*Visual: team member glancing at the composite gauge instead of tabbing through five separate views*
Five Questions
- Which signal is drifting and who owns it?
- Is 45 minutes of manual synthesis worth automating?
- What confidence level justifies acting on this score?
- Which threshold will we argue about first?
- What breaks when the algorithm disagrees with gut feel?