Outcome Map

Desired Outcome

Composite nowcast running on production data. Status classification (on_track / warning / critical) matches reality within one business day. Drift detected before a human notices.
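The three status bands can be sketched as a simple threshold map. This is a minimal illustration, not the production classifier; the 0.7 / 0.4 cutoffs are assumed values, since the real thresholds are still an open question (see Questions below).

```typescript
// Illustrative sketch: map a 0-1 composite score to a status band.
// The 0.7 and 0.4 thresholds are assumptions, not production values.
type NowcastStatus = "on_track" | "warning" | "critical";

function classifyScore(score: number): NowcastStatus {
  if (score >= 0.7) return "on_track";
  if (score >= 0.4) return "warning";
  return "critical";
}
```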

Contributing Factors

| Factor | Required For | Owner |
| --- | --- | --- |
| CRM pipeline data populated | Pipeline signal | Engineering (done) |
| Activity logging active | Activity velocity signal | Engineering (done) |
| Agent comms flowing | Agent velocity signal | Engineering (done) |
| Commissioning L-levels tracked | Commissioning signal | Dream team (done) |
| Forecast baselines set | Variance computation | Dream team (gap) |
| Prediction schemas populated | Calibration signal | Both (gap) |

Obstacles

| Obstacle | Type | Impact |
| --- | --- | --- |
| No forecast baselines exist | Data gap | Cannot compute variance without a target |
| Sparse signals early (cold start) | Algorithm | Confidence score unreliable below 3 signals |
| Signal scales differ (dollars vs counts vs levels) | Normalization | Apples-to-oranges without 0-1 normalization |
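The normalization obstacle can be addressed with min-max scaling to [0, 1] before signals are combined. A minimal sketch, assuming each signal has a known expected range (the ranges in the usage comments are hypothetical, not real baselines):

```typescript
// Min-max scale a raw signal value into [0, 1] so dollar amounts,
// activity counts, and commissioning levels share one scale.
// Values outside the expected range are clamped rather than rejected.
function normalize(value: number, min: number, max: number): number {
  if (max === min) return 0; // degenerate range guard
  const scaled = (value - min) / (max - min);
  return Math.min(1, Math.max(0, scaled));
}

// e.g. pipeline dollars against a hypothetical $1M range:
// normalize(500_000, 0, 1_000_000) -> 0.5
```

Clamping (rather than letting values exceed 1) is a design choice: it keeps one outlier signal from dominating the composite, at the cost of losing information above the cap.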

Investigations

| # | Question | Method | Owner |
| --- | --- | --- | --- |
| 1 | What pipeline coverage ratio indicates a healthy pipeline? | Industry benchmarks + internal targets | Dream team |
| 2 | What activity velocity per deal is sustainable? | Historical data from CRM seed | Engineering |
| 3 | How fast do signals decay in relevance? | Test 7, 14, 28-day half-lives | Engineering |
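Investigation #3 can be run by weighting each signal with an exponential half-life and comparing the 7, 14, and 28-day candidates against held-out outcomes. A sketch of the decay weight (the function name is illustrative):

```typescript
// Exponential decay: a signal's weight halves every halfLifeDays.
// At age 0 the weight is 1; at one half-life it is 0.5; at two, 0.25.
function decayWeight(ageDays: number, halfLifeDays: number): number {
  return Math.pow(0.5, ageDays / halfLifeDays);
}
```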

Success Measures

| Measure | Binary Test |
| --- | --- |
| Composite score produced | calculateNowcast() returns a valid NowcastResult |
| Classification accurate | Status matches manual assessment for 5 consecutive days |
| Drift detected early | Warning fires before weekly review catches the same issue |
| Confidence meaningful | Score below 0.5 when fewer than 3 signals present |
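The measures above imply a shape for `calculateNowcast()`. This is a hypothetical sketch, not the production code: the `Signal` model, the averaging, and the confidence ramp are all assumptions. The divisor of 6 assumes full confidence when all six contributing factors from the map are reporting, which also satisfies the last measure (fewer than 3 signals yields confidence below 0.5).

```typescript
// Hypothetical sketch of the calculateNowcast() contract implied by
// the success measures. Assumes signal values are already 0-1 normalized.
type NowcastStatus = "on_track" | "warning" | "critical";

interface Signal {
  name: string;
  value01: number; // normalized to [0, 1]
}

interface NowcastResult {
  score: number;       // composite 0-1 score
  status: NowcastStatus;
  confidence: number;  // 0-1, driven by signal count
}

function calculateNowcast(signals: Signal[]): NowcastResult {
  const n = signals.length;
  // Composite score: plain average of normalized signals (assumed weighting).
  const score = n === 0 ? 0 : signals.reduce((s, x) => s + x.value01, 0) / n;
  // Confidence ramps with signal count; 6 = contributing factors in the map.
  // With fewer than 3 signals this stays below 0.5, per the success measure.
  const confidence = Math.min(1, n / 6);
  // Thresholds are illustrative assumptions, not production values.
  const status: NowcastStatus =
    score >= 0.7 ? "on_track" : score >= 0.4 ? "warning" : "critical";
  return { score, status, confidence };
}
```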

Questions

  • Which outcome in the pipeline nowcast map is the leading indicator — the one that reliably predicts all others?

  • Which outcome is hardest to measure but most important to track — and what proxy metric comes closest?
  • If the primary outcome were achieved but secondary outcomes lagged, what would that signal about the system?
  • At what threshold does each outcome shift from "in progress" to "proven" — and who decides that threshold?