Pipeline Nowcast

Are we on track, drifting, or in trouble — and are our predictions proving right?

Two jobs, one algorithm. The nowcast synthesizes five operational signals into a composite variance score. The prediction evidence system instruments our thesis predictions with live data so conviction scores update from evidence, not opinion.
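The composite described above can be sketched in a few lines. The five signal names, their weights, and the seven-day half-life below are illustrative assumptions, not the production values.

```python
# Sketch of a composite variance score: normalize each signal, decay it
# by age, then take a weighted average. All names and weights are
# hypothetical -- the document does not specify the five signals.
WEIGHTS = {"build": 0.3, "deploy": 0.2, "tests": 0.2,
           "latency": 0.15, "incidents": 0.15}

def normalize(value, lo, hi):
    """Clamp a raw signal reading into [0, 1] variance units."""
    return min(max((value - lo) / (hi - lo), 0.0), 1.0)

def decay(age_days, half_life_days=7.0):
    """Exponential decay: a week-old reading counts half as much (assumed half-life)."""
    return 0.5 ** (age_days / half_life_days)

def composite(signals):
    """signals: {name: (normalized_variance, age_days)} -> weighted composite in [0, 1]."""
    num = den = 0.0
    for name, (variance, age_days) in signals.items():
        w = WEIGHTS[name] * decay(age_days)
        num += w * variance
        den += w
    return num / den if den else None  # None when there is no signal coverage

score = composite({
    "build": (0.1, 0.0), "deploy": (0.2, 1.0), "tests": (0.0, 0.5),
    "latency": (0.9, 3.0), "incidents": (0.1, 2.0),
})
```

Note the `None` return on empty coverage: a composite with no inputs is a gap in the data, not an "on track" reading.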

The Gap

76 predictions in a database. Conviction scores assigned once. Zero evidence feeds updating them. The Bayesian protocol exists on paper — no automated triggers. Meanwhile, the commissioning dashboard tracks 74 features at L0-L4 with no timestamps on state transitions, so velocity is invisible.
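An automated trigger for the Bayesian protocol could look something like this minimal sketch. The 1-5 score-to-probability mapping and the likelihood ratios are illustrative assumptions, not the protocol on paper.

```python
# One Bayesian update step for a conviction score, sketched.
# Assumed mapping: score s on a 1-5 scale -> prior probability s / 6,
# so 3/5 corresponds to even odds. This mapping is an assumption.

def update_conviction(score_1_to_5, likelihood_ratio):
    """Convert conviction to odds, apply the evidence's likelihood
    ratio (LR > 1 supports, LR < 1 undermines), convert back."""
    p = score_1_to_5 / 6.0
    odds = p / (1.0 - p)
    odds *= likelihood_ratio
    p = odds / (1.0 + odds)
    return p * 6.0  # back onto the 1-5 scale

# Supporting evidence (LR = 2) nudges a 4/5 prediction upward;
# contradicting evidence (LR = 0.5) pulls it back down.
raised = update_conviction(4.0, 2.0)
lowered = update_conviction(4.0, 0.5)
```

Wiring evidence feeds to a function like this is what turns "assigned once" into scores that actually move.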

The nowcast closes both gaps: operational variance detection (are we on track today?) and prediction validation (are our bets on the future proving right?). Same normalization, same decay, same composite scoring. Different time horizons.
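One way to read "same decay, different time horizons" is a single decay kernel parameterized by half-life. The specific half-lives below are assumptions for illustration:

```python
# Same kernel, two horizons (half-life values are assumptions):
OPERATIONAL_HALF_LIFE_DAYS = 7.0    # nowcast: this week's signals dominate
PREDICTION_HALF_LIFE_DAYS = 180.0   # evidence: months-old data still counts

def decay(age_days, half_life_days):
    return 0.5 ** (age_days / half_life_days)
```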

Questions

What happens when the algorithm disagrees with your gut — and what happens when your predictions disagree with the data?

  • If the composite score says "on track" but one signal is critical, is the weighting wrong or is the signal unimportant?
  • At what signal coverage does confidence become actionable — and what do you do below that threshold?
  • When the nowcast catches drift three days early, what process fires that wouldn't have fired otherwise?
  • If a prediction you scored 4/5 has zero evidence entries after 6 months, is the prediction wrong or is the collection broken?
  • What's the minimum evidence that justifies changing a conviction score — one data point or a trend?
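The last two questions imply an explicit gate. As a sketch of one possible answer, a score change could require both a signal-coverage floor and a short evidence trend rather than a single data point; the thresholds here are illustrative, not policy:

```python
# Hypothetical actionability gate: act only when coverage is adequate
# AND there is a trend of evidence, not a lone data point.
def actionable(coverage, n_evidence, min_coverage=0.6, min_evidence=3):
    """coverage: fraction of signals reporting, in [0, 1].
    n_evidence: count of evidence entries for the prediction."""
    return coverage >= min_coverage and n_evidence >= min_evidence
```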