
Pipeline Nowcast Spec

How do you know the factory is on track before the weekly meeting tells you it isn't?

Build Contract

| # | Feature | Function | Outcome | Job | State |
|---|---------|----------|---------|-----|-------|
| 1 | Signal Collectors (5x) | Extract raw measurements from each source table | All 5 sources feeding one algorithm | Collect | Gap |
| 2 | Signal Normalization | Normalize each signal to 0-1 scale per type | Apples-to-apples comparison | Normalize | Gap |
| 3 | Exponential Decay | Weight signals by recency with configurable half-life | Recent signals matter more | Weight | Gap |
| 4 | Variance Computation | Per-signal (actual - forecast) / forecast | Know which signal is drifting | Measure | Gap |
| 5 | Composite Scoring | Weighted sum with configurable signal weights | Single number: are we on track? | Compose | Gap |
| 6 | Classification | Map composite to on_track / warning / critical | Status matches varianceStatus enum | Classify | Gap |
| 7 | Confidence Calculation | Score based on signal coverage and freshness | Trust the number or gather more data | Calibrate | Gap |
| 8 | Recommendations Engine | Top risk, top momentum, action list | Know what to do next, not just status | Advise | Gap |
| 9 | Constants Registration | PIPELINE_NOWCAST block in algorithm-constants.ts | All thresholds tunable without code | Configure | Gap |
| 10 | Forecast Baselines | Target values per signal type (config) | Variance has something to compare to | Baseline | Gap |

Principles

What truths constrain the design?

The Job

| Element | Detail |
|---------|--------|
| Situation | Five signal systems built independently. Each has a dashboard. None connect. |
| Intention | A single composite score answering "are we on track, drifting, or in trouble?" |
| Obstacle | No algorithm normalizes, weights, and composes these signals with recency decay. |

Why Now

All five signals have production sources (CRM supplies both pipeline and activity):

  • CRM: 5 deals, 10 activities, $1.2M pipeline (commissioning 2026-03-02)
  • Agent comms: Convex event stream with typed messages
  • Commissioning: 20 features at L0-L3
  • Predictions: schemas deployed, tables empty

The data exists. The synthesis doesn't. Every commissioning session (45 min manual) could be a 2-second algorithm call.

Design Constraints

| Constraint | Rationale |
|------------|-----------|
| Pure function, no side effects | Testable, composable, matches algorithm framework |
| All thresholds in constants file | Tunable without code changes |
| Minimum 3 signals for confidence > 0.5 | Prevents false confidence from sparse data |
| Match existing varianceStatus enum | Interop with prediction schemas |
| Exponential decay, not linear | Recent signals should dominate, old signals fade smoothly |
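The exponential-vs-linear constraint is easy to see numerically. A minimal sketch, assuming the 14-day half-life from the constants and an arbitrary 28-day linear cutoff for comparison:

```typescript
// Sketch: exponential vs linear recency weighting. Half-life and the
// linear cutoff are assumed values for illustration.
const HALF_LIFE_DAYS = 14;

// Exponential: weight halves every half-life, never reaches zero.
const expWeight = (ageDays: number): number =>
  Math.exp((-Math.LN2 * ageDays) / HALF_LIFE_DAYS);

// Linear: weight hits zero at an arbitrary cutoff (28 days here).
const linWeight = (ageDays: number): number => Math.max(0, 1 - ageDays / 28);

for (const age of [0, 14, 28, 56]) {
  console.log(`${age}d: exp=${expWeight(age).toFixed(3)} lin=${linWeight(age).toFixed(3)}`);
}
// expWeight(14) = 0.5, expWeight(56) = 0.0625: a 56-day-old signal still
// carries ~6% weight, while linear weighting discards everything past 28 days.
```

This is why old signals "fade smoothly" under the chosen scheme instead of vanishing at an arbitrary boundary.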

Refusal Spec

| Category | Condition | Response |
|----------|-----------|----------|
| Insufficient signals | No signals available | Return confidence: 0, status: critical, reasoning: "No signals" |
| Stale data | All signals older than 2x half-life | Return confidence below 0.3, flag staleness |
| Invalid forecasts | Forecast value is 0 or negative | Skip signal, reduce confidence, log warning |
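The cold-start refusal can sit as a guard at the top of the algorithm. A hedged sketch: the return shape abbreviates NowcastResult, and the helper name and SignalReading type are illustrative, not part of the spec:

```typescript
// Sketch of the cold-start refusal: no signals means no nowcast.
// SignalReading and refuseIfInsufficient are hypothetical names.
interface SignalReading {
  name: string;
  score: number; // 0-1 normalized
  ageDays: number;
}

function refuseIfInsufficient(signals: SignalReading[]) {
  if (signals.length === 0) {
    return {
      composite: 0,
      status: "critical" as const,
      confidence: 0,
      reasoning: ["No signals"],
    };
  }
  return null; // caller proceeds with the full computation
}

console.log(refuseIfInsufficient([]));
```

Returning a fully-formed degraded result (rather than throwing) keeps the function pure and lets callers render the refusal like any other nowcast.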

Performance

How do we know it's working?

Priority Score

| Dimension | Score | Evidence |
|-----------|-------|----------|
| Pain | 4 | 45 min/session manual synthesis across 5 dashboards. No composite view. Drift invisible between meetings. |
| Demand | 3 | Internal demand (dream team commissioning). No external customer yet. |
| Edge | 4 | Proprietary signal combination (CRM + agent comms + commissioning state). Nobody else has this data shape. |
| Trend | 4 | Nowcasting is the pattern in AI operations. Real-time variance > batch forecasting. |
| Conversion | 2 | Internal tool first. Path to customer-facing via BOaaS later. |
| Composite | 384 | 4 x 3 x 4 x 4 x 2. Strong candidate. Edge and trend validated, conversion needs customers. |

Quality Targets

| Metric | Target | Method |
|--------|--------|--------|
| Execution time | <500ms for 5 signals | Benchmark test |
| Classification accuracy | Matches manual assessment 5 consecutive days | Human comparison |
| Signal coverage confidence | Degrades gracefully below 3 signals | Unit test with partial inputs |

Eval Strategy

| What | How | When |
|------|-----|------|
| Classification accuracy | Compare nowcast status vs dream team manual assessment | Daily for first 2 weeks |
| Signal freshness | Check timestamp of newest signal per type | Every run |
| Threshold calibration | Review false positive/negative rate | Weekly for first month |

Kill signal: Nowcast status disagrees with manual assessment for 5 consecutive days after 2-week calibration period. Algorithm is wrong or signals are wrong.


Platform

What do we control?

Current State

| Component | Built | Wired | Working | Notes |
|-----------|-------|-------|---------|-------|
| Algorithm framework | Yes | Yes | Yes | libs/agency/src/lib/algorithms/ |
| AlgorithmMetadata pattern | Yes | Yes | Yes | Standard export |
| algorithm-constants.ts | Yes | Yes | Yes | Extensible |
| CRM data (Drizzle) | Yes | Yes | Yes | 5 deals, 10 activities |
| Convex agent messages | Yes | Yes | Yes | HTTP client working |
| Commissioning state | Yes | Partial | Partial | Manual markdown, needs parser |
| Prediction schemas | Yes | No | No | Tables empty |
| Signal normalization | No | No | No | Build |
| Exponential decay | No | No | No | Build |
| Composite scoring | No | No | No | Build |
| Insights UI | Yes | No | No | Components exist, no data |

Build Ratio

~70% composition, ~30% new code.

Algorithm Interface

Input

```typescript
interface NowcastInput {
  pipeline: {
    deals: Array<{ amount: number; probability: number; stage: string; closeDate: string }>;
    targetCoverage: number; // default 3.0x
  };
  activity: {
    activities: Array<{ type: string; outcome: string; startDate: string }>;
    targetPerDealPerWeek: number; // default 2
    dealCount: number;
  };
  agentVelocity: {
    messages: Array<{ type: string; createdAt: number }>;
    plansCompleted: number;
    blockersOpen: number;
  };
  commissioning: {
    features: Array<{ name: string; currentLevel: number; forecastLevel: number }>;
  };
  predictions: {
    entries: Array<{ confidenceScore: number; accuracyScore: number | null; status: string }>;
  };
  config?: {
    weights?: Partial<SignalWeights>;
    decayHalfLifeDays?: number;
    thresholds?: { onTrack?: number; warning?: number };
  };
}
```

Output

```typescript
interface NowcastResult {
  result: {
    composite: number; // 0-1
    status: "on_track" | "warning" | "critical";
    signals: NowcastSignal[]; // per-signal breakdown
    topRisk: string; // highest-variance signal name
    topMomentum: string; // most-improving signal name
  };
  metadata: {
    algorithm: "pipeline-nowcast";
    version: string;
    executionTimeMs: number;
    signalCount: number;
  };
  reasoning: string[];
  confidence: number; // 0-1 based on signal coverage
  recommendations: {
    status: "on_track" | "warning" | "critical";
    actions: string[];
    signalsNeedingAttention: string[];
  };
}

interface NowcastSignal {
  name: string;
  score: number; // 0-1 normalized
  weight: number; // configured weight
  variance: number; // (actual - forecast) / forecast
  trend: "improving" | "stable" | "declining";
  freshness: number; // 0-1 decay factor
}
```

Constants

```typescript
export const PIPELINE_NOWCAST = {
  PIPELINE_WEIGHT: 0.35,
  ACTIVITY_WEIGHT: 0.25,
  AGENT_VELOCITY_WEIGHT: 0.15,
  COMMISSIONING_WEIGHT: 0.15,
  PREDICTION_WEIGHT: 0.1,
  DECAY_HALF_LIFE_DAYS: 14,
  ON_TRACK_THRESHOLD: 0.7,
  WARNING_THRESHOLD: 0.4,
  MIN_SIGNALS_FOR_CONFIDENCE: 3,
  TARGET_PIPELINE_COVERAGE: 3.0,
  TARGET_ACTIVITY_PER_DEAL_WEEK: 2,
} as const;
```
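Per-run config overrides from NowcastInput can be merged over these defaults without mutating the registry. A sketch, with the relevant constants copied locally for self-containment and `resolveConfig` as an illustrative helper name:

```typescript
// Sketch: resolve per-run config, where caller overrides win over the
// constants registry. DEFAULTS mirrors a subset of PIPELINE_NOWCAST.
const DEFAULTS = {
  DECAY_HALF_LIFE_DAYS: 14,
  ON_TRACK_THRESHOLD: 0.7,
  WARNING_THRESHOLD: 0.4,
} as const;

interface ResolvedConfig {
  decayHalfLifeDays: number;
  thresholds: { onTrack: number; warning: number };
}

function resolveConfig(overrides?: {
  decayHalfLifeDays?: number;
  thresholds?: { onTrack?: number; warning?: number };
}): ResolvedConfig {
  return {
    decayHalfLifeDays: overrides?.decayHalfLifeDays ?? DEFAULTS.DECAY_HALF_LIFE_DAYS,
    thresholds: {
      onTrack: overrides?.thresholds?.onTrack ?? DEFAULTS.ON_TRACK_THRESHOLD,
      warning: overrides?.thresholds?.warning ?? DEFAULTS.WARNING_THRESHOLD,
    },
  };
}

console.log(resolveConfig({ thresholds: { warning: 0.5 } }));
```

Nullish coalescing (`??`) rather than `||` matters here: a caller passing a legitimate `0` threshold should not be silently replaced by the default.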

Core Algorithm

1. For each signal type with data:
   a. Normalize raw measurement to 0-1 scale
   b. Apply exponential decay: factor = exp(-0.693 * ageDays / halfLife)
   c. Compute variance: (actual - forecast) / forecast
   d. Record trend from last 3 measurements

2. Compute confidence:
   confidence = signalsPresent / totalSignals * avgFreshness

3. Compute composite:
   composite = sum(signal.score * signal.weight * signal.freshness)
             / sum(signal.weight * signal.freshness)

4. Classify:
   >= 0.7 → on_track
   >= 0.4 → warning
   < 0.4  → critical

5. Generate recommendations:
   topRisk = signal with lowest score
   topMomentum = signal with best trend
   actions = per-signal actionable suggestions
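Steps 1b through 4 can be sketched end to end. This is a minimal illustration, not the production calculateNowcast: the signal values are hypothetical and the Signal type abbreviates NowcastSignal:

```typescript
// Sketch of steps 1b-4: freshness-weighted composite and classification.
// Signal values below are hypothetical; thresholds mirror the constants.
interface Signal {
  name: string;
  score: number;  // 0-1 normalized (step 1a)
  weight: number; // configured weight
  ageDays: number;
}

const HALF_LIFE = 14;
// Step 1b: exponential decay factor (Math.LN2 ≈ 0.693).
const freshness = (ageDays: number) => Math.exp((-Math.LN2 * ageDays) / HALF_LIFE);

function compose(signals: Signal[], totalSignals = 5) {
  // Step 3: freshness-weighted average, so stale signals influence less.
  let num = 0;
  let den = 0;
  let freshSum = 0;
  for (const s of signals) {
    const f = freshness(s.ageDays);
    num += s.score * s.weight * f;
    den += s.weight * f;
    freshSum += f;
  }
  const composite = den > 0 ? num / den : 0;

  // Step 2: coverage x average freshness.
  const confidence =
    (signals.length / totalSignals) * (signals.length ? freshSum / signals.length : 0);

  // Step 4: threshold classification.
  const status = composite >= 0.7 ? "on_track" : composite >= 0.4 ? "warning" : "critical";

  return { composite, confidence, status };
}

const result = compose([
  { name: "pipeline", score: 0.9, weight: 0.35, ageDays: 1 },
  { name: "activity", score: 0.6, weight: 0.25, ageDays: 7 },
  { name: "agentVelocity", score: 0.8, weight: 0.15, ageDays: 2 },
]);
console.log(result.status, result.composite.toFixed(2), result.confidence.toFixed(2));
```

Note that with only 3 of 5 signals present and one of them a week old, confidence lands near 0.5 even though the composite classifies as on_track; the two numbers answer different questions.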

Protocols

How does the system coordinate?

Build Order

| Sprint | Features | What | Effort | Acceptance |
|--------|----------|------|--------|------------|
| S0 | #9, #10 | Constants + forecast baselines | 0.5d | Constants in file, baselines documented |
| S1 | #1, #2, #3 | Signal collectors + normalization + decay | 2d | Each collector returns normalized 0-1 with decay |
| S2 | #4, #5, #6, #7 | Variance + composite + classification + confidence | 2d | calculateNowcast() returns valid NowcastResult with 5 mock signals |
| S3 | #8 | Recommendations engine | 1d | topRisk, topMomentum, actions populated from signal breakdown |
| S4 | - | Wire to production data sources | 1d | Real signals flowing, composite rendered in Insights |

Commissioning

| # | Feature | Install | Test | Operational | Optimize |
|---|---------|---------|------|-------------|----------|
| 1 | Signal Collectors (5x) | - | - | - | - |
| 2 | Signal Normalization | - | - | - | - |
| 3 | Exponential Decay | - | - | - | - |
| 4 | Variance Computation | - | - | - | - |
| 5 | Composite Scoring | - | - | - | - |
| 6 | Classification | - | - | - | - |
| 7 | Confidence Calculation | - | - | - | - |
| 8 | Recommendations Engine | - | - | - | - |
| 9 | Constants Registration | - | - | - | - |
| 10 | Forecast Baselines | - | - | - | - |

Agent-Facing Spec

Commands: pnpm test -- --filter=pipeline-nowcast, pnpm tc

Boundaries:

  • Always: pure function, no DB writes, no side effects
  • Ask first: threshold changes, weight adjustments
  • Never: modify signal source data, bypass confidence check

Test Contract:

| # | Feature | Test File | Assertion |
|---|---------|-----------|-----------|
| 1 | Signal collectors | pipeline-nowcast.test.ts | Each collector returns 0-1 from valid input |
| 2 | Normalization | pipeline-nowcast.test.ts | Edge cases: zero, negative, very large values |
| 3 | Exponential decay | pipeline-nowcast.test.ts | 14-day-old signal at 50% weight |
| 4 | Composite scoring | pipeline-nowcast.test.ts | 5 signals produce weighted sum |
| 5 | Classification | pipeline-nowcast.test.ts | Boundary: 0.7 on_track, 0.4 warning |
| 6 | Confidence | pipeline-nowcast.test.ts | 2 of 5 signals = confidence < 0.5 |
| 7 | Recommendations | pipeline-nowcast.test.ts | topRisk = lowest-scoring signal |
| 8 | Cold start | pipeline-nowcast.test.ts | 0 signals = confidence 0, status critical |
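Rows 3 and 5 of the contract pin exact numeric boundaries. A sketch of what those assertions look like, using plain checks rather than the project's test runner; decay and classify are illustrative stand-ins for the real implementations:

```typescript
// Sketch of test-contract rows 3 and 5. The helpers below are stand-ins,
// not the production functions.
const decay = (ageDays: number, halfLife = 14) =>
  Math.exp((-Math.LN2 * ageDays) / halfLife);

const classify = (composite: number) =>
  composite >= 0.7 ? "on_track" : composite >= 0.4 ? "warning" : "critical";

// Row 3: a 14-day-old signal sits at exactly 50% weight.
if (Math.abs(decay(14) - 0.5) > 1e-9) throw new Error("half-life boundary");

// Row 5: thresholds are inclusive at the boundary.
if (classify(0.7) !== "on_track") throw new Error("0.7 should classify on_track");
if (classify(0.4) !== "warning") throw new Error("0.4 should classify warning");
if (classify(0.39) !== "critical") throw new Error("below 0.4 should classify critical");

console.log("boundary checks passed");
```

Pinning inclusivity at exactly 0.7 and 0.4 matters because the spec's classification step uses `>=`; a test that only probes 0.71 and 0.41 would let an off-by-one comparison slip through.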

Players

Who creates harmony?

Job 1: Know If We're On Track

| Element | Detail |
|---------|--------|
| Struggling moment | Weekly commissioning session: 45 min checking 5 dashboards, forming a mental picture |
| Workaround | Manual synthesis, gut feel, "seems fine" until it isn't |
| Progress | Glance at one composite score, see which signal is drifting, act on the recommendation |
| Hidden objection | "A single number can't capture this complexity" |
| Switch trigger | Missed a regression that was visible in the data 3 days earlier |

Features that serve this job: #5, #6, #8

Job 2: Detect Drift Early

| Element | Detail |
|---------|--------|
| Struggling moment | Problem compounds silently between reviews. Activity drops, nobody notices for a week. |
| Workaround | Hope someone checks. Rely on agent posting to #meta. |
| Progress | Nowcast fires warning when activity velocity drops below threshold, before the weekly review |
| Hidden objection | "False alarms are worse than no alarms" |
| Switch trigger | A deal went cold because nobody noticed zero activity for 10 days |

Features that serve this job: #3, #4, #7


Relationship to Other PRDs

| PRD | Relationship | Data Flow |
|-----|--------------|-----------|
| Sales CRM & RFP | Peer | Pipeline + activity signals flow IN to nowcast |
| Agent Platform | Peer | Agent velocity signals flow IN from Convex |
| ETL Data Tool | Peer (upstream) | Depends on data flowing correctly |
| Sales Process Optimisation | Peer (downstream) | Nowcast output could feed SPO decision engine |

Context