
Work Prioritisation

How do you decide what to build next — and when to stop?

Ship in fixed cycles. Test assumptions at each stage. Increase investment only as risk decreases. Fixed time, variable scope. Expect to be done.

Shaping Up

  • Disagree then commit on quarterly objectives
  • Two six-week cycles per quarterly strategic mission
  • Identify the hardest challenge and attack that first
  • Six-week mark: tweak mission, review tactics

Stage Progression

| Stage | Time | Question | Output |
| --- | --- | --- | --- |
| Napkin | Minutes | Does this have obvious fatal flaws? | Core assumptions listed |
| Mock-up | 1-2 days | Does this solve the customer's problem? | Single use-case prototype |
| Market | 3-4 days | How big can this be? Why now? | Defensible rhetoric |
| Build | Weeks | Are there unseen fatal flaws? | Working prototype, cost estimates |
| Demand | Ongoing | Does retention exceed benchmarks? | Go-to-market strategy |

The Algorithm

Deterministic: any agent can run `node scripts/prioritise-prds.mjs --diff` and get the same table order. Scores (judgment) live in PRD frontmatter; the math lives in the script.

INPUT:
  prds[]        <- read from /phygital-mycelium/index.md Active table
  commissioning <- read from /phygital-mycelium/commissioning/index.md

FOR EACH prd IN prds:

  STEP 1 - SCORE PRIORITY (what deserves attention)
    Read the PRD. Score each dimension 1-5 using rubrics below.
    priority = (pain * demand * edge * trend * conversion) ^ (1/5)

  STEP 2 - SCORE READINESS (can we build it now)
    Read commissioning status. Score each dimension 1-5 using rubrics below.
    readiness = (0.20 * principles) + (0.25 * performance)
              + (0.20 * platform) + (0.15 * protocols) + (0.20 * players)

  STEP 3 - CHECK HARD GATES
    gate_pass = ALL of:
      [ ] Benchmarks defined (thresholds exist before build)
      [ ] Feature/Function/Outcome table exists (commissioning checklist)
      [ ] Named builder != named commissioner
      [ ] Kill date + kill criteria written (if build started)
      [ ] Baseline measurement exists (if claiming improvement)
    IF NOT gate_pass: status = HOLD

  STEP 4 - COMPUTE COMBINED SCORE
    combined = priority * readiness / 5

  STEP 5 - CHECK DEPENDENCIES
    blocked_by[] <- read PRD dependency declarations
    IF any blocked_by[].status == BLOCKED: promote blocker above this PRD

  STEP 6 - CHECK KILL DATE
    IF kill_date exists AND kill_date - today < 30 days:
      apply time_pressure bonus = +0.5 to combined

SORT prds BY combined DESC, applying:
  1. HOLD items sink to the bottom of their maturity tier
  2. Blockers promote above the items they block
  3. Kill-date items get the +0.5 bonus

OUTPUT:
  Sorted table -> becomes Active table order in /phygital-mycelium/index.md
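The scoring steps above (1, 2 and 4) can be sketched as plain functions. This is a minimal illustration, assuming the five priority and five readiness scores have already been read from frontmatter; it is not the actual `prioritise-prds.mjs` source.

```javascript
// Geometric mean: one weak dimension drags the whole priority down -
// which is the point. You cannot average your way past a 1.
function priority(p) {
  return Math.pow(p.pain * p.demand * p.edge * p.trend * p.conversion, 1 / 5);
}

// Weighted arithmetic mean: Performance carries the highest weight.
function readiness(r) {
  return 0.20 * r.principles + 0.25 * r.performance +
         0.20 * r.platform + 0.15 * r.protocols + 0.20 * r.players;
}

// Both inputs sit on a 1-5 scale; dividing by 5 keeps combined on it too.
function combined(p, r) {
  return priority(p) * readiness(r) / 5;
}
```

The geometric mean is the design choice to notice: a PRD scoring 5 on four dimensions and 1 on the fifth lands well below a PRD scoring 3 across the board.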

This follows the Tight Five sequence for business execution:

  1. Principles: are value transformation and distribution explicit?
  2. Performance: are benchmarks and drift triggers defined?
  3. Platform: can we ship on existing operational base?
  4. Protocols: does this create reusable know-how?
  5. Players: is demand real and are the right people onboard?

Priority Rubrics

Priority: (Pain x Demand x Edge x Trend x Conversion) ^ (1/5)

| Score | Pain | Demand | Edge | Trend | Conversion |
| --- | --- | --- | --- | --- | --- |
| 5 | System blocked or user locked out | Every downstream capability depends on this | No competitor has this; patentable | Category-defining shift happening now | Paying customers exist |
| 4 | >10hrs/week wasted, validated by user | JTBD validated by interviews or usage data | Competitors exist but miss key insight | Strong tailwind, 2-3yr window | Demo-ready, prospects engaged |
| 3 | Pain acknowledged but workarounds exist | Internal hypothesis with supporting signals | Some differentiation, not unique | Growing category, crowded | Prototype exists, no revenue signal |
| 2 | Mild friction, low urgency | Assumed need, no evidence | Commodity with cosmetic differences | Stable or slow growth | Spec only, no build |
| 1 | No evidence of pain | No evidence of demand | No differentiation | Declining or unknown | No evidence anyone would use this |

Readiness Rubrics

Readiness: 0.20(Principles) + 0.25(Performance) + 0.20(Platform) + 0.15(Protocols) + 0.20(Players)

Performance carries the highest weight because unmeasured work drifts. Principles, Platform, and Players are equally weighted core constraints. Protocols still matter, but only after value, measurement, and demand are clear.

| Score | Principles | Performance | Platform | Protocols | Players |
| --- | --- | --- | --- | --- | --- |
| 5 | Value loop + margin logic explicit | Kill criteria + benchmarks + review cadence all defined | Ships on existing stack with zero new infra | Creates reusable SOP adopted by 2+ capabilities | 5+ real interviews or equivalent evidence |
| 4 | Value loop clear, margin estimated | Kill criteria + benchmarks defined, review TBD | Ships with minor additions to existing stack | Creates reusable SOP, not yet adopted | 2-4 interviews or strong usage signals |
| 3 | Value proposition stated, loop unclear | Some benchmarks defined, gaps remain | Needs 1 new service or integration | Follows existing SOP, doesn't create new | Internal user validated |
| 2 | Problem stated, no value loop | Kill criteria exist, no benchmarks | Needs multiple new services | No SOP connection | Hypothesis only, no validation |
| 1 | Problem unclear | No measurement defined | Needs net-new platform | No protocol value | No evidence of demand |

Hard Gates

If any gate fails, status is HOLD regardless of score.

| Gate | Required Evidence | Result if Missing |
| --- | --- | --- |
| Benchmark Gate | Thresholds defined from Benchmark Standards | HOLD |
| Commissioning Gate | Feature/Function/Outcome table with pass/fail checks | HOLD |
| Owner Gate | Named builder + named commissioner (different people) | HOLD |
| Kill Gate | Explicit kill date and kill criteria | HOLD |
| Data Gate | Baseline measurement exists before change | HOLD |

Decision Bands

| Score Band | Decision | Action |
| --- | --- | --- |
| >= 4.0 | Ship Now | Allocate current cycle |
| 3.0-3.9 | Prepare | Close missing dependencies, then rescore |
| 2.0-2.9 | Incubate | Keep in backlog, gather evidence |
| < 2.0 | Stop | Drop or rewrite problem framing |
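The bands above reduce to one branching function. Names and cutoffs come straight from the table; a minimal sketch:

```javascript
// Map a final score to its decision band. Bands are half-open:
// 3.0-3.9 means [3.0, 4.0) because scores are real-valued, not rounded.
function decisionBand(score) {
  if (score >= 4.0) return "Ship Now";
  if (score >= 3.0) return "Prepare";
  if (score >= 2.0) return "Incubate";
  return "Stop";
}
```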

Dependencies

Identity & Access  <-- BLOCKS EVERYTHING
|
+-- Agent Platform  <-- blocks all agent instances
|     +-- Sales Dev Agent
|     +-- Content Amplifier
|     +-- [future agents]
|
+-- Sales CRM & RFP  <-- blocks sales chain
|     -> Sales Dev Agent -> Sales Process Opt -> Trust Commerce
|
+-- ETL Data Tool  <-- blocks data access
      -> Data Interface

Data flow along the sales chain:

| From | Provides | To |
| --- | --- | --- |
| CRM | contacts, deals, activities, pipeline state | Dev Agent |
| Dev Agent | scored prospects, outreach history, channel metrics | SPO |
| SPO | qualification proof, satisfaction outcomes, relationship data | Trust Commerce |
| Trust Commerce | on-chain reputation, referral conversion | -- |

Duplication Check

FOR EACH pair (prd_a, prd_b) IN prds:
  overlap = count(shared components in composition inventory)
  total   = max(components_a, components_b)
  IF overlap / total > 0.50:
    CHECK: do they declare a Parent/Component/Peer relationship?
    CHECK: do shared components use different verbs? (own/use/compose/orchestrate)
    IF no relationship declared:     FLAG as hierarchy signal
    IF same verb for same component: FLAG as duplication
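A sketch of that pairwise check. The shapes here are assumptions: each PRD is modelled as `{ slug, relationshipTo, components }` where components are `{ name, verb }` entries from the composition inventory; the 0.50 threshold and the verb vocabulary come from the pseudocode above.

```javascript
// Compare two PRDs' component inventories and return any flags raised.
function duplicationFlags(a, b) {
  const verbsA = new Map(a.components.map(c => [c.name, c.verb]));
  const shared = b.components.filter(c => verbsA.has(c.name));
  const total = Math.max(a.components.length, b.components.length);
  const flags = [];
  if (shared.length / total > 0.50) {
    // No declared Parent/Component/Peer relationship -> hierarchy signal.
    if (!a.relationshipTo?.[b.slug]) flags.push("hierarchy signal");
    // Same verb on the same shared component -> duplication.
    if (shared.some(c => verbsA.get(c.name) === c.verb)) flags.push("duplication");
  }
  return flags;
}
```

Two PRDs sharing most components with *different* verbs (own/use, wire/compose) raise no flags; that is the "zoom levels, not duplicates" case in the current run below.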

Current Run (2026-02-24)

Sales trio overlap: 9 of 13 components shared (69%). Not duplication — three zoom levels with distinct verbs:

| Component | CRM | SDA | SPO |
| --- | --- | --- | --- |
| CRM Contacts + Deals | own | use | use |
| CRM Activities + Tasks | own | use | use |
| Sales Forecasting algo | wire | compose | orchestrate |
| Explore-Exploit algo | wire | compose | orchestrate |
| SPCL Scoring algo | wire | compose | orchestrate |
| Compound Rate Tracking | wire | compose | orchestrate |
| Email Adapter | -- | own | orchestrate |
| LinkedIn Adapter | -- | own | orchestrate |
| Document Intelligence | own | compose | orchestrate |

Flags:

| Issue | PRDs | Fix |
| --- | --- | --- |
| Identical switch trigger (verbatim) | SDA + SPO | SPO trigger should derive from SDA output metrics, not copy SDA's input trigger |
| Ambiguous workflow ownership | Agent Platform + Content Amplifier | Content Amplifier owns marketing workflow wiring; Agent Platform owns orchestration infra |
| Unidirectional relationship declarations | Sales chain | Each PRD should declare upstream AND downstream |

Result: No duplication. Hierarchy correctly structured.

Drift Control

Run this every week for active PRDs:

  1. Recompute Priority Score
  2. Evaluate benchmark state: Pass / Warn / Fail
  3. Trigger action:
    • Pass: continue or promote standard
    • Warn: corrective loop and retest in 7 days
    • Fail: freeze rollout, rollback, or kill
  4. Log the decision and evidence in the commissioning dashboard
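Step 3 of the weekly loop is a fixed mapping from benchmark state to action, which a script can enforce rather than leave to memory. A minimal sketch, with action strings taken from the list above:

```javascript
// Map a weekly benchmark evaluation to its required action.
// Throwing on unknown states keeps a typo from silently passing.
function driftAction(state) {
  switch (state) {
    case "Pass": return "continue or promote standard";
    case "Warn": return "corrective loop and retest in 7 days";
    case "Fail": return "freeze rollout, rollback, or kill";
    default: throw new Error(`unknown benchmark state: ${state}`);
  }
}
```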

Rescore Triggers

| Trigger | Action |
| --- | --- |
| PRD moves maturity level (L0->L1, etc.) | Rescore that PRD |
| Kill date within 14 days | Force rescore + apply +0.5 bonus |
| Blocker clears | Rescore everything it blocked |
| Weekly (active PRDs only) | Rescore + verify index order |
| Combined score drops below 0.5 | Candidate for kill |
| Kill date passes with zero north star progress | Kill |
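The date- and score-based triggers above are mechanical enough to automate. A sketch, assuming PRD objects carry the frontmatter fields; `north_star_progress` is a hypothetical field standing in for "zero north star progress":

```javascript
const DAY_MS = 24 * 60 * 60 * 1000;

// Return the automatic trigger actions that fire for one PRD today.
function rescoreTriggers(prd, today = new Date()) {
  const triggers = [];
  if (prd.kill_date) {
    const daysLeft = (new Date(prd.kill_date) - today) / DAY_MS;
    if (daysLeft < 0) {
      // Kill date has passed: kill only if nothing moved the north star.
      if (prd.north_star_progress === 0) triggers.push("Kill");
    } else if (daysLeft <= 14) {
      triggers.push("Force rescore + apply +0.5 bonus");
    }
  }
  if (prd.combined < 0.5) triggers.push("Candidate for kill");
  return triggers;
}
```

Maturity moves and blocker clears are event-driven rather than date-driven, so they are left to the weekly run.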

Frontmatter Schema

Every PRD stores these fields. The script reads them; the score-prds skill writes them.

# Priority (judgment - assigned by LLM/human using calibration examples)
priority_pain: 0 # 1-5
priority_demand: 0 # 1-5
priority_edge: 0 # 1-5
priority_trend: 0 # 1-5
priority_conversion: 0 # 1-5

# Readiness (judgment - assigned by LLM/human using calibration examples)
readiness_principles: 0 # 1-5
readiness_performance: 0 # 1-5
readiness_platform: 0 # 1-5
readiness_protocols: 0 # 1-5
readiness_players: 0 # 1-5

# Metadata
kill_date: "" # YYYY-MM-DD if applicable
blocked_by: "" # comma-separated prd slugs
gate_status: "" # BLOCKED if hard-blocked
gate_reason: "" # why blocked
last_scored: "" # YYYY-MM-DD
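Since every field is a flat `key: value` pair, reading the schema back out needs no full YAML parser. A minimal sketch of a line-based parse; the real script may well use a YAML library instead, and the `---` delimiters are assumed:

```javascript
// Extract flat key: value fields from a PRD's YAML frontmatter block.
// Values come back as strings; comments after # are dropped.
function parseFrontmatter(text) {
  const match = text.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return {};
  const fields = {};
  for (const line of match[1].split("\n")) {
    const m = line.match(/^(\w+):\s*("?)([^"#]*)\2/);
    if (m) fields[m[1]] = m[3].trim();
  }
  return fields;
}
```

Usage would look like `parseFrontmatter(fs.readFileSync(path, "utf8")).kill_date`.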

Computed Outputs

| Output | Formula |
| --- | --- |
| priority | (pain x demand x edge x trend x conversion) ^ (1/5) |
| readiness | 0.20(principles) + 0.25(performance) + 0.20(platform) + 0.15(protocols) + 0.20(players) |
| combined | priority x readiness / 5 |
| kill_bonus | +0.5 if kill_date within 30 days |
| dep_bonus | +0.1 per PRD that declares blocked_by this PRD |
| final | combined + kill_bonus + dep_bonus |
| Gate checks | BLOCKED, NO PLAYERS (players=1), KILL DATE PASSED |
| Sort | final descending, scored before unscored |
node scripts/prioritise-prds.mjs        # ranked table
node scripts/prioritise-prds.mjs --diff # compare against index order
node scripts/prioritise-prds.mjs --json # machine-readable output
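The bonus and final rows of the table can be sketched as follows. Field names follow the frontmatter schema and `combined` is assumed precomputed; this is illustrative, not the script's source.

```javascript
const MS_PER_DAY = 24 * 60 * 60 * 1000;

// +0.5 time-pressure bonus when a kill date is less than 30 days out.
function killBonus(killDate, today) {
  if (!killDate) return 0;
  const daysLeft = (new Date(killDate) - today) / MS_PER_DAY;
  return daysLeft < 30 ? 0.5 : 0;
}

// +0.1 per PRD that declares this slug in its blocked_by list.
function depBonus(slug, prds) {
  return 0.1 * prds.filter(p =>
    (p.blocked_by || "").split(",").map(s => s.trim()).includes(slug)
  ).length;
}

// final = combined + kill_bonus + dep_bonus
function finalScore(prd, prds, today = new Date()) {
  return prd.combined + killBonus(prd.kill_date, today) + depBonus(prd.slug, prds);
}
```

The dep_bonus is what keeps blockers floating upward: a platform PRD that three others declare as a blocker picks up +0.3 before sorting.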

Success Metrics

| Type | What to Measure |
| --- | --- |
| Quantitative | Retention, unit economics, CAC/LTV |
| Qualitative | How users would feel if the product disappeared |
| Ethical | Does it serve genuine needs? |

Stress plus rest creates growth: calibrate the level of challenge to manage oscillations between load and recovery, sustaining peak performance over time.

PRD with AI

A Product Requirements Document translates the job to be done into what gets built. When working with AI:

  1. Share the PRD and tackle one step at a time — don't overload context
  2. Test each feature before moving to the next
  3. Use TDD — let the AI test its own code
  4. Ask AI to explain the architecture so you understand what was built

Be less specific in some areas to allow creative solutions. Start over when things go wrong rather than patching complexity.

Context

Questions

How do you decide what to build next — and when to stop?

  • When the algorithm says "Ship Now" but your gut says "Not ready" — which do you trust, and what signal would change your mind?
  • Is the 5P readiness weighting right — should Performance really outweigh Principles?
  • What happens when two PRDs have identical scores but competing dependencies?
  • At what point does drift control become overhead that slows down the thing it's meant to protect?