Work Prioritisation
How do you decide what to build next — and when to stop?
Ship in fixed cycles. Test assumptions at each stage. Increase investment only as risk decreases. Fixed time, variable scope. Expect to be done.
Shaping Up
- Disagree then commit on quarterly objectives
- Two cycles per quarter's strategic mission
- Identify the hardest challenge and attack that first
- Six-week mark: tweak mission, review tactics
Stage Progression
| Stage | Time | Question | Output |
|---|---|---|---|
| Napkin | Minutes | Does this have obvious fatal flaws? | Core assumptions listed |
| Mock-up | 1-2 days | Does this solve the customer's problem? | Single use-case prototype |
| Market | 3-4 days | How big can this be? Why now? | Defensible rhetoric |
| Build | Weeks | Are there unseen fatal flaws? | Working prototype, cost estimates |
| Demand | Ongoing | Does retention exceed benchmarks? | Go-to-market strategy |
The Algorithm
Deterministic: any agent can run `node scripts/prioritise-prds.mjs --diff` and get the same table order. Scores (judgment) live in PRD frontmatter; the math lives in the script.
```
INPUT:
  prds[]        <- read from /phygital-mycelium/index.md Active table
  commissioning <- read from /phygital-mycelium/commissioning/index.md

FOR EACH prd IN prds:

  STEP 1 - SCORE PRIORITY (what deserves attention)
    Read the PRD. Score each dimension 1-5 using the rubrics below.
    priority = (pain * demand * edge * trend * conversion) ^ (1/5)

  STEP 2 - SCORE READINESS (can we build it now)
    Read commissioning status. Score each dimension 1-5 using the rubrics below.
    readiness = (0.20 * principles) + (0.25 * performance)
              + (0.20 * platform) + (0.15 * protocols) + (0.20 * players)

  STEP 3 - CHECK HARD GATES
    gate_pass = ALL of:
      [] Benchmarks defined (thresholds exist before build)
      [] Feature/Function/Outcome table exists (commissioning checklist)
      [] Named builder != named commissioner
      [] Kill date + kill criteria written (if build started)
      [] Baseline measurement exists (if claiming improvement)
    IF NOT gate_pass: status = HOLD

  STEP 4 - COMPUTE COMBINED SCORE
    combined = priority * readiness / 5

  STEP 5 - CHECK DEPENDENCIES
    blocked_by[] <- read PRD dependency declarations
    IF any blocked_by[].status == BLOCKED: promote blocker above this PRD

  STEP 6 - CHECK KILL DATE
    IF kill_date exists AND kill_date - today < 30 days:
      apply time_pressure bonus = +0.5 to combined

SORT prds BY combined DESC, applying:
  1. HOLD items sink to the bottom of their maturity tier
  2. Blockers promote above the items they block
  3. Kill-date items get the +0.5 bonus

OUTPUT:
  Sorted table -> becomes the Active table order in /phygital-mycelium/index.md
```
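Steps 1, 2, and 4 can be sketched in plain JavaScript. This is illustrative only, not taken from `scripts/prioritise-prds.mjs`; the function names and sample scores are assumptions:

```javascript
// Step 1: geometric mean of the five priority dimensions.
// Multiplicative scoring punishes any weak dimension: a single 1
// drags the whole score down, which is the point.
function priorityScore({ pain, demand, edge, trend, conversion }) {
  return (pain * demand * edge * trend * conversion) ** (1 / 5);
}

// Step 2: weighted sum of the five readiness dimensions.
function readinessScore({ principles, performance, platform, protocols, players }) {
  return 0.20 * principles + 0.25 * performance
       + 0.20 * platform + 0.15 * protocols + 0.20 * players;
}

// Step 4: dividing by 5 normalises readiness so that a perfect
// 5/5 readiness leaves priority unchanged.
function combinedScore(priority, readiness) {
  return (priority * readiness) / 5;
}

// Worked example with hypothetical scores:
const priority = priorityScore({ pain: 4, demand: 3, edge: 3, trend: 4, conversion: 2 });
const readiness = readinessScore({ principles: 4, performance: 3, platform: 4, protocols: 3, players: 3 });
console.log(combinedScore(priority, readiness).toFixed(2)); // → "2.11"
```

Note the asymmetry: priority uses a geometric mean (weak evidence cannot be averaged away), while readiness uses a weighted arithmetic mean (a gap in one P can be partially offset by strength elsewhere).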
This follows the Tight Five sequence for business execution:
- Principles: is value transformation and distribution explicit?
- Performance: are benchmarks and drift triggers defined?
- Platform: can we ship on existing operational base?
- Protocols: does this create reusable know-how?
- Players: is demand real and are the right people onboard?
Priority Rubrics
Priority: (Pain x Demand x Edge x Trend x Conversion) ^ (1/5)
| Score | Pain | Demand | Edge | Trend | Conversion |
|---|---|---|---|---|---|
| 5 | System blocked or user locked out | Every downstream capability depends on this | No competitor has this; patentable | Category-defining shift happening now | Paying customers exist |
| 4 | >10hrs/week wasted, validated by user | JTBD validated by interviews or usage data | Competitors exist but miss key insight | Strong tailwind, 2-3yr window | Demo-ready, prospects engaged |
| 3 | Pain acknowledged but workarounds exist | Internal hypothesis with supporting signals | Some differentiation, not unique | Growing category, crowded | Prototype exists, no revenue signal |
| 2 | Mild friction, low urgency | Assumed need, no evidence | Commodity with cosmetic differences | Stable or slow growth | Spec only, no build |
| 1 | No evidence of pain | No evidence of demand | No differentiation | Declining or unknown | No evidence anyone would use this |
Readiness Rubrics
Readiness: 0.20(Principles) + 0.25(Performance) + 0.20(Platform) + 0.15(Protocols) + 0.20(Players)
Performance carries the highest weight because unmeasured work drifts. Principles, Platform, and Players are weighted equally as core constraints. Protocols still matter, but only after value, measurement, and demand are clear.
| Score | Principles | Performance | Platform | Protocols | Players |
|---|---|---|---|---|---|
| 5 | Value loop + margin logic explicit | Kill criteria + benchmarks + review cadence all defined | Ships on existing stack with zero new infra | Creates reusable SOP adopted by 2+ capabilities | 5+ real interviews or equivalent evidence |
| 4 | Value loop clear, margin estimated | Kill criteria + benchmarks defined, review TBD | Ships with minor additions to existing stack | Creates reusable SOP, not yet adopted | 2-4 interviews or strong usage signals |
| 3 | Value proposition stated, loop unclear | Some benchmarks defined, gaps remain | Needs 1 new service or integration | Follows existing SOP, doesn't create new | Internal user validated |
| 2 | Problem stated, no value loop | Kill criteria exist, no benchmarks | Needs multiple new services | No SOP connection | Hypothesis only, no validation |
| 1 | Problem unclear | No measurement defined | Needs net-new platform | No protocol value | No evidence of demand |
Hard Gates
If any gate fails, priority is HOLD regardless of score.
| Gate | Required Evidence | Result if Missing |
|---|---|---|
| Benchmark Gate | Thresholds defined from Benchmark Standards | HOLD |
| Commissioning Gate | Feature/Function/Outcome table with pass/fail checks | HOLD |
| Owner Gate | Named builder + named commissioner (different people) | HOLD |
| Kill Gate | Explicit kill date and kill criteria | HOLD |
| Data Gate | Baseline measurement exists before change | HOLD |
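A sketch of the gate check in JavaScript. The field names (`benchmarksDefined`, `commissioningTable`, etc.) are assumptions for illustration, not the script's actual schema:

```javascript
// Hard gates: every gate must pass or the PRD is HOLD, regardless of score.
function checkGates(prd) {
  const failures = [];
  if (!prd.benchmarksDefined) failures.push("Benchmark Gate");
  if (!prd.commissioningTable) failures.push("Commissioning Gate");
  // Builder and commissioner must both exist and be different people.
  if (!prd.builder || !prd.commissioner || prd.builder === prd.commissioner)
    failures.push("Owner Gate");
  if (!prd.killDate || !prd.killCriteria) failures.push("Kill Gate");
  // Data Gate only applies when the PRD claims an improvement over a baseline.
  if (prd.claimsImprovement && !prd.baseline) failures.push("Data Gate");
  return { pass: failures.length === 0, failures };
}
```

Returning the list of failures (rather than a bare boolean) makes the HOLD reason auditable in `gate_reason`.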
Decision Bands
| Score Band | Decision | Action |
|---|---|---|
| >= 4.0 | Ship Now | Allocate current cycle |
| 3.0-3.9 | Prepare | Close missing dependencies, then rescore |
| 2.0-2.9 | Incubate | Keep in backlog, gather evidence |
| < 2.0 | Stop | Drop or rewrite problem framing |
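The bands above map directly to a threshold function (a sketch; the function name is illustrative):

```javascript
// Map a combined score to a decision band, per the table above.
function decisionBand(score) {
  if (score >= 4.0) return "Ship Now";
  if (score >= 3.0) return "Prepare";
  if (score >= 2.0) return "Incubate";
  return "Stop";
}
```

Since combined = priority × readiness / 5 maxes out at 5, "Ship Now" requires both high priority and high readiness; neither alone can clear 4.0.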
Dependencies
```
Identity & Access  <-- BLOCKS EVERYTHING
|
+-- Agent Platform  <-- blocks all agent instances
|     +-- Sales Dev Agent
|     +-- Content Amplifier
|     +-- [future agents]
|
+-- Sales CRM & RFP  <-- blocks sales chain
|     -> Sales Dev Agent -> Sales Process Opt -> Trust Commerce
|
+-- ETL Data Tool  <-- blocks data access
      -> Data Interface
```
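Step 5's blocker promotion can be sketched as a post-sort pass. This is illustrative and assumes the dependency graph has no cycles (a cycle would loop forever):

```javascript
// After sorting by score, hoist any blocker that sits below a PRD it blocks.
function promoteBlockers(sorted) {
  const out = [...sorted];
  let moved = true;
  while (moved) {
    moved = false;
    for (let i = 0; i < out.length; i++) {
      for (const slug of out[i].blockedBy ?? []) {
        const bi = out.findIndex(p => p.slug === slug);
        if (bi > i) {
          // Blocker ranks below the PRD it blocks: move it directly above.
          const [blocker] = out.splice(bi, 1);
          out.splice(i, 0, blocker);
          moved = true;
        }
      }
    }
  }
  return out;
}
```

This preserves score order everywhere except where a dependency inversion forces a swap, which matches the intent: a high-score PRD cannot ship before its blocker anyway.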
Data flow across the sales chain:
| From | Provides | To |
|---|---|---|
| CRM | contacts, deals, activities, pipeline state | Dev Agent |
| Dev Agent | scored prospects, outreach history, channel metrics | SPO |
| SPO | qualification proof, satisfaction outcomes, relationship data | Trust Commerce |
| Trust Commerce | on-chain reputation, referral conversion | -- |
Duplication Check
```
FOR EACH pair (prd_a, prd_b) IN prds:
  overlap = count(shared components in composition inventory)
  total   = max(components_a, components_b)
  IF overlap / total > 0.50:
    CHECK: do they declare a Parent/Component/Peer relationship?
    CHECK: do shared components use different verbs? (own/use/compose/orchestrate)
    IF no relationship declared: FLAG as hierarchy signal
    IF same verb for same component: FLAG as duplication
```
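The pairwise check can be sketched as follows. The shape of the PRD objects (`slug`, `related`, `components` mapping component name to verb) is an assumption for illustration:

```javascript
// Flag a PRD pair when component overlap exceeds 50% without an
// explanatory relationship, or when the same component carries the
// same verb in both PRDs (true duplication).
function duplicationFlags(a, b) {
  const shared = Object.keys(a.components).filter(c => c in b.components);
  const total = Math.max(Object.keys(a.components).length,
                         Object.keys(b.components).length);
  const flags = [];
  if (shared.length / total > 0.5) {
    const related = a.related?.includes(b.slug) || b.related?.includes(a.slug);
    if (!related) flags.push("hierarchy signal");
    if (shared.some(c => a.components[c] === b.components[c]))
      flags.push("duplication");
  }
  return flags;
}
```

High overlap with distinct verbs and a declared relationship (the sales-trio case below) produces no flags: that is hierarchy, not duplication.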
Current Run (2026-02-24)
Sales trio overlap: 9 of 13 components shared (69%). Not duplication — three zoom levels with distinct verbs:
| Component | CRM | SDA | SPO |
|---|---|---|---|
| CRM Contacts + Deals | own | use | use |
| CRM Activities + Tasks | own | use | use |
| Sales Forecasting algo | wire | compose | orchestrate |
| Explore-Exploit algo | wire | compose | orchestrate |
| SPCL Scoring algo | wire | compose | orchestrate |
| Compound Rate Tracking | wire | compose | orchestrate |
| Email Adapter | -- | own | orchestrate |
| LinkedIn Adapter | -- | own | orchestrate |
| Document Intelligence | own | compose | orchestrate |
Flags:
| Issue | PRDs | Fix |
|---|---|---|
| Identical switch trigger (verbatim) | SDA + SPO | SPO trigger should derive from SDA output metrics, not copy SDA's input trigger |
| Ambiguous workflow ownership | Agent Platform + Content Amplifier | Content Amplifier owns marketing workflow wiring; Agent Platform owns orchestration infra |
| Unidirectional relationship declarations | Sales chain | Each PRD should declare upstream AND downstream |
Result: No duplication. Hierarchy correctly structured.
Drift Control
Run this every week for active PRDs:
- Recompute Priority Score
- Evaluate benchmark state: Pass / Warn / Fail
- Trigger action:
- Pass: continue or promote standard
- Warn: corrective loop and retest in 7 days
- Fail: freeze rollout, rollback, or kill
- Log the decision and evidence in the commissioning dashboard
Rescore Triggers
| Trigger | Action |
|---|---|
| PRD moves maturity level (L0->L1, etc.) | Rescore that PRD |
| Kill date within 14 days | Force rescore + apply +0.5 bonus |
| Blocker clears | Rescore everything it blocked |
| Weekly (active PRDs only) | Rescore + verify index order |
| Combined score drops below 0.5 | Candidate for kill |
| Kill date passes with zero north star progress | Kill |
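The event-driven triggers (all except the weekly cron) can be evaluated from PRD state in one pass. A sketch with assumed field names:

```javascript
const DAY = 86_400_000; // ms per day

// Return every rescore trigger that fires for a PRD on a given date.
function rescoreTriggers(prd, today = new Date()) {
  const fired = [];
  if (prd.maturityChanged) fired.push("maturity change");
  if (prd.blockerCleared) fired.push("blocker cleared");
  if (prd.killDate) {
    const daysLeft = (new Date(prd.killDate) - today) / DAY;
    if (daysLeft < 14) fired.push("kill date within 14 days");
    // A passed kill date with zero north-star progress is an automatic kill.
    if (daysLeft < 0 && prd.northStarProgress === 0) fired.push("kill");
  }
  if (prd.combined < 0.5) fired.push("kill candidate");
  return fired;
}
```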
Frontmatter Schema
Every PRD stores these fields. The script reads them; the score-prds skill writes them.
```yaml
# Priority (judgment - assigned by LLM/human using calibration examples)
priority_pain: 0        # 1-5
priority_demand: 0      # 1-5
priority_edge: 0        # 1-5
priority_trend: 0       # 1-5
priority_conversion: 0  # 1-5

# Readiness (judgment - assigned by LLM/human using calibration examples)
readiness_principles: 0   # 1-5
readiness_performance: 0  # 1-5
readiness_platform: 0     # 1-5
readiness_protocols: 0    # 1-5
readiness_players: 0      # 1-5

# Metadata
kill_date: ""    # YYYY-MM-DD if applicable
blocked_by: ""   # comma-separated prd slugs
gate_status: ""  # BLOCKED if hard-blocked
gate_reason: ""  # why blocked
last_scored: ""  # YYYY-MM-DD
```
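Pulling the scored fields out of parsed frontmatter is mechanical. A sketch that assumes the frontmatter has already been parsed to a plain object (e.g. by a YAML frontmatter parser); the function name is illustrative:

```javascript
// Extract the ten judgment scores plus metadata from a frontmatter object.
// Missing or non-numeric fields default to 0, which correctly zeroes the
// geometric-mean priority for unscored PRDs.
function extractScores(fm) {
  const pick = (prefix, keys) =>
    Object.fromEntries(keys.map(k => [k, Number(fm[`${prefix}_${k}`]) || 0]));
  return {
    priority: pick("priority", ["pain", "demand", "edge", "trend", "conversion"]),
    readiness: pick("readiness", ["principles", "performance", "platform", "protocols", "players"]),
    killDate: fm.kill_date || null,
    blockedBy: fm.blocked_by ? fm.blocked_by.split(",").map(s => s.trim()) : [],
  };
}
```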
Computed Outputs
| Output | Formula |
|---|---|
| priority | (pain x demand x edge x trend x conversion) ^ (1/5) |
| readiness | 0.20(principles) + 0.25(performance) + 0.20(platform) + 0.15(protocols) + 0.20(players) |
| combined | priority x readiness / 5 |
| kill_bonus | +0.5 if kill_date within 30 days |
| dep_bonus | +0.1 per PRD that declares blocked_by this PRD |
| final | combined + kill_bonus + dep_bonus |
| Gate checks | BLOCKED, NO PLAYERS (players=1), KILL DATE PASSED |
| Sort | final descending, scored before unscored |
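The bonus and sort stages can be sketched as follows (illustrative; not the actual script):

```javascript
const DAY = 86_400_000; // ms per day

// final = combined + kill_bonus + dep_bonus, per the table above.
function finalScore(prd, allPrds, today = new Date()) {
  let final = prd.combined;
  // kill_bonus: +0.5 when the kill date is inside 30 days.
  if (prd.killDate && (new Date(prd.killDate) - today) / DAY < 30) final += 0.5;
  // dep_bonus: +0.1 for each PRD that declares this one as a blocker.
  final += 0.1 * allPrds.filter(p => p.blockedBy?.includes(prd.slug)).length;
  return final;
}

// Sort: final descending, scored PRDs before unscored ones.
function sortPrds(prds) {
  return [...prds].sort((a, b) => (b.scored - a.scored) || (b.final - a.final));
}
```

The dep_bonus means foundational PRDs (Identity & Access, Agent Platform) accumulate score from everything that waits on them, which pushes the dependency tree's roots toward the top even before blocker promotion kicks in.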
```shell
node scripts/prioritise-prds.mjs          # ranked table
node scripts/prioritise-prds.mjs --diff   # compare against index order
node scripts/prioritise-prds.mjs --json   # machine-readable output
```
Success Metrics
| Type | What to Measure |
|---|---|
| Quantitative | Retention, unit economics, CAC/LTV |
| Qualitative | How users feel if product disappeared |
| Ethical | Does it serve genuine needs? |
Stress plus rest creates growth: calibrate the size of each challenge so teams oscillate between load and recovery, sustaining peak performance over time.
PRD with AI
A Product Requirements Document translates the job to be done into what gets built. When working with AI:
- Share the PRD and tackle one step at a time — don't overload context
- Test each feature before moving to the next
- Use TDD — let the AI test its own code
- Ask AI to explain the architecture so you understand what was built
Be less specific in some areas to allow creative solutions. Start over when things go wrong rather than patching complexity.
Context
- Phygital Mycelium — The capability catalogue (table order = build order)
- Commissioning Dashboard — What's specified, built, proven
- Commissioning Protocol — L0-L4 maturity model, gates, process
- Jobs To Be Done — Discover what to build
- Validate Demand — Prove people will pay
- Priorities — The priority algorithm (Pain x Demand x Edge x Trend x Conversion)
- Tight Five Platform — The 5P readiness framework
- Benchmark Standards — Trigger-based benchmark protocol
- Business Development — From idea to plan
- Business Instruments — Control loops at each stage
- Manufacturing — Production systems thinking
- Flow Engineering — Mapping value streams
Links
- Lenny Rachitsky — PRD Templates — The starting point most PMs use
Questions
How do you decide what to build next — and when to stop?
- When the algorithm says "Ship Now" but your gut says "Not ready" — which do you trust, and what signal would change your mind?
- Is the 5P readiness weighting right — should Performance really outweigh Principles?
- What happens when two PRDs have identical scores but competing dependencies?
- At what point does drift control become overhead that slows down the thing it's meant to protect?