Validated Value Stories

Each story validates that an outcome delivers expected customer value — time saved per RFP, library growth, compound ROI. The flywheel only spins when the library has entries.

Action = human-initiated · Cron = scheduled · Hook = event-triggered

Does the flywheel start?

Library seeded, auto-fill compounds. The cold-start problem is solved.

S1 · Action
When

Answer library is empty and auto-fill returns nothing — the compounding promise is dead on arrival because there's no path from zero to first entry.

I need to

Import past bid Q&A pairs and approve them so the library has entries before the first auto-fill attempt.

So I get

Library seeded with 20+ approved answers in <2 hours vs never (no path existed).

Not

Library shows entries but none are approved. Auto-fill still returns nothing.
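
The seeding path above can be sketched as a small CSV importer. This is a minimal sketch under assumptions: the old bid Q&A pairs have been exported to a `question,answer` CSV, and `seed_library` and the `status` field are hypothetical names, not the product's actual API. Every row lands as a draft, because the counterfeit to avoid is a library full of entries that auto-fill cannot serve; seeding must be followed by SME approval.

```python
import csv

def seed_library(csv_path: str) -> list[dict]:
    """Import past bid Q&A pairs as drafts awaiting SME approval."""
    entries = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):  # assumed columns: question, answer
            entries.append({"question": row["question"].strip(),
                            "answer": row["answer"].strip(),
                            "status": "draft"})  # approved only by an SME
    return entries
```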

S2 · Action
When

New RFP lands with questions already answered in a previous bid — but the team starts from scratch because answers are trapped in old Word docs, not a queryable library.

I need to

Auto-fill new venture questions from approved library answers so each bid makes the next one faster.

So I get

Venture B response time <8 hours at 70% fill rate vs 20-40 hours from scratch.

Not

Auto-fill returns 0 matches when identical questions exist, or pulls from unapproved answers.
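
The auto-fill behavior, and both counterfeits above, can be pinned down in a few lines. This is a hedged sketch, not the product's matcher: `LibraryAnswer`, `auto_fill`, the token-overlap similarity, and the 0.6 threshold are all illustrative assumptions. The two properties it encodes are exactly the counterfeits: identical questions must match, and unapproved answers must never be served.

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class LibraryAnswer:
    question: str
    answer: str
    approved: bool  # set by the SME gate (S3); False = never served

def _similarity(a: str, b: str) -> float:
    """Token-overlap (Jaccard) stand-in for a real semantic matcher."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def auto_fill(question: str, library: list[LibraryAnswer],
              threshold: float = 0.6) -> LibraryAnswer | None:
    """Best approved match above the threshold, else None (leave blank)."""
    approved = [e for e in library if e.approved]  # unapproved never leak
    if not approved:
        return None
    best = max(approved, key=lambda e: _similarity(question, e.question))
    return best if _similarity(question, best.question) >= threshold else None
```

An identical question scores 1.0 and always matches, so "0 matches when identical questions exist" is impossible by construction.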

Does quality compound?

SME gate holds. Go/No-Go stops bad bids before wasted effort.

S3 · Action
When

AI generates a draft answer but it reads as generic. The team fears auto-filled content will embarrass them in front of the client and need so much editing that it isn't worth using.

I need to

Gate every answer behind SME approval so only human-reviewed answers enter the library.

So I get

SME rejection rate <30%. Every auto-filled answer was approved by a human first.

Not

AI-generated answers auto-approved without review. Library fills with generic content.
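
The gate is a tiny state machine. A minimal sketch, assuming three statuses and an explicit reviewer decision; `Status` and `review` are hypothetical names. The point it encodes is the counterfeit above: there is no code path from draft to approved without an SME decision.

```python
from enum import Enum

class Status(Enum):
    DRAFT = "draft"        # AI-generated, awaiting review
    APPROVED = "approved"  # SME signed off; eligible for auto-fill
    REJECTED = "rejected"  # never served

def review(status: Status, sme_approves: bool) -> Status:
    """Only an explicit SME decision moves a draft; no auto-approve path."""
    if status is not Status.DRAFT:
        return status  # decisions are final in this sketch
    return Status.APPROVED if sme_approves else Status.REJECTED
```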

S4 · Action
When

RFP deadline is 14 days out and the team is about to spend 20-40 hours on a bid they'll lose — no structured way to kill bad bids early.

I need to

Score 8 qualification criteria and get a Go/Review/No-Go recommendation before committing effort.

So I get

Stop pursuing unwinnable bids — save 20-40 hours per no-go decision.

Not

Score always computes 'Go' regardless of criteria. Form saves with fewer than 8 criteria scored.
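
The Go/Review/No-Go logic can be sketched directly against both counterfeits: the form must refuse to save with fewer than 8 criteria scored, and the recommendation must actually vary with the scores. The 1-5 scale and the 4.0 / 2.5 average thresholds are illustrative assumptions, not values from the spec.

```python
CRITERIA_COUNT = 8  # all 8 qualification criteria must be scored

def recommend(scores: dict[str, int]) -> str:
    """Return 'Go', 'Review', or 'No-Go' from 8 criteria scored 1-5."""
    if len(scores) != CRITERIA_COUNT:
        raise ValueError("all 8 criteria must be scored before saving")
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("each criterion is scored 1-5")
    avg = sum(scores.values()) / CRITERIA_COUNT
    if avg >= 4.0:       # strong fit: commit the 20-40 hours
        return "Go"
    if avg >= 2.5:       # marginal: escalate for a human call
        return "Review"
    return "No-Go"       # kill early, save the effort
```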

Does the ROI compound?

Analytics prove the value. Compound growth visible.

S5 · Hook
When

Three ventures completed but nobody can prove the library is compounding — fill rate, time saved, and growth are invisible so the tool looks like overhead.

I need to

Analytics dashboard showing library growth, per-venture fill rate, and compound ROI projection.

So I get

Proof of compound ROI: $1,150 saved per RFP × fill rate. Projection visible.

Not

Analytics shows 0 approved answers when library has entries. Time saved shows $0 when fill rate >0%.
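
The dashboard's time-saved figure follows from the formula in the story: $1,150 per fully auto-filled RFP, scaled by the actual fill rate. A minimal sketch; `time_saved_dollars` is a hypothetical name, and the only real inputs are the fill rate and RFP count. By construction it cannot show $0 while the fill rate is above 0%, which is the counterfeit above.

```python
def time_saved_dollars(fill_rate: float, rfp_count: int,
                       value_per_rfp: float = 1150.0) -> float:
    """Compound-ROI figure: $1,150 per RFP x fill rate x number of RFPs."""
    if not 0.0 <= fill_rate <= 1.0:
        raise ValueError("fill_rate is a fraction between 0 and 1")
    return value_per_rfp * fill_rate * rfp_count
```

At S2's target of a 70% fill rate, one RFP shows $1,150 × 0.7 saved; the projection is just this value over the expected RFP pipeline.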

Kill Signal

Ventures exist but no answers are approved = it's just a project board. Library gets 0 queries after 14 days = adoption failed.

Who this is for

Construction bid teams drowning in copy-paste. 20-40 hours per RFP, and 70% of that is spent finding answers that already exist somewhere. The hidden fear: "AI answers will be generic." The unlock: every auto-filled answer was approved by a human first.

Questions

Does each story prove value that compounds — or just value that ships once?

  • Which story has the weakest counterfeit — where could fake success slip through?
  • If the library stays empty after seeding, which story assumption was wrong?
  • S4 (Go/No-Go) doesn't compound answers — should it be in this PRD or the CRM?