RFP Engine
Compounding answer library — answer once, auto-fill forever, win rate climbs.
Problem
RFP lands. Three weeks to respond. Half the questions were already answered in previous bids, yet the team copy-pastes from old Word docs. 20-40 hours per response; roughly 70% of it is duplicate effort.
Solution
An answer library that compounds: write an answer once, approve it, auto-fill it next time. Each bid makes the next one faster. The Dot Grid of bids: not a full RFP platform, just the compounding part.
Chicken-and-egg: auto-fill needs library entries, the library needs someone to write and approve answers, and nobody writes answers without first seeing the auto-fill value.
Hidden objection: 'AI-generated answers will be generic and need so much editing they're not worth it.' The unlock: SME approval gate — every auto-filled answer was approved by a human first.
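The SME approval gate described above can be modeled as a small state machine in which only human-approved answers are eligible for auto-fill. The states and transitions here are illustrative assumptions, not the product's actual workflow.

```python
# Hypothetical sketch of the SME approval gate: an answer is only served
# by auto-fill after a human reviewer approves it.
from enum import Enum

class AnswerState(Enum):
    DRAFT = "draft"          # AI-generated, never shown in auto-fill
    IN_REVIEW = "in_review"  # queued for an SME
    APPROVED = "approved"    # eligible for auto-fill
    REJECTED = "rejected"    # sent back for rewrite

# Allowed transitions; approved answers are immutable (edits create a new draft).
ALLOWED = {
    AnswerState.DRAFT: {AnswerState.IN_REVIEW},
    AnswerState.IN_REVIEW: {AnswerState.APPROVED, AnswerState.REJECTED},
    AnswerState.REJECTED: {AnswerState.IN_REVIEW},
    AnswerState.APPROVED: set(),
}

def transition(current: AnswerState, target: AnswerState) -> AnswerState:
    """Move an answer to a new state, rejecting illegal jumps (e.g. draft -> approved)."""
    if target not in ALLOWED[current]:
        raise ValueError(f"cannot move {current.value} -> {target.value}")
    return target

def eligible_for_autofill(state: AnswerState) -> bool:
    """Only human-approved answers ever reach auto-fill."""
    return state is AnswerState.APPROVED
```

The design choice this encodes: there is no path from DRAFT straight to APPROVED, so a generic AI draft can never auto-fill a bid without an SME signing off first.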
Scorecard
| Metric | Score |
|---|---|
| Priority | 5P |
| Readiness | 5R |
What Exists
| Component | State |
|---|---|
| Venture kanban | Working |
| Venture detail + Q&A | Working |
| AI answer generation | Partial |
| Auto-fill from library | Partial |
| Answer library | Broken |
| SME review queue | Partial |
| Deck generation | Partial |
| Document upload | Working |
| Go/No-Go scoring | Missing |
| RFP analytics | Broken |
Relationships
| PRD | Relationship |
|---|---|
| Sales CRM | Peer PRD. RFP depends on CRM contacts and deals. |
| Identity & Access | Auth dependency |
| Construction Industry | Domain context |
Ventures with no answers = just a project board. A library with 0 queries after 14 days = adoption failed.
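The two failure signals above could be checked by a simple health probe over venture and library counts. The function and field names (`health`, `ventures`, `answers`, `library_queries_14d`) are hypothetical, chosen only to make the kill criteria concrete.

```python
# Hypothetical health probe encoding the two kill criteria:
# ventures without answers, and a library nobody queries.
def health(ventures: int, answers: int, library_queries_14d: int) -> list[str]:
    warnings = []
    if ventures > 0 and answers == 0:
        warnings.append("project board: ventures exist but no answers written")
    if library_queries_14d == 0:
        warnings.append("adoption failed: 0 library queries in 14 days")
    return warnings
```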
Questions
- What seeds the auto-fill flywheel — and what happens if the library stays empty?
- If the hidden objection is 'AI answers are generic,' does answer-once-auto-fill-forever actually overcome it?
- At what library size does the compounding become visible enough to change behavior?
- Should Go/No-Go scoring live here or in the CRM — where does the decision actually happen?