How do agents orchestrate — and what instruments verify the work?
AGENT & INSTRUMENT DIAGRAM: SALES CRM & RFP
════════════════════════════════════════════════════════════
AGENTS (Yang — who applies force)
─────────────────────────────────────────────────────────
Sales Team (Human) ······ CRM usage, deal progression, RFP decisions
Bid Manager (Human) ····· Go/No-Go, answer review, submission
RFP Agent (AI) ·········· Auto-fill, answer generation, type detection
Sales Dev Agent (AI) ···· Upstream demand gen → deals enter pipeline
INSTRUMENTS (Yin — what verifies and rewards)
─────────────────────────────────────────────────────────
Auto-Fill Rate ·········· Answer library health (target: 70%+)
Confidence Scorer ······· Answer quality (tiers: <40% / 40-80% / 80%+)
Compound Rate ··········· Library growth velocity ("+X%/week")
Go/No-Go Checklist ······ Bid qualification gate (70%+ = Go)
Win Rate Tracker ········ The north star (target: >30%)
Pipeline Dashboard ······ Deal progression health
PROTOCOLS (data + value + decision flows)
─────────────────────────────────────────────────────────
┌──────────────────────────────────────────────┐
│                 RFP WORKFLOW                 │
│                                              │
│  RECEIVE ──→ GO/NO-GO ──→ BID PREP ──→       │
│     │           │            │               │
│  RFP Type    Go/No-Go    Auto-Fill           │
│  Detection   Checklist   Rate                │
│                                              │
│  ──→ SME REVIEW ──→ SUBMIT                   │
│          │            │                      │
│      Confidence    Win Rate                  │
│      Scorer        Tracker                   │
│          │            │                      │
│          └── FEEDBACK ┘                      │
│                │                             │
│   Approved answers → Library                 │
│   Library → next Auto-Fill                   │
└──────────────────────────────────────────────┘
FEEDBACK LOOPS
─────────────────────────────────────────────────────────
Loop 1: ANSWER COMPOUND (per RFP)
┌─ Answer question → SME approves → Library grows → next RFP auto-fills ─┐
└────────────────────────────────────────────────────────────────────────┘
Loop 2: QUALITY (per answer)
┌─ AI generates → Confidence < 40%? → Flag for SME → Review → Approve ──┐
└────────────────────────────────────────────────────────────────────────┘
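Loop 2's routing is a pure threshold check. A minimal sketch using the confidence tiers from the Instrument Registry (80%+ auto-approve, 40-80% SME review, <40% flag); the function and return labels are illustrative names, not part of the spec:

```python
def route_answer(confidence: float) -> str:
    """Route a generated answer by confidence tier (Loop 2).

    Tiers per the Instrument Registry:
      >= 0.80    auto-approve
      0.40-0.80  SME review
      <  0.40    flag for the Review Queue
    """
    if confidence >= 0.80:
        return "auto-approve"
    if confidence >= 0.40:
        return "sme-review"
    return "flag"
```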
Loop 3: BID QUALIFICATION (per RFP)
┌─ RFP arrives → Go/No-Go score → <50%? → No-Go (save 20-40h) ─────────┐
└────────────────────────────────────────────────────────────────────────┘
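The Loop 3 gate has the same shape, with the 70/50 cut points from the registry. A sketch only: the checklist items and their weights are not specified here, so the aggregate score is taken as given:

```python
def go_no_go(score: float) -> str:
    """Loop 3 gate: 70%+ = Go, 50-70% = Review, <50% = No-Go.

    A No-Go here is what saves the 20-40h of bid prep.
    """
    if score >= 0.70:
        return "Go"
    if score >= 0.50:
        return "Review"
    return "No-Go"
```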
Loop 4: WIN RATE (monthly)
┌─ Win/loss analysis → Why lost? → Update Go/No-Go weights + library ───┐
└────────────────────────────────────────────────────────────────────────┘
HANDOFF: SALES DEV → CRM → RFP
─────────────────────────────────────────────────────────
Sales Dev Agent          CRM Pipeline                RFP Workflow
───────────────          ────────────                ────────────
Research → Score →       Lead → Qualified →          Discovery →
Compose → Sequence →     [deal created]              Qualification →
Discovery call →         Proposal → Negotiation →    Preparation →
                         Won / Lost                  Review → Submit
UPSTREAM: Sales Dev Agent generates demand → deals enter CRM pipeline
MIDSTREAM: CRM tracks deals → RFP triggered at Proposal stage
DOWNSTREAM: RFP answers compound → improves future bid quality → win rate
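The midstream trigger ("RFP triggered at Proposal stage") can be sketched as a stage-change hook; `Deal`, `advance`, and the callback wiring are hypothetical names for illustration:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Deal:
    name: str
    stage: str = "Lead"

def advance(deal: Deal, new_stage: str,
            on_proposal: Callable[[Deal], None]) -> None:
    """Move a deal to its next CRM stage; entering Proposal
    fires the RFP workflow (Receive → Go/No-Go → ...)."""
    deal.stage = new_stage
    if new_stage == "Proposal":
        on_proposal(deal)

# Usage: collect deals that should kick off an RFP workflow
triggered: list[Deal] = []
d = Deal("Acme renewal")
advance(d, "Qualified", triggered.append)
advance(d, "Proposal", triggered.append)  # trigger fires here
```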
════════════════════════════════════════════════════════════
Agent Roster
| Agent | Type | Force Applied | Verification |
|---|---|---|---|
| Sales Team | Human | CRM data entry, deal progression, relationship building, follow-ups | Pipeline dashboard, activity timeline |
| Bid Manager | Human | Go/No-Go decisions, answer quality review, submission approval | Go/No-Go checklist, confidence scoring |
| RFP Agent | AI | Auto-fill from library, answer generation, document classification | Auto-fill rate, confidence scorer, compound rate |
| Sales Dev Agent | AI | Upstream demand gen (separate PRD) — feeds discovery calls into CRM | Reply rate, scored pipeline (tracked in its own PRD) |
Instrument Registry
| Instrument | Measures | Threshold | Action on Fail | Outcome Map Link |
|---|---|---|---|---|
| Auto-Fill Rate | % of answers auto-populated from library | Target: 70%+ (improves with library size) | Process more RFPs to grow library | Enables O1 (win rate) via speed |
| Confidence Scorer | Answer quality tier | 80%+ = auto-approve, 40-80% = SME review, <40% = flag | Route to Review Queue for human judgment | Enables O4 (10+ approved answers) |
| Compound Rate | Library growth velocity | Positive week-over-week growth | If flat: not enough RFPs being processed, or answers not being approved | Enables O1 (win rate) via compounding |
| Go/No-Go Checklist | Bid qualification score | 70%+ = Go, 50-70% = Review, <50% = No-Go | Save 20-40h of bid prep on unwinnable RFPs | Enables O1 (win rate) by filtering |
| Win Rate Tracker | Bids won / bids submitted | Target: >30% (current: 22%) | If declining: review Go/No-Go weights, answer quality, bid strategy | North star — O1 directly |
| Pipeline Dashboard | Deal count, value, stage distribution | Active deals >0, pipeline value >$100K | If empty: CRM is a rolodex, not a sales tool | Measures deal creation activation |
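The registry's pass/fail floors can be read as a single config. A sketch over an illustrative subset of the instruments (the key names and dict shape are assumptions, not the source's schema):

```python
# Minimum thresholds from the Instrument Registry (illustrative subset)
THRESHOLDS = {
    "auto_fill_rate": 0.70,     # target: 70%+ of answers from library
    "win_rate": 0.30,           # north star: >30% of submitted bids won
    "pipeline_value": 100_000,  # pipeline value > $100K
}

def failing(readings: dict[str, float]) -> list[str]:
    """Return the instruments whose current reading misses its floor."""
    return [name for name, floor in THRESHOLDS.items()
            if readings.get(name, 0.0) < floor]
```

For example, the current 22% win rate baseline fails this check while a healthy auto-fill rate and pipeline value pass.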
Feedback Loop Quality
| Loop | Frequency | Sensor | Actuator | Latency |
|---|---|---|---|---|
| Answer Compound | Per RFP | Auto-fill rate change | Library growth, next RFP faster | Days |
| Quality | Per answer | Confidence score | Route to SME review or auto-approve | Minutes |
| Bid Qualification | Per RFP | Go/No-Go score | Commit resources or decline | 30 minutes |
| Win Rate | Monthly | Win/loss ratio | Update Go/No-Go weights, library strategy | 30 days |
The Compound Flywheel
From the Value Stream Map — the core insight is that each RFP answered makes the next one faster:
RFP 1: 40h prep (no library) → 10 answers approved
RFP 5: 20h prep (50% auto-fill) → 50 answers in library
RFP 10: 12h prep (70% auto-fill) → 100+ answers
RFP 50: 4h prep (90% auto-fill) → library IS the moat
Competitor catch-up cost: process 50 RFPs. Time: 12+ months.
This is the compound rate applied to knowledge. The answer library IS the defensible asset. Generic CRMs can't replicate it because it requires domain-specific RFP data that only accumulates through use.
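The prep-hour figures above are consistent with a simple linear model, hours = base × (1 − auto-fill rate); the linear form is inferred from the four data points, not stated in the source:

```python
def prep_hours(auto_fill_rate: float, base_hours: float = 40.0) -> float:
    """Prep time left after auto-fill: base * (1 - auto_fill_rate).

    Reproduces the flywheel table: 0% -> 40h, 50% -> 20h,
    70% -> 12h, 90% -> 4h.
    """
    return base_hours * (1.0 - auto_fill_rate)
```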
Gate
Before executing:
- Every agent named — YES (4 agents: Sales Team, Bid Manager, RFP Agent, Sales Dev Agent)
- Every instrument named — YES (6 instruments with thresholds and outcome map links)
- Feedback loops explicit — YES (4 loops at different frequencies)
- Agent-to-agent handoffs documented — YES (Sales Dev → CRM → RFP)
- Compound flywheel articulated — YES (the library IS the moat)
- Key insight: the answer library is the defensible asset that compounds with usage
Context
- Capability Map — Previous: what can we do
- A&ID Template — The empty pattern
- Spec: Protocols — RFP workflow detail, build sequence