How do agents orchestrate — and what instruments verify the work?

AGENT & INSTRUMENT DIAGRAM: SALES CRM & RFP
════════════════════════════════════════════════════════════

AGENTS (Yang — who applies force)
─────────────────────────────────────────────────────────
Sales Team (Human) ····· CRM usage, deal progression, RFP decisions
Bid Manager (Human) ···· Go/No-Go, answer review, submission
RFP Agent (AI) ·········· Auto-fill, answer generation, type detection
Sales Dev Agent (AI) ···· Upstream demand gen → deals enter pipeline

INSTRUMENTS (Yin — what verifies and rewards)
─────────────────────────────────────────────────────────
Auto-Fill Rate ·········· Answer library health (target: 70%+)
Confidence Scorer ······· Answer quality (tiers: <40% / 40-80% / 80%+)
Compound Rate ··········· Library growth velocity ("+X%/week")
Go/No-Go Checklist ····· Bid qualification gate (70%+ = Go)
Win Rate Tracker ······· The north star (target: >30%)
Pipeline Dashboard ····· Deal progression health

PROTOCOLS (data + value + decision flows)
─────────────────────────────────────────────────────────

┌────────────────────────────────────────────────┐
│                  RFP WORKFLOW                  │
│                                                │
│  RECEIVE ──→ GO/NO-GO ──→ BID PREP ──→         │
│     │           │            │                 │
│  RFP Type    Go/No-Go    Auto-Fill             │
│  Detection  Checklist      Rate                │
│                                                │
│  ──→ SME REVIEW ──→ SUBMIT                     │
│           │            │                       │
│      Confidence    Win Rate                    │
│        Scorer       Tracker                    │
│           │            │                       │
│           └── FEEDBACK ┘                       │
│                  │                             │
│   Approved answers → Library                   │
│   Library → next Auto-Fill                     │
└────────────────────────────────────────────────┘

FEEDBACK LOOPS
─────────────────────────────────────────────────────────

Loop 1: ANSWER COMPOUND (per RFP)
┌─ Answer question → SME approves → Library grows → next RFP auto-fills ─┐
└────────────────────────────────────────────────────────────────────────┘
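
A minimal sketch of what this loop implies in code, assuming a simple normalized-exact-match answer library; the class and method names are illustrative, not the product's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class AnswerLibrary:
    """Approved question -> answer pairs; grows only through SME approval."""
    answers: dict[str, str] = field(default_factory=dict)

    def approve(self, question: str, answer: str) -> None:
        # SME approval is the only write path, which keeps library quality high.
        self.answers[self._normalize(question)] = answer

    def auto_fill(self, questions: list[str]) -> tuple[dict[str, str], float]:
        # Returns the filled answers plus this RFP's auto-fill rate.
        filled = {q: self.answers[self._normalize(q)]
                  for q in questions if self._normalize(q) in self.answers}
        rate = len(filled) / len(questions) if questions else 0.0
        return filled, rate

    @staticmethod
    def _normalize(question: str) -> str:
        return " ".join(question.lower().split())
```

Every approved answer raises the auto-fill rate on the next RFP, which is the compounding this loop describes; a production system would match questions semantically rather than by string normalization.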

Loop 2: QUALITY (per answer)
┌─ AI generates → Confidence < 40%? → Flag for SME → Review → Approve ───┐
└────────────────────────────────────────────────────────────────────────┘
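
The tier cutoffs come straight from the Confidence Scorer row in the instrument registry; the routing function itself is an illustrative sketch:

```python
from enum import Enum

class Route(Enum):
    AUTO_APPROVE = "auto-approve"  # 80%+  : ship without human review
    SME_REVIEW = "sme-review"      # 40-80%: human judgment required
    FLAG = "flag"                  # <40%  : do not use; needs a fresh answer

def route_answer(confidence: float) -> Route:
    """Map a confidence score in [0.0, 1.0] to its review queue."""
    if confidence >= 0.80:
        return Route.AUTO_APPROVE
    if confidence >= 0.40:
        return Route.SME_REVIEW
    return Route.FLAG
```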

Loop 3: BID QUALIFICATION (per RFP)
┌─ RFP arrives → Go/No-Go score → <50%? → No-Go (save 20-40h) ───────────┐
└────────────────────────────────────────────────────────────────────────┘
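
A sketch of the qualification gate; only the 50%/70% cutoffs come from the document, while the criteria and weights are invented for illustration:

```python
# Hypothetical criteria and weights; a real checklist would be tuned per business.
GO_NO_GO_WEIGHTS = {
    "incumbent_relationship": 0.25,
    "solution_fit": 0.30,
    "budget_confirmed": 0.20,
    "timeline_feasible": 0.15,
    "reference_customers": 0.10,
}

def go_no_go(scores: dict[str, float]) -> str:
    """scores maps criterion -> 0.0-1.0. Returns Go / Review / No-Go per the gate."""
    total = sum(w * scores.get(k, 0.0) for k, w in GO_NO_GO_WEIGHTS.items())
    if total >= 0.70:
        return "Go"
    if total >= 0.50:
        return "Review"
    return "No-Go"  # declining here is what saves the 20-40h of bid prep
```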

Loop 4: WIN RATE (monthly)
┌─ Win/loss analysis → Why lost? → Update Go/No-Go weights + library ────┐
└────────────────────────────────────────────────────────────────────────┘
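
One way the monthly loop could close is to upweight checklist criteria that keep showing up in loss reasons; the update rule below is an assumption for illustration, not the documented method:

```python
def update_weights(weights: dict[str, float],
                   loss_reasons: list[str],
                   step: float = 0.05) -> dict[str, float]:
    """Shift Go/No-Go weight toward criteria cited in lost bids, then renormalize."""
    updated = dict(weights)
    for reason in loss_reasons:  # e.g. "budget_confirmed" cited in a loss review
        if reason in updated:
            updated[reason] += step
    total = sum(updated.values())
    return {k: v / total for k, v in updated.items()}
```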

HANDOFF: SALES DEV → CRM → RFP
─────────────────────────────────────────────────────────

Sales Dev Agent          CRM Pipeline          RFP Workflow
───────────────          ────────────          ────────────
Research → Score →       Lead → Qualified →    Discovery →
Compose → Sequence →     Proposal →            Qualification →
Discovery call →         Negotiation →         Preparation →
  [deal created]         Won / Lost            Review → Submit

UPSTREAM: Sales Dev Agent generates demand → deals enter CRM pipeline
MIDSTREAM: CRM tracks deals → RFP triggered at Proposal stage
DOWNSTREAM: RFP answers compound → improves future bid quality → win rate
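
The midstream trigger ("RFP triggered at Proposal stage") amounts to a stage-change hook on the CRM pipeline; the event shape and function names here are hypothetical:

```python
def on_stage_change(deal_id: str, old_stage: str, new_stage: str) -> None:
    """CRM hook: a deal entering Proposal kicks off the RFP workflow."""
    if new_stage == "Proposal" and old_stage != "Proposal":
        start_rfp_workflow(deal_id)  # RECEIVE -> GO/NO-GO -> BID PREP -> ...

def start_rfp_workflow(deal_id: str) -> None:
    # Placeholder: in practice this would enqueue the RFP Agent's type-detection
    # step and open a Go/No-Go checklist for the Bid Manager.
    print(f"RFP workflow started for deal {deal_id}")
```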

════════════════════════════════════════════════════════════

Agent Roster

| Agent | Type | Force Applied | Verification |
| --- | --- | --- | --- |
| Sales Team | Human | CRM data entry, deal progression, relationship building, follow-ups | Pipeline dashboard, activity timeline |
| Bid Manager | Human | Go/No-Go decisions, answer quality review, submission approval | Go/No-Go checklist, confidence scoring |
| RFP Agent | AI | Auto-fill from library, answer generation, document classification | Auto-fill rate, confidence scorer, compound rate |
| Sales Dev Agent | AI | Upstream demand gen (separate PRD); feeds discovery calls into CRM | Reply rate, scored pipeline (tracked in its own PRD) |

Instrument Registry

| Instrument | Measures | Threshold | Action on Fail | Outcome Map Link |
| --- | --- | --- | --- | --- |
| Auto-Fill Rate | % of answers auto-populated from library | Target: 70%+ (improves with library size) | Process more RFPs to grow library | Enables O1 (win rate) via speed |
| Confidence Scorer | Answer quality tier | 80%+ = auto-approve, 40-80% = SME review, <40% = flag | Route to Review Queue for human judgment | Enables O4 (10+ approved answers) |
| Compound Rate | Library growth velocity | Positive week-over-week growth | If flat: not enough RFPs being processed, or answers not being approved | Enables O1 (win rate) via compounding |
| Go/No-Go Checklist | Bid qualification score | 70%+ = Go, 50-70% = Review, <50% = No-Go | Save 20-40h of bid prep on unwinnable RFPs | Enables O1 (win rate) by filtering |
| Win Rate Tracker | Bids won / bids submitted | Target: >30% (current: 22%) | If declining: review Go/No-Go weights, answer quality, bid strategy | North star (O1 directly) |
| Pipeline Dashboard | Deal count, value, stage distribution | Active deals >0, pipeline value >$100K | If empty: CRM is a rolodex, not a sales tool | Measures deal creation activation |
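
The registry reads naturally as configuration. Below is a sketch of instruments as sensor-plus-threshold checks; the structure is invented, the 70%/30% thresholds and 22% win rate come from the registry, and the 55% auto-fill reading is a sample value:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Instrument:
    name: str
    threshold: float               # minimum acceptable reading
    measure: Callable[[], float]   # sensor: fetches the current value
    on_fail: str                   # actuator: the documented corrective action

def run_checks(instruments: list[Instrument]) -> list[str]:
    """Return the corrective action for every instrument below its threshold."""
    return [f"{i.name}: {i.on_fail}" for i in instruments if i.measure() < i.threshold]

# Wiring two documented targets; the lambdas stand in for live metric reads.
checks = [
    Instrument("Auto-Fill Rate", 0.70, lambda: 0.55,
               "Process more RFPs to grow library"),
    Instrument("Win Rate", 0.30, lambda: 0.22,
               "Review Go/No-Go weights, answer quality, bid strategy"),
]
for action in run_checks(checks):
    print(action)  # both fire at these sample readings
```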

Feedback Loop Quality

| Loop | Frequency | Sensor | Actuator | Latency |
| --- | --- | --- | --- | --- |
| Answer Compound | Per RFP | Auto-fill rate change | Library growth, next RFP faster | Days |
| Quality | Per answer | Confidence score | Route to SME review or auto-approve | Minutes |
| Bid Qualification | Per RFP | Go/No-Go score | Commit resources or decline | 30 minutes |
| Win Rate | Monthly | Win/loss ratio | Update Go/No-Go weights, library strategy | 30 days |

The Compound Flywheel

From the Value Stream Map, the core insight is that each RFP answered makes the next one faster:

  RFP 1:  40h prep (no library)      →  10 answers approved
  RFP 5:  20h prep (50% auto-fill)   →  50 answers in library
  RFP 10: 12h prep (70% auto-fill)   →  100+ answers
  RFP 50:  4h prep (90% auto-fill)   →  library IS the moat
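
The progression implies a simple linear model: prep time falls with the share of questions still answered by hand, roughly prep_hours = 40 x (1 - auto_fill_rate). The formula is inferred from the table, not stated in the source; checking it against the numbers above:

```python
def prep_hours(auto_fill_rate: float, base_hours: float = 40.0) -> float:
    """Prep effort scales with the fraction of answers still written by hand."""
    return base_hours * (1.0 - auto_fill_rate)

for rfp, rate in [(1, 0.0), (5, 0.5), (10, 0.7), (50, 0.9)]:
    print(f"RFP {rfp}: {prep_hours(rate):.0f}h prep at {rate:.0%} auto-fill")
# RFP 1: 40h, RFP 5: 20h, RFP 10: 12h, RFP 50: 4h -> matches the table
```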

Competitor catch-up cost: process 50 RFPs. Time: 12+ months.

This is the compound rate applied to knowledge. The answer library IS the defensible asset. Generic CRMs can't replicate it because it requires domain-specific RFP data that only accumulates through use.

Gate

Before executing:

  • Every agent named — YES (4 agents: Sales Team, Bid Manager, RFP Agent, Sales Dev Agent)
  • Every instrument named — YES (6 instruments with thresholds and outcome map links)
  • Feedback loops explicit — YES (4 loops at different frequencies)
  • Agent-to-agent handoffs documented — YES (Sales Dev → CRM → RFP)
  • Compound flywheel articulated — YES (the library IS the moat)
  • Key insight: the answer library is the defensible asset that compounds with usage

Context