Sales Dev Agent Spec

How do we think slowly so the agent gets smarter and the outreach gets cleaner each cycle?

Build Contract

| # | Feature | Function | Outcome | Job | State |
|---|---------|----------|---------|-----|-------|
| 1 | Prospect Research | Document Intelligence + industry data → prospect profile | 5 prospects in 60s, not 3 hours | Research | Gap |
| 2 | Lead Scoring | ICP fit via sales-forecasting + explore-exploit | Know which 5 of 20 deserve a call | Research | Dormant |
| 3 | Outreach Composer | Personalized draft from research + ICP pain mapping | Messages worth reading, not templates | Compose | Gap |
| 4 | SPCL Quality Gate | Score draft before human review | Never send robot-speak | Compose | Dormant |
| 5 | Follow-up Sequencer | Multi-touch cadence via CRM tasks + adapters | Follow-ups on schedule, not memory | Deliver | Gap |
| 6 | Email Delivery | Send via Resend with tracking (config only) | Know: delivered, opened, clicked | Deliver | Built |
| 7 | LinkedIn Actions | Real API + OAuth replacing mock client | Multi-channel, one system | Deliver | Partial |
| 8 | CRM Activity Logging | Auto-log every touchpoint | Full history, no data entry | Deliver | Dormant |
| 9 | Channel Performance | Send/open/reply by channel, segment, type | Learn what converts | Learn | Gap |
| 10 | Explore-Exploit | Test variants, shift to what converts | Outreach improves itself | Learn | Dormant |
| 11 | Pipeline Acceleration | Agent-sourced deals, touch-to-meeting time | Prove revenue impact | Learn | Dormant |
| 12 | Agent Profile | SOUL memory with sales playbook | Context, personality, learning | Core | Gap |
| 13 | Outreach WorkChart | Research → Score → Compose → Sequence → Measure | End-to-end orchestration | Core | Gap |

Principles

What truths constrain how this agent operates?

The Job: When the kill date is 28 days away and there are zero paying customers, help the founder find qualified prospects, score them, compose personalized outreach, and sequence follow-ups — so human time goes to discovery calls, not research and admin.

| Trigger | Current Failure | Desired Progress |
|---------|-----------------|------------------|
| Need 5 discovery calls this month | LinkedIn browsing, scattered email, no system | Agent surfaces, scores, drafts, sequences |
| RFP opportunity identified | Cold email from scratch each time | Personalized message from ICP data |
| Discovery call completed | Notes in notebook, follow-up forgotten | Auto-log, create tasks, update probability |
| No pipeline visibility | Gut feeling about who's warm | Scored leads with recommendations |
| Outreach unanswered | Same message to everyone | Explore-exploit adapts channels and copy |
| Agency needs qualified leads | Volume providers, unqualified contacts | Pre-qualify through conversation before handover |

Hidden objection: "AI outreach feels spammy." Agent composes — human reviews every message before send. Research is automated, relationships are not.

Why now: The product is ready. The demand generation isn't. Nine algorithms, four adapters, a CRM, an RFP tool — all built, none generating demand. The outreach IS the commissioning test.

The convergence: Eddy Whatt at Sneakers Media identified the same pattern from the buy side. Every lead gen provider delivers volume, not quality. Lead gen is a data flow problem dressed as a marketing problem:

OLD: Advertiser → Agency → Lead Gen Provider → Database → Leads (unqualified)
NEW: Advertiser → Agent → Pre-Qualified Leads (with conversation context)
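
One way to read the contrast is as a difference in hand-over schema; a minimal sketch of the two shapes, with all names illustrative rather than the platform's actual CRM schema:

```typescript
// OLD: what a volume provider hands over, a bare contact.
interface RawLead {
  name: string;
  email: string;
  source: string; // e.g. "purchased-database"
}

// NEW: the same contact plus the qualifying conversation and
// scoring context that make it worth a sales call.
interface PreQualifiedLead extends RawLead {
  icpFitScore: number;              // 0-1, from lead scoring
  painPoints: string[];             // surfaced during research
  conversation: ConversationTurn[]; // full qualifying exchange
  recommendedAction: "direct" | "nurture" | "content";
}

interface ConversationTurn {
  role: "agent" | "prospect";
  text: string;
  timestamp: string; // ISO 8601
}
```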

The NZ angle: Small market (5M) makes lead gen quality critical. Relationship-based culture means bad leads damage agency reputation. Proving ground.


Performance

How do we know it's working?

Priority Score

| Dimension | Score | Evidence |
|-----------|-------|----------|
| Pain | 4 | 9 algorithms × 0 UI = 0 value. 28 days to kill date. 3 hrs/week manual research. Sneakers confirms volume-not-quality. |
| Demand | 4 | Sneakers independently identified the same problem. 5 test clients (HRV, Kitchen Studio, Asset Factory, BizCover, MAS). Internal dogfood demand. |
| Edge | 3 | Three Flows convergence. Existing platform (9 algos, CRM, adapters) = composition advantage. No proprietary data yet — edge grows with usage. |
| Trend | 5 | AI SDR is the hottest B2B SaaS category. Every lead gen provider will be AI-native within 2 years. |
| Conversion | 2 | CPL model clear. No pricing tested, no pilot signed, no invoice sent. Path plausible, unproven. |
| Composite | 480 | Strong candidate. Demand and trend validated, conversion needs proof. |
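
The composite reads as the product of the five dimension scores (4 × 4 × 3 × 5 × 2 = 480), which is why one weak dimension drags the whole score. A one-line check, assuming that multiplicative model:

```typescript
// Assumes the composite is multiplicative: a weak dimension
// (e.g. Conversion = 2) halves the whole score.
const scores = { pain: 4, demand: 4, edge: 3, trend: 5, conversion: 2 };

const composite = Object.values(scores).reduce((p, s) => p * s, 1);
console.log(composite); // 480
```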

Success Criteria

Functional:

| # | Criterion | Verification | Job |
|---|-----------|--------------|-----|
| F1 | Prospect profile from company name in <60s | Timed test | Research |
| F2 | 10 prospects ranked by ICP fit with reasoning | Output check | Research |
| F3 | Draft scores >3.5 SPCL before human review | Automated gate | Compose |
| F4 | Follow-up creates CRM tasks with correct timing | CRM check | Deliver |
| F5 | Email delivery + open tracking end-to-end | Resend webhook | Deliver |
| F6 | Channel dashboard shows real data within 24h | Data check | Learn |
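
F3's automated gate reduces to a threshold check on the SPCL score. A hedged sketch: `scoreSpcl` stands in for the existing SPCL Scoring Algo, whose real interface and scale may differ:

```typescript
// Sketch of the F3 gate. `scoreSpcl` is a stand-in for the
// platform's SPCL Scoring Algo; a 0-5 scale is assumed.
declare function scoreSpcl(draft: string): Promise<number>;

const SPCL_FLOOR = 3.5; // from F3: draft must score >3.5 before human review

async function passesQualityGate(draft: string): Promise<boolean> {
  const score = await scoreSpcl(draft);
  return score > SPCL_FLOOR; // below the floor: regenerate, never queue for review
}
```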

Outcome (Kill: 2026-03-24):

| # | Criterion | Target | Now |
|---|-----------|--------|-----|
| O1 | Prospects researched and scored/week | >= 10 | 0 |
| O2 | Reply rate on agent-drafted outreach | >= 15% | N/A |
| O3 | Scored contacts in CRM pipeline | >= 20 | 0 |
| O4 | Follow-up completion rate | >= 90% | N/A |

Team-level outcomes (calls booked, pipeline value, first customer) tracked in SPO.

Business Dev

| Layer | Decision | Assumption | Evidence Needed |
|-------|----------|------------|-----------------|
| ICP | Who first? | Construction/solar EPC, 10+ staff, active RFPs, spreadsheet pain | 5 calls where they name the pain unprompted |
| Offer | Opening message? | "Win more bids. Your answer library compounds." | Reply rate >15% |
| Channel | How to reach? | LinkedIn + personalized email | Compare reply rates by channel |
| Proof | What earns a meeting? | Industry RFP pain + live auto-fill demo | Meeting-to-pilot >30% |
| Conversion | What closes? | Free 30-day pilot on one real RFP | Pilot-to-paid >20% |

Platform

What do we control directly?

Components

This agent is one instrument in the SPO orchestra.

| Component | Role | State |
|-----------|------|-------|
| CRM Contacts + Deals | Prospect database, deal pipeline | Live |
| CRM Activities + Tasks | Activity logging, follow-up tracking | Live |
| Sales Forecasting Algo | Lead scoring, pipeline prediction | Wire |
| Explore-Exploit Algo | Channel/message optimization | Wire |
| SPCL Scoring Algo | Outreach quality gate | Wire |
| Compound Rate Tracking | Engagement velocity | Wire |
| Email Adapter (Resend) | Delivery + tracking | Live |
| LinkedIn Adapter | Post/connect/message + analytics | Partial |
| Document Intelligence | Prospect research, doc analysis | Live |

Build ratio: ~80% composition, ~20% new code.

Commissioning

| Component | Schema | API | UI | Tests | % |
|-----------|--------|-----|----|-------|---|
| Prospect Research | Pending | Pending | Pending | Pending | 0% |
| Lead Scoring | Pending | Pending | Pending | Pending | 0% |
| Outreach Composer | Pending | Pending | Pending | Pending | 0% |
| SPCL Quality Gate | Done | Pending | Pending | Pending | 15% |
| Follow-up Sequencer | Pending | Pending | Pending | Pending | 0% |
| Email Delivery | Done | Done | Done | Done | 95% |
| LinkedIn Actions | Done | Done | Done | Pending | 85% |
| CRM Activity Logging | Done | Done | Partial | Pending | 50% |
| Channel Performance | Pending | Pending | Pending | Pending | 0% |
| Explore-Exploit | Done | Pending | Pending | Pending | 15% |
| Pipeline Acceleration | Partial | Partial | Pending | Pending | 10% |
| Agent Profile | Pending | Pending | N/A | Pending | 0% |
| Outreach WorkChart | Pending | Pending | N/A | Pending | 0% |

Protocols

How does the system coordinate?

Agent/Human Split

AGENT (AI-Led + AI-Only)                HUMAN (Human-Led + Human-Only)
────────────────────────                ──────────────────────────────
Prospect research                       Discovery calls
Lead scoring + ranking                  Relationship building
Outreach drafting                       Message review + send
Follow-up sequencing                    Deal negotiation
Activity logging                        Go/No-Go decisions
Channel optimization                    Trust building

WorkChart

RESEARCH → SCORE → COMPOSE → SEQUENCE → MEASURE
    ↑                                      │
    └───── Feedback improves next cycle ───┘

| Score | Route | Agent | Human |
|-------|-------|-------|-------|
| Hot (>80%) | Direct | Personalized email + LinkedIn connect | Review, personalize, send |
| Warm (50-80%) | Nurture | 3-touch sequence | Review sequence, approve |
| Cold (<50%) | Content | Distribution list, no direct outreach | Monthly review for warming |
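
A minimal routing sketch of the table above, with thresholds taken from the Score column (function and type names are illustrative):

```typescript
type Route = "direct" | "nurture" | "content";

// Thresholds come straight from the routing table above.
// Every route retains a human step per the Agent/Human split.
function routeByScore(fitScore: number): Route {
  if (fitScore > 0.8) return "direct";   // Hot: personalized email + LinkedIn connect
  if (fitScore >= 0.5) return "nurture"; // Warm: 3-touch sequence
  return "content";                      // Cold: distribution list only
}
```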

Build Sequence

| Sprint | What | Effort | Depends On | Acceptance |
|--------|------|--------|------------|------------|
| -1 | Prove channels: Resend config, LinkedIn OAuth, CRM auto-log | 3d | | Real email delivered + opened. Real LinkedIn post. Both logged to CRM. |
| S0 | Agent profile + prospect research | 3.5d | Agent Platform ETL | "Acme Construction" → profile with pain points in <60s |
| S1 | Score + compose + quality gate | 5d | S0 | 10 ranked, draft >3.5 SPCL, references prospect pain |
| S2 | Sequencer + WorkChart orchestration | 5d | Sprint -1, S1 | Company → end-to-end with human review step |
| S3 | Channel performance dashboard | 1d | Webhooks live | Real send/open/reply within 24h |
| Park | Explore-exploit + pipeline acceleration | | 50+ messages | Volume data needed |
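
Sprint -1's email leg could look like the following sketch using Resend's Node SDK; addresses are placeholders, and the webhook event shape is an assumption to verify against Resend's docs:

```typescript
import { Resend } from "resend";

const resend = new Resend(process.env.RESEND_API_KEY);

// Sprint -1 acceptance: one real email, delivered and opened,
// logged to the CRM. Addresses here are placeholders.
async function sendProofEmail() {
  const { data, error } = await resend.emails.send({
    from: "founder@example.com",
    to: "prospect@example.com",
    subject: "Quick question about your RFP process",
    html: "<p>…</p>",
  });
  if (error) throw error;
  return data?.id; // store the id so webhook events can be matched to a CRM activity
}

// Webhook side (shape assumed): Resend posts events such as
// email.delivered / email.opened / email.clicked.
type ResendEvent = { type: string; data: { email_id: string } };

function handleResendWebhook(event: ResendEvent) {
  // Map delivery events back onto the CRM activity log (F5).
  console.log(`${event.type} for ${event.data.email_id}`);
}
```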

Horizon Model

HORIZON 1: PROVE (now → 90 days)          ← THIS PRD
  Sales Dev Agent — one instrument
        │ feeds
        ▼
HORIZON 2: ORCHESTRATE (90d → 6 months)
  Sales Process Optimisation — the orchestra
        │ generates data for
        ▼
HORIZON 3: PROTOCOL (6m → 18 months)
  Trust Commerce — the protocol

Risks

| Risk | Mitigation |
|------|------------|
| LinkedIn app approval delayed | Register day 1 of Sprint -1. Manual-publish fallback. |
| AI outreach feels spammy | Human reviews every message. SPCL gate enforces floor. |
| Construction ICP unresponsive | Test solar EPC in parallel. Explore-exploit switches. |
| Agent slower than manual | Start manual now. Agent replaces when ready. |
| Calls don't convert to pilots | Product/pitch problem, not agent. Separate kill signal. |

Kill signal: 50 messages, 0 replies, 30 days — ICP or message is wrong. Diagnose with explore-exploit data before iterating.
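
A minimal epsilon-greedy sketch of that explore-exploit loop over message variants; the platform's Explore-Exploit Algo may use a different policy (e.g. Thompson sampling), so treat this as illustrative only:

```typescript
// Epsilon-greedy over message variants: mostly exploit the best
// observed reply rate, sometimes explore at random.
interface Variant {
  id: string;
  sends: number;
  replies: number;
}

function pickVariant(variants: Variant[], epsilon = 0.2): Variant {
  if (Math.random() < epsilon) {
    // Explore: uniform random variant
    return variants[Math.floor(Math.random() * variants.length)];
  }
  // Exploit: highest observed reply rate (unsent variants count as 0)
  return variants.reduce((best, v) =>
    v.replies / Math.max(v.sends, 1) > best.replies / Math.max(best.sends, 1) ? v : best
  );
}
```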


Players

Who creates harmony?

Job 1: Find Prospects Worth Talking To

| Element | Detail |
|---------|--------|
| Struggling moment | 3 hours to find 5 prospects, then equal time on high-fit and low-fit |
| Workaround | LinkedIn search, industry events, chase whoever responds |
| Progress | Agent surfaces 10 scored/week ranked by ICP fit with recommended action |
| Hidden objection | "AI-found prospects won't be as good as my network referrals" |
| Switch trigger | Pipeline empty AND 4 of 5 calls with wrong buyer persona |

Job 2: Say Something Worth Reading

| Element | Detail |
|---------|--------|
| Struggling moment | Blank compose window, writing same ineffective message |
| Workaround | Copy template, change name, hope |
| Progress | Agent drafts from research + pain points, human reviews and sends |
| Hidden objection | "AI messages sound like AI messages" |
| Switch trigger | 50 emails, 0 replies |

Job 3: Follow Up Without Forgetting

| Element | Detail |
|---------|--------|
| Struggling moment | Meant to follow up Wednesday, it's Friday, window closing |
| Workaround | Mental note, calendar reminder, hope |
| Progress | Agent schedules: Day 3 email, Day 7 LinkedIn, Day 14 value-add |
| Hidden objection | "Automated follow-up feels like spam" |
| Switch trigger | Warm prospect goes cold because nobody followed up for 2 weeks |
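
Job 3's cadence is pure data; a sketch of how the Follow-up Sequencer could consume it (field names are illustrative, not the CRM's actual task schema):

```typescript
// Cadence from Job 3's Progress row, expressed as data the
// Follow-up Sequencer could turn into CRM tasks.
interface Touch {
  day: number; // days after initial outreach
  channel: "email" | "linkedin";
  intent: string;
}

const followUpCadence: Touch[] = [
  { day: 3, channel: "email", intent: "check-in" },
  { day: 7, channel: "linkedin", intent: "connect / light touch" },
  { day: 14, channel: "email", intent: "value-add (relevant resource)" },
];
```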

Job 4: Learn What Converts

| Element | Detail |
|---------|--------|
| Struggling moment | Month of outreach, no idea what worked |
| Workaround | Remember which emails "felt good" |
| Progress | Dashboard: sent, opened, replied, meetings — by channel, segment, type |
| Hidden objection | "Not enough volume for data to be meaningful" |
| Switch trigger | Same approach keeps failing, no data to diagnose |
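
Job 4's dashboard is a grouped funnel count over logged touchpoints; a sketch of the aggregation, with the touchpoint shape assumed rather than taken from the CRM:

```typescript
// Funnel counts grouped by channel. The same grouping works for
// segment and message type. Touchpoint shape is illustrative.
interface Touchpoint {
  channel: string;
  segment: string;
  sent: boolean;
  opened: boolean;
  replied: boolean;
  meeting: boolean;
}

type Funnel = { sent: number; opened: number; replied: number; meetings: number };

function byChannel(touches: Touchpoint[]): Map<string, Funnel> {
  const out = new Map<string, Funnel>();
  for (const t of touches) {
    const f = out.get(t.channel) ?? { sent: 0, opened: 0, replied: 0, meetings: 0 };
    if (t.sent) f.sent++;
    if (t.opened) f.opened++;
    if (t.replied) f.replied++;
    if (t.meeting) f.meetings++;
    out.set(t.channel, f);
  }
  return out;
}
```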

Job 5: Pre-Qualify for Media Clients

| Element | Detail |
|---------|--------|
| Struggling moment | Every lead gen campaign: volume not quality. Client sales team complains. Agency absorbs blame. |
| Workaround | Pad volume, rotate providers, absorb complaints |
| Progress | Agent qualifies via conversation before handover — leads arrive with context |
| Hidden objection | "Every AI tool promises better leads — heard it before" |
| Switch trigger | Client threatens to pull media budget because leads waste their time |

ICP: NZ Media Agency

Archetype. Sneakers Media exemplifies the segment.

| Attribute | Specification |
|-----------|---------------|
| Role | Agency owner/MD or media director |
| Context | NZ media agency, 5-50 staff, $500K-$20M annual media spend |
| Geography | New Zealand, primarily Auckland and Wellington |
| Shared Pain | Lead gen = volume, not quality. Client teams complain. Agency reputation hit. |

Psycho-logic:

| They Say | They Mean |
|----------|-----------|
| "Lead quality is always poor" | "We get blamed after handover" |
| "We need to test it first" | "Show proof, not promises" |
| "Our client's sales team complains" | "When they complain, our contract is at risk" |

The real problem: Not "higher quality leads" — leads their client's sales team doesn't complain about. The complaint loop costs more than the CPL.

Revenue model: Pilot CPL ($20-100/lead) → Platform competing with lead gen providers → Partnership (media + creative + AI/data).

Two Deployment Contexts

Same WorkChart, different jobs. One instrument, multiple ventures. BOaaS proven.

| Context | ICP | Job | Revenue Path |
|---------|-----|-----|--------------|
| Internal (dogfood) | Construction/solar EPC | Discovery calls for Stackmates | Product validation + first customer |
| External (Sneakers) | NZ homeowners via HRV | Pre-qualified leads for agency clients | CPL from existing media budgets |

Context

Questions

If lead gen is a data flow problem dressed as a marketing problem, what breaks when the data flow is better than the marketing?

  • When does the agent's qualifying conversation become the product — not a feature of the product?
  • What data from Horizon 1 (this agent) must flow to Horizon 2 (SPO) for the orchestra to tune itself?
  • If the NZ proving ground works, does it prove the model or prove the geography?