Advertising
How do you turn attention into qualified pipeline without burning money on the wrong audience?
Advertising is the amplifier. Without a working funnel and a validated ICP, amplification just makes bad targeting louder. The job is not "run ads" — it's "get more customers without becoming a marketing expert."
Overview
| Attribute | Value |
|---|---|
| Purpose | Generate qualified awareness and pipeline through paid channels |
| Trigger | Product-market fit validated, organic growth insufficient for targets |
| Frequency | Continuous execution, weekly optimization, monthly strategy review |
| Duration | Ongoing campaigns with defined test cycles |
| Owner | Marketing/Growth (human strategy, AI execution) |
| Output | Qualified leads in pipeline, channel ROI data, audience insights |
Human Role: Budget allocation, channel strategy, creative direction, messaging
AI Role: Targeting optimization, creative variants, measurement, bid management
Spectrum: AI-Led (creative AI-assisted, measurement AI-led)
Prerequisites
Tools Required
| Tool | Purpose | Access |
|---|---|---|
| Ad platforms | Campaign execution | LinkedIn Ads, Google Ads, Meta |
| Analytics | Attribution, conversion tracking | Platform analytics + CRM |
| CRM | Lead capture, pipeline tracking | Sales CRM |
| Creative tools | Ad asset creation | Design tools + AI generation |
| Budget tracker | Spend vs performance | Spreadsheet or ad platform |
Knowledge Requirements
- Validated ICP with psycho-logic profile
- Working conversion funnel with stage metrics
- Product positioning and value proposition
- Competitive advertising landscape
Inputs
| Input | Source | Required? |
|---|---|---|
| ICP definition | ICP Framework | Yes |
| Pipeline gap | Funnel Engineering — leads needed | Yes |
| Budget | Business plan / quarterly allocation | Yes |
| Creative assets | Marketing / content library | Yes |
| Historical ad data | Past campaigns | If available |
Process
Phase 1: Channel Selection
Duration: 2-4 hours quarterly
Responsibility: Human strategy
Not all channels are equal. Match the channel to where your ICP spends attention.
| Channel | Best For | Cost Range | Speed | B2B Fit |
|---|---|---|---|---|
| LinkedIn Ads | Decision-maker targeting by title/industry | $5-15 CPC | Medium | High |
| Google Search | High-intent buyers actively searching | $2-20 CPC | Fast | High |
| Google Display | Retargeting, brand awareness | $0.50-3 CPC | Slow | Medium |
| Meta (Facebook/Instagram) | B2C, some B2B retargeting | $1-5 CPC | Medium | Low-Medium |
| Content syndication | Gated content for lead gen | $20-100 CPL | Medium | High |
| Podcast/newsletter sponsorship | Niche audience, trust transfer | $500-5000/placement | Slow | High |
| Event sponsorship | Face-to-face, high intent | $1000-50000 | Slow | High |
For B2B SaaS selling to construction/solar teams: Start with LinkedIn (decision-maker targeting) + Google Search (intent capture). Add content syndication when you have a proven lead magnet.
Step 1.1: Confirm Before Spending
- ICP definition is current and validated
- Landing page converts (>3% visitor-to-lead)
- Lead qualification process handles volume
- Tracking is configured end-to-end (ad click → lead → deal → revenue)
- Budget allocated for minimum viable test (usually $2-5K per channel)
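End-to-end tracking starts with consistent campaign tagging, so every ad click carries its source through to the CRM. A minimal sketch using standard UTM parameters (the base URL and tag values here are illustrative, not prescribed):

```python
from urllib.parse import urlencode

def tagged_url(base, source, medium, campaign, content):
    """Build a UTM-tagged landing page URL so each click can be
    attributed from ad -> lead -> deal -> revenue."""
    params = {
        "utm_source": source,      # e.g. "linkedin"
        "utm_medium": medium,      # e.g. "cpc"
        "utm_campaign": campaign,  # e.g. "construction-crm-pipeline"
        "utm_content": content,    # creative variant, e.g. "variant-a-pain"
    }
    return f"{base}?{urlencode(params)}"

url = tagged_url("https://example.com/demo", "linkedin", "cpc",
                 "construction-crm-pipeline", "variant-a-pain")
print(url)
```

The same tag scheme should be applied to every variant before launch; mixed or missing tags are what produce the "attribution confusion" failure mode later.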
Phase 2: Campaign Architecture
Duration: 4-8 hours per campaign
Responsibility: Human creative direction, AI variants
Step 2.1: Message Framework
Every ad answers three questions for the ICP:
| Question | Your Answer | Example |
|---|---|---|
| What's the pain? | Name their specific problem | "40% of RFP time is rebuilding answers from scratch" |
| What's the promise? | Specific outcome, not features | "Cut RFP response time by 70%. Your answers compound." |
| What's the proof? | Evidence that it works | "Auto-fill rate: 70%. Library grows with every bid." |
Step 2.2: Campaign Structure
| Layer | Purpose | Example |
|---|---|---|
| Campaign | Business objective | "Construction CRM pipeline" |
| Ad set | Audience segment | "NZ construction, 10-50 employees, operations manager" |
| Ad | Creative variant | Variant A: pain-led. Variant B: proof-led. Variant C: social proof. |
Step 2.3: Landing Page Alignment
The ad and landing page must tell the same story. Mismatch = bounce.
| Ad Message | Landing Page Must | Kill If |
|---|---|---|
| Pain statement | Mirror the exact pain language | Landing talks about features, not pain |
| Specific promise | Deliver on the promise in headline | Promise is vague or different |
| Proof point | Show the evidence immediately | No proof visible above fold |
| CTA | Match the ask (demo, trial, content) | CTA mismatches ad expectation |
Phase 2 Output: Campaign architecture with message framework, audience targeting, creative variants
Phase 3: Execution and Optimization
Duration: 30 min/day monitoring, 1 hour/week optimization
Responsibility: AI-led optimization, human budget decisions
Step 3.1: Launch Protocol
| Day | Action | Expected |
|---|---|---|
| 0 | Launch with minimum viable budget | Impressions flowing |
| 1-3 | Monitor delivery, check targeting | CTR > 0.5% (search), > 0.1% (display) |
| 7 | First optimization — pause underperformers | Top 50% of ads running |
| 14 | Conversion check — leads coming through? | CPL within target |
| 30 | Full assessment — ROAS calculation | Decision: scale, optimize, or kill |
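The day-30 decision reduces to simple arithmetic on spend, revenue, and CPL. A sketch of that assessment — the thresholds and figures are illustrative assumptions, not benchmarks:

```python
def assess_campaign(spend, revenue, cpl, target_cpl,
                    scale_roas=3.0, kill_roas=1.0):
    """Day-30 call: scale, optimize, or kill.
    Thresholds are illustrative defaults, not prescriptions."""
    roas = revenue / spend if spend else 0.0
    if roas >= scale_roas and cpl <= target_cpl:
        return roas, "scale"     # efficient and profitable: add budget
    if roas >= kill_roas:
        return roas, "optimize"  # breaking even or better: fix levers
    return roas, "kill"          # losing money: stop spending

roas, decision = assess_campaign(spend=3000, revenue=10500,
                                 cpl=45, target_cpl=50)
print(f"ROAS {roas:.1f}:1 -> {decision}")  # ROAS 3.5:1 -> scale
```

Note the asymmetry: "optimize" is the default state, and "scale" requires both the ROAS and the CPL condition to hold.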
Step 3.2: Optimization Levers
| Problem | Signal | Fix |
|---|---|---|
| Low impressions | Budget too low or audience too narrow | Increase budget or broaden targeting |
| Low CTR | Message doesn't resonate | Test new creative, refine headline |
| High CTR, low conversion | Landing page misalignment | Fix landing page or qualify the click |
| High CPL | Competition or wrong channel | Negotiate bidding, test new channel |
| Leads but no pipeline | Wrong audience or weak qualification | Tighten targeting, check ICP match |
Step 3.3: A/B Testing Protocol
Test one variable at a time. Run until statistically significant (minimum 100 conversions per variant, or 2 weeks).
| Variable | What to Test | Measurement |
|---|---|---|
| Headline | Pain vs promise vs proof | CTR |
| Creative | Image vs video vs carousel | CTR + conversion |
| Audience | Industry segment, company size, title | CPL + lead quality |
| Offer | Demo vs trial vs content download | Conversion rate + pipeline |
| Landing page | Long vs short, video vs text | Conversion rate |
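The "statistically significant" bar above can be checked with a standard two-proportion z-test. A minimal sketch using only the standard library (the conversion counts below are made-up examples):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test for A/B conversion rates.
    |z| > 1.96 corresponds to significance at the 95% level."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Variant A: 120/2000 converted (6.0%); Variant B: 90/2000 (4.5%)
z = two_proportion_z(conv_a=120, n_a=2000, conv_b=90, n_b=2000)
print(f"z = {z:.2f}")  # |z| > 1.96, so the difference is significant
```

This is also why the protocol demands one variable at a time: with two variables changing, a significant z tells you something worked but not what.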
Phase 3 Output: Optimized campaigns with conversion data
Phase 4: Measurement
Duration: 1 hour monthly
Responsibility: AI data, human interpretation
Step 4.1: Metrics Hierarchy
| Level | Metric | Role | Why It Matters |
|---|---|---|---|
| Vanity | Impressions, clicks | Awareness | Don't optimize for these alone |
| Leading | CTR, CPL, conversion rate | Efficiency | Early signals of campaign health |
| Lagging | Pipeline created, deals won | Revenue | The only metrics that ultimately matter |
| Unit economics | CAC, LTV:CAC ratio, payback period | Sustainability | Determines if you can scale |
Step 4.2: Channel ROI
| Channel | Spend | Leads | CPL | Pipeline | Deals Won | Revenue | ROAS |
|---|---|---|---|---|---|---|---|
| LinkedIn Ads | ? | ? | ? | ? | ? | ? | ? |
| Google Search | ? | ? | ? | ? | ? | ? | ? |
| Content syndication | ? | ? | ? | ? | ? | ? | ? |
| Total | ? | ? | ? | ? | ? | ? | ? |
Target LTV:CAC ratio: >3:1. Below 3:1, you're spending too much to acquire. Above 5:1, you could be spending more to grow faster.
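The CAC, LTV:CAC, and payback thresholds above reduce to a few lines of arithmetic per channel. A sketch with illustrative inputs (all figures are examples, not benchmarks):

```python
def unit_economics(spend, new_customers, monthly_revenue,
                   gross_margin, avg_lifetime_months):
    """CAC, LTV:CAC ratio, and payback period for one channel.
    Inputs are per-customer monthly figures; all values illustrative."""
    cac = spend / new_customers
    ltv = monthly_revenue * gross_margin * avg_lifetime_months
    payback_months = cac / (monthly_revenue * gross_margin)
    return cac, ltv / cac, payback_months

cac, ratio, payback = unit_economics(
    spend=10_000, new_customers=8,
    monthly_revenue=300, gross_margin=0.8, avg_lifetime_months=30)
print(f"CAC ${cac:.0f}, LTV:CAC {ratio:.1f}:1, payback {payback:.1f} months")
```

In this example the channel clears both bars (ratio above 3:1, payback under 6 months) — and a ratio near 6:1 would suggest, per the note above, that there is room to spend more.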
Step 4.3: Attribution
| Model | When to Use | Limitation |
|---|---|---|
| First touch | Which channel generates awareness? | Ignores nurture path |
| Last touch | What closes deals? | Ignores top-of-funnel |
| Multi-touch | Full journey understanding | Complex to implement |
| Self-reported | "How did you hear about us?" | People forget or simplify |
Use self-reported + multi-touch together. Neither alone tells the truth.
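The difference between the models is just how one deal's credit is split across its touchpoints. A sketch of the three programmatic models from the table (the journey below is a made-up example; "multi" here is the simplest linear variant):

```python
def credit(touchpoints, model):
    """Distribute one deal's credit across its touchpoint channels.
    Models: 'first' touch, 'last' touch, 'multi' (linear multi-touch)."""
    if model == "first":
        return {touchpoints[0]: 1.0}
    if model == "last":
        return {touchpoints[-1]: 1.0}
    share = 1.0 / len(touchpoints)   # equal credit per touch
    out = {}
    for ch in touchpoints:
        out[ch] = out.get(ch, 0.0) + share
    return out

journey = ["linkedin", "google_search", "newsletter", "google_search"]
print(credit(journey, "first"))  # {'linkedin': 1.0}
print(credit(journey, "multi"))  # {'linkedin': 0.25, 'google_search': 0.5, 'newsletter': 0.25}
```

Run over the same deals, the three models can rank channels in completely different orders — which is why no single model, and no platform reporting only one, tells the whole truth.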
Phase 4 Output: Channel ROI analysis, budget reallocation recommendations
The SME Problem
Why does advertising fail for the businesses that need it most?
The advertising ecosystem is built for volume buyers with dedicated teams. An SME owner running ads while also running their business faces a stacked deck:
| What They Need | What They Get | Gap |
|---|---|---|
| Clear ROI | Platform self-reported metrics | Can't verify if ads worked |
| Right audience | Algorithmic suggestions they can't evaluate | Pay for reach, hope for relevance |
| Creative that converts | One static ad they ran once | No volume for A/B testing |
| Budget efficiency | Agency minimums or self-serve guesswork | Expertise locked behind spend thresholds |
| Cash flow timing | Pay upfront, revenue in 30-90 days | Growth windows close while waiting |
The job to be done is not "run ads." It's "get more customers without becoming a marketing expert." Every intermediary in the current stack is hired to solve a different job — the agency's job is to spend budget, the platform's job is to win auctions. The buyer's job falls between the cracks.
Meta sees this. Zuckerberg describes the endgame: businesses state objectives and budget, AI delivers results. Every business gets a messaging agent for support and sales — already standard in Thailand and Vietnam. Small, talent-dense teams outsource non-core functions to AI.
The catch: Meta's agent serves Meta's margin. The SME trades one black box for another. The platform automates the buying, but still grades its own homework. Verification stays internal. The independent, buyer-aligned version — where an agent works across platforms with on-chain verification — doesn't exist yet. That's the gap.
Where SMEs Sit
Most SME buyers are Problem Aware — they feel the pain but can't name the cause. Match message to awareness:
| Awareness | What They Think | What To Say |
|---|---|---|
| Problem Aware | "Ads don't work for us" | Name the real cause — platform self-reporting, wrong audience signals |
| Solution Aware | "AI tools exist but seem complex" | Show the job match — agent handles the 90%, you handle the 10% |
| Product Aware | "Agents sound promising, unproven" | Remove the hidden objection — verified results, pay-for-outcome |
The JTBD interview reveals which moment they're in. The trigger: growth plateaus, network maxed out, word of mouth hits ceiling. The hidden objection: "I'll waste money and won't know if it worked."
The agent-native shift addresses the job directly. An agent handles targeting, bidding, creative variants, and measurement — the 90% that's mechanical. The business owner handles strategy, budget, and judgment — the 10% that requires context. BOaaS economics: enterprise-grade precision at self-serve prices.
Pre-agent checklist still applies. The agent needs the same inputs you'd give an agency: validated ICP, working funnel, clear value proposition. Automation amplifies signal and noise equally.
The DePIN Disruption
The advertising industry is being restructured by crypto incentives and verified data.
| Traditional | DePIN-Enabled |
|---|---|
| Platform captures data (Meta, Google) | Community captures data (375ai sensors) |
| 50-60% of spend reaches publisher | 90%+ reaches publisher (on-chain settlement) |
| 30-90 day payment cycles | Sub-second settlement (Sui) |
| Modelled audience estimates | Verified physical presence (GEODNET + 375ai) |
This matters for sales: verified presence data eliminates wasted ad spend. Instead of targeting "people who Google might think are in construction," target "people verified to be at construction sites." The targeting accuracy difference is the margin difference.
DePIN Advertising Workflow
| Step | Traditional | DePIN | Advantage |
|---|---|---|---|
| Targeting | Modelled demographics | Verified location + activity | 3-5x accuracy improvement |
| Delivery | Platform intermediary | Direct to publisher | 30-40% cost reduction |
| Measurement | Platform self-reporting | On-chain verification | Fraud elimination |
| Payment | Net-30 to net-90 | Sub-second settlement | Cash flow improvement |
Technology stack:
- Advertising Platform — Full tech stack with Web3 layer
- 375ai — Verified physical presence for advertising
- Alkimi Exchange — Decentralized ad exchange on Sui
Outputs
| Output | Format | Destination |
|---|---|---|
| Leads from paid channels | CRM contacts with source attribution | Lead Qualification |
| Channel ROI data | Monthly report | Budget planning |
| Audience insights | Targeting refinements | ICP updates |
| Creative performance | A/B test results | Marketing playbook |
Success Criteria
Performance Metrics
| Metric | Target | Timeframe |
|---|---|---|
| Cost per lead (CPL) | <$50 for B2B SaaS | Monthly |
| Lead-to-pipeline rate | >20% of paid leads enter pipeline | Monthly |
| LTV:CAC ratio | >3:1 | Quarterly |
| ROAS (Return on Ad Spend) | >3:1 | Monthly |
| Payback period | <6 months | Per cohort |
Failure Modes
| Failure | Symptom | Diagnosis | Solution |
|---|---|---|---|
| Burning budget | High spend, no pipeline | Wrong audience or weak landing page | Pause, fix targeting or conversion |
| Vanity optimization | Great CTR, zero leads | Clicks aren't buyers | Optimize for conversion, not clicks |
| Channel addiction | 100% budget in one channel | Risk concentration | Diversify when one channel proven |
| Creative fatigue | Performance declining over weeks | Same ads, audience saturated | Refresh creative every 4-6 weeks |
| Attribution confusion | Can't tell what's working | No tracking or broken attribution | Fix end-to-end tracking before spending more |
Context
- Advertising Industry — Trust problem, Sui stack, buyer analysis
- Jobs To Be Done — The framework that glues provider to consumer
- Validate Demand — Awareness levels drive messaging
- Funnel Engineering — Where advertising leads enter the pipeline
- ICP Framework — Who to target
- Agent Commerce — Standards for autonomous AI transactions