Site Playbook
How do you go from a pain statement to a live page that measures its own performance?
Five stages. Each has a gate. Pass the gate or go back.
Overview
| Field | Value |
|---|---|
| Purpose | Build instrumented conversion pages from validated pain |
| Trigger | New venture, new ICP, new landing page needed |
| Frequency | Per page (typically quarterly) |
| Duration | 4-8 hours for first ship |
| Owner | Founder (judgment) + AI (execution) |
| Output | Live page with analytics wired before launch |
Agents
| Agent | Stage | Job |
|---|---|---|
| Human (founder) | All | Positioning, judgment, taste, design direction |
| AI (research) | 1-2 | Research, positioning, narrative copy |
| AI (code generation) | 3 | Page generation, component building |
| Positioning pipeline | 1 | Ogilvy research + Sutherland psychology |
| Landing page builder | 3 | Demand-side questions + design thresholds |
| Rendering verifier | 3 | Post-build visual verification |
Tools
| Tool | Stage | Job |
|---|---|---|
| AI assistant | 1-2 | Research, positioning, narrative copy |
| Code generator | 3 | Page generation, component building |
| Product analytics | 4-5 | Event tracking, session replay, feature flags |
| Web analytics | 4-5 | Privacy-first traffic measurement |
| Hosting platform | 3-5 | Deploy, preview, production |
Process
```
Validate   →   Story    →   Design  →  Instrument  →   Learn
    ↓             ↓            ↓           ↓              ↓
positioning   narrative      page      analytics       weekly
  pipeline     arc map       build      wiring          PDCA
```
Each stage inherits context from the previous. Skip a stage and the page ships without foundation.
Stage 1: Validate
Run a positioning pipeline. Six steps produce a creative brief with nine fields:
| Field | What It Answers |
|---|---|
| Problem | What pain exists? |
| Insight | What truth have you earned? |
| One Thing | The single benefit (not feature) |
| Proof | What convinces a skeptic? |
| Frame | Category, reframe, comparison |
| Action | What do they do next? |
| Feeling | What emotion do they leave with? |
| Position | Where you sit in their mind |
| Loop | Which VVFL station this serves |
Gate: Can you compress the nine fields into a creative brief that fits on one index card? If not, your positioning lacks clarity. Go back.
Stage 2: Story
Input: Creative brief from Stage 1
Map the Tight Five to page sections. Every section earns its place by serving one of the five priorities.
| Tight Five | Page Section | Job |
|---|---|---|
| Purpose | Hero + Enemy | Why this exists, what you fight |
| Principles | Philosophy | What truths guide the approach |
| Platform | How It Works | The mechanism diagram |
| Perspective | Social Proof | What you see that others miss |
| Performance | CTA | The metric commitment |
Narrative Arc
The homepage follows a specific emotional beat sequence. Your page should too.
| Beat | Emotional Intent | Design Expression |
|---|---|---|
| Hero | Confidence + urgency | High contrast, large type, single CTA |
| Enemy | Tension + recognition | Darker palette, sharp edges, pain language |
| Solution | Relief + clarity | Lighter palette, whitespace, mechanism diagram |
| Proof | Trust + validation | Logos, testimonials, specific numbers |
| CTA | Momentum + action | Accent color, generous padding, action verb |
Gate: Read the section headings aloud. Does a stranger know what the page offers, who it serves, and what to do? If not, rewrite the headings.
Stage 3: Design
Input: Narrative arc from Stage 2 + creative brief from Stage 1
Answer Singer's six questions before touching code:
- When does this need arise?
- What progress do they want?
- What do they use now instead?
- What's the hidden objection?
- What category are we compared to?
- What's the headline promise?
Follow the landing page bible at /docs/software/products/design/design-system/uiux-landing-page.md. The measurable thresholds:
| Check | Threshold |
|---|---|
| Contrast | 4.5:1 minimum |
| Touch targets | 44x44px minimum |
| CTA | Above fold, unique color, 44px+ height |
| LCP | Under 2.5s |
| Value prop | Visible without scrolling at 768px |
| Five-second test | 5 people correctly identify what, who, action |
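The contrast threshold is checkable before any human review. A minimal sketch using the WCAG 2.1 relative-luminance formula — the function names are illustrative, not from any tool this playbook mandates:

```typescript
// WCAG 2.1 relative luminance for an sRGB color (0-255 channels).
function luminance(r: number, g: number, b: number): number {
  const lin = (c: number) => {
    const s = c / 255;
    return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
  };
  return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
}

// Contrast ratio between two colors; 4.5:1 is the playbook's minimum.
function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number],
): number {
  const l1 = luminance(...fg);
  const l2 = luminance(...bg);
  const [hi, lo] = l1 > l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // 21.0
```

Run it against every text/background pair in the design tokens; anything under 4.5 fails the gate.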
Post-build: Run a rendering verification. "Present in the DOM" is not "visible to a human."
Gate: All six thresholds pass with cited evidence. Screenshot proof for the five-second test.
Stage 4: Instrument
Wire three metrics before launch. Not after.
| Metric | What It Measures | Tool | Event |
|---|---|---|---|
| Attention | Do they stay? | Plausible | Scroll depth at 25/50/75/100% |
| Belief | Do they engage? | PostHog | Section visibility, proof clicks |
| Conversion | Do they act? | Either | Form submit, waitlist join, CTA click |
PostHog event schema — define events before launch so Week 1 data is clean:
```
page_view       → {page, source, utm_*}
section_visible → {section_id, time_on_page}
cta_click       → {cta_id, section, variant}
form_start      → {form_id}
form_submit     → {form_id, fields_count}
scroll_depth    → {depth_percent, time_on_page}
```
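The scroll_depth event should fire once per threshold, never on every scroll tick. A sketch of the bucketing and dedup logic as a pure function — wiring it to a real scroll listener and to posthog.capture is left out, and the names are illustrative:

```typescript
const DEPTH_THRESHOLDS = [25, 50, 75, 100] as const;

// Given scroll position as a fraction of page height and the set of
// thresholds already reported, return the depth_percent values to fire now.
function newDepthEvents(
  scrolledFraction: number,
  reported: Set<number>,
): number[] {
  const percent = Math.min(100, Math.floor(scrolledFraction * 100));
  const fire = DEPTH_THRESHOLDS.filter((t) => percent >= t && !reported.has(t));
  fire.forEach((t) => reported.add(t));
  return fire;
}

const seen = new Set<number>();
console.log(newDepthEvents(0.6, seen)); // [25, 50]
console.log(newDepthEvents(0.6, seen)); // [] — already reported
console.log(newDepthEvents(1.0, seen)); // [75, 100]
```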
Plausible goals:
Goal 1: CTA click (CSS selector: .cta-primary)
Goal 2: Form submit (page: /thank-you or custom event)
Goal 3: Scroll 75% (custom event via JS)
Gate: Open the page in an incognito window. Click the CTA. Check the analytics dashboard. If the event does not appear within 30 seconds, the instrumentation is broken.
Stage 5: Ship and Learn
Weekly PDCA
| Day | Action |
|---|---|
| Monday | Review 10-15 session replays. Form one hypothesis. |
| Tuesday | Implement one atomic change. |
| Wednesday-Sunday | Observe ~150 visitors. |
| Next Monday | Directional signal: keep, revert, or iterate. |
One change per week. Not three. Atomic means isolated — you know exactly what caused the signal change.
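The Monday keep/revert/iterate call can be framed as a simple relative-change check. A sketch — the ±10% threshold is an assumption for illustration, not a number from this playbook; tune it to your baseline noise:

```typescript
type Verdict = "keep" | "revert" | "iterate";

// Directional-signal helper for the Monday review.
// threshold is the relative change treated as a real signal (assumed 10%).
function weeklyVerdict(
  baselineRate: number,
  observedRate: number,
  threshold = 0.1,
): Verdict {
  const relativeChange = (observedRate - baselineRate) / baselineRate;
  if (relativeChange >= threshold) return "keep";
  if (relativeChange <= -threshold) return "revert";
  return "iterate"; // signal too weak either way — refine and re-test
}

console.log(weeklyVerdict(0.02, 0.026)); // keep    (+30%)
console.log(weeklyVerdict(0.02, 0.015)); // revert  (-25%)
console.log(weeklyVerdict(0.02, 0.021)); // iterate (+5%)
```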
Monthly
Run a five-second test with 5 people. If fewer than 4 correctly identify what, who, and action, the page has drifted.
Quarterly
Full audit against the marketing protocols matrix. Score every protocol dimension. Compare to last quarter.
Prompts
Five prompts. Each inherits context from the previous. The output of each becomes the input of the next.
Pain Statement → Creative Brief → Narrative Arc → Page Spec → Event Schema → PDCA Log
Prompt 1: Extract
Role: Positioning strategist (Ogilvy research + Sutherland psychology)
Input: A pain statement, product description, and any available proof points.
Constraint: Output must fit the nine-field creative brief format. No field left empty. "One Thing" must be a benefit, not a feature.
Given this pain statement and product:
Pain: [paste pain statement]
Product: [paste product description]
Proof: [paste any evidence — testimonials, metrics, usage data]
Generate a creative brief with exactly these nine fields:
1. Problem — The pain in one sentence
2. Insight — The earned truth behind it
3. One Thing — Single benefit (not feature)
4. Proof — What convinces a skeptic
5. Frame — Category + reframe + comparison
6. Action — What they do next
7. Feeling — The emotion they leave with
8. Position — Where you sit in their mind
9. Loop — Which feedback loop type this serves
(runaway = extractive | corrective = controlling | virtuous = compounding)
Rules:
- "One Thing" must pass: "This helps me ___" not "This has ___"
- "Proof" must be verifiable, not aspirational
- "Frame" must name what you're compared to and why you're different
Output format: JSON with nine fields. Each value is one sentence.
Quality check: Read the "One Thing" aloud. If it sounds like a feature list item, rewrite as the outcome the user gets.
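The "no field left empty" constraint on the nine-field JSON is enforceable with a minimal validator. A sketch — the field list comes from the brief format above; the code itself is not part of the playbook:

```typescript
const BRIEF_FIELDS = [
  "problem", "insight", "one_thing", "proof", "frame",
  "action", "feeling", "position", "loop",
] as const;

// Returns the missing or empty fields; an empty result means the
// brief satisfies the "no field left empty" constraint.
function missingBriefFields(brief: Record<string, unknown>): string[] {
  return BRIEF_FIELDS.filter((f) => {
    const v = brief[f];
    return typeof v !== "string" || v.trim() === "";
  });
}

const draft = {
  problem: "Teams ship pages with no instrumentation.",
  insight: "",
};
console.log(missingBriefFields(draft));
// every field except "problem": "insight" is empty, the rest are absent
```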
Prompt 2: Story
Role: Narrative architect (Tight Five mapping + emotional beat design)
Input: Creative brief JSON from Prompt 1.
Given this creative brief:
[paste JSON from Prompt 1]
Map it to a page narrative using the Tight Five framework:
| Tight Five | Page Section | Section Job |
|------------|--------------|-------------|
| Purpose | Hero + Enemy | Why this exists, what you fight |
| Principles | Philosophy | What truths guide the approach |
| Platform | How It Works | The mechanism diagram |
| Perspective | Social Proof | What you see others miss |
| Performance | CTA | The metric commitment |
For each section, generate:
1. Section headline (6 words max)
2. Subhead (15-25 words — explains the how)
3. Body copy (3-5 sentences — earns its place or gets cut)
4. Emotional beat (confidence / tension / relief / trust / momentum)
5. CTA text if applicable (action verb + benefit)
Rules:
- Hero headline must answer: What is it? Who is it for? What do I do?
- Enemy section names a specific enemy, not a vague problem
- How It Works must be diagrammable — if you can't draw it, rewrite it
- CTA copy uses action verb + benefit ("Get the framework" not "Submit")
- No section exceeds 100 words of body copy
Output format: Markdown table with five sections, each with the five sub-fields.
Quality check: Read just the headlines in order. Does a stranger understand the offer in 10 seconds?
Prompt 3: Build
Role: Component architect (React + design system discipline)
Input: Narrative arc from Prompt 2 + design preset choice.
Given this narrative arc:
[paste table from Prompt 2]
Design preset: [cinematic | dense | minimal]
Generate a page specification:
For each section, define:
1. Component name (PascalCase)
2. Layout pattern (hero-centered | split-screen | full-bleed | card-grid)
3. Required elements (headline, subhead, body, image/diagram, CTA button)
4. Design tokens to apply:
- Background: bg-ink | bg-chalk | bg-accent
- Text: text-chalk | text-ink | text-muted
- Spacing: section padding, element gaps
5. Responsive behavior at 375px mobile viewport
Rules:
- One CTA per viewport. CTA color appears on zero non-clickable elements.
- Every image must have alt text that describes the content, not the file name
- Touch targets: 44x44px minimum
- Hero must answer what/who/action without scrolling at 768px
- Diagram sections must degrade gracefully to a numbered list on mobile
Output format: Structured specification per component — not code.
Quality check: For each section, answer: "If I remove this section, does the page still convert?" If yes, the section is not earning its place.
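The 44x44px rule can be checked mechanically against the generated specification. A sketch — the spec shape here is an assumption, not the playbook's required format:

```typescript
interface InteractiveElement {
  id: string;
  widthPx: number;
  heightPx: number;
}

const MIN_TOUCH_PX = 44;

// Returns the ids of elements that fail the 44x44px touch-target minimum.
function touchTargetFailures(elements: InteractiveElement[]): string[] {
  return elements
    .filter((el) => el.widthPx < MIN_TOUCH_PX || el.heightPx < MIN_TOUCH_PX)
    .map((el) => el.id);
}

console.log(
  touchTargetFailures([
    { id: "cta-primary", widthPx: 200, heightPx: 48 },
    { id: "nav-toggle", widthPx: 32, heightPx: 32 },
  ]),
); // ["nav-toggle"]
```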
Prompt 4: Wire
Role: Analytics engineer (PostHog + Plausible instrumentation)
Input: Page specification from Prompt 3.
Given this page specification:
[paste specification from Prompt 3]
Generate an instrumentation plan with three metric types:
1. ATTENTION metrics (do they stay?)
- Scroll depth events at 25%, 50%, 75%, 100%
- Time on page
- Section visibility (which sections enter viewport)
2. BELIEF metrics (do they engage?)
- Clicks on proof elements (testimonials, diagrams, expandable sections)
- Hover/interaction on mechanism diagrams
- Secondary page navigation (clicked through to learn more)
3. CONVERSION metrics (do they act?)
- Primary CTA clicks
- Form starts vs form completions
- Thank-you page views
For each event, specify:
- Event name (snake_case)
- Properties object (key: value pairs)
- Trigger condition (when does this fire?)
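The snake_case naming rule is cheap to lint before events ship. A minimal sketch — the regex encodes one reading of "snake_case" (lowercase words joined by single underscores), which is an assumption:

```typescript
// Matches names like page_view, cta_click, scroll_depth.
const SNAKE_CASE = /^[a-z]+(_[a-z0-9]+)*$/;

// Returns the event names that violate the naming rule.
function invalidEventNames(names: string[]): string[] {
  return names.filter((n) => !SNAKE_CASE.test(n));
}

console.log(
  invalidEventNames(["page_view", "ctaClick", "form_submit", "Scroll-Depth"]),
); // ["ctaClick", "Scroll-Depth"]
```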
Also generate:
- Three Plausible goals with CSS selectors or page paths
- One primary hypothesis about which section drives conversion
- The minimum visitor count needed for a directional signal (~150)
Output format: PostHog event schema as a table + Plausible goals as a list + hypothesis as one sentence.
Quality check: Open the page. Click every interactive element. Does each click map to exactly one event? Missing events = blind spots.
Prompt 5: Learn
Role: Growth analyst (PDCA cycle design)
Input: Instrumentation plan from Prompt 4 + first week of analytics data.
Given this instrumentation plan and data:
[paste schema from Prompt 4]
[paste first week's data summary — or describe expected patterns]
Generate a PDCA learning cycle:
PLAN:
- State one hypothesis about what's working or broken
- Name the specific metric that tests this hypothesis
- Define what "signal" looks like (threshold or direction)
DO:
- Propose one atomic change (single variable)
- Specify what stays constant (control)
- Estimate required sample size for directional signal
CHECK:
- Define the observation period (days)
- State what "keep" vs "revert" looks like
- Identify confounding variables to watch
ACT:
- If keep: what's the next hypothesis?
- If revert: what alternative hypothesis does the failure suggest?
- Update the narrative arc if the data contradicts Stage 2 assumptions
Rules:
- One change per cycle. Not three.
- "Atomic" means you can attribute any signal change to exactly this change
- If sample size is under 150, extend the observation period — don't add variables
- Never run two tests simultaneously on the same page
Output format: Four-section PDCA document with specific numbers, not vague directional language.
Quality check: Can you explain the hypothesis to a non-technical person in one sentence? If not, the hypothesis is too complex for an atomic test.
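The "extend the observation period, don't add variables" rule can be made concrete: given average daily traffic, compute how many days the CHECK phase needs to reach the ~150-visitor floor. The helper is illustrative; 150 is the playbook's directional-signal minimum, not a statistical power calculation:

```typescript
const MIN_SAMPLE = 150; // directional-signal floor from the playbook

// Days of observation needed to reach the minimum sample at a given
// average daily visitor count.
function observationDays(avgDailyVisitors: number, minSample = MIN_SAMPLE): number {
  if (avgDailyVisitors <= 0) throw new Error("need positive daily traffic");
  return Math.ceil(minSample / avgDailyVisitors);
}

console.log(observationDays(30)); // 5 — fits the Wednesday-Sunday window
console.log(observationDays(12)); // 13 — extend the cycle past one week
```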
Automation Mapping
| Prompt | Automation Level | What It Adds |
|---|---|---|
| 1 | AI-assisted | Full positioning pipeline, not just the brief |
| 2 | Manual | Narrative mapping requires earned conviction |
| 3 | AI-assisted | Demand-side questions + design thresholds |
| 4 | Manual | Analytics wiring is engineering work |
| 5 | Manual | PDCA requires human judgment on data |
Solo Founder Path
- Run Prompt 1 with an AI assistant. Save the JSON.
- Feed JSON into Prompt 2. Save the narrative table.
- Feed narrative + preset choice into Prompt 3. Hand specification to a code generator.
- After the page exists, run Prompt 4. Wire events before launch.
- After Week 1, run Prompt 5. Begin the PDCA cycle.
Time to first ship: 4-8 hours for a simple conversion page.
Artifacts
| Stage | Artifact | Demand Served |
|---|---|---|
| 1 | Creative brief (9 fields) | Trust (positioning clarity) |
| 2 | Narrative arc + section copy | Trust + Conversion |
| 3 | Instrumented conversion page | Conversion |
| 4 | PostHog event schema + Plausible goals | Measurement |
| 5 | PDCA log + hypothesis | Improvement |
Outcomes
| Metric | Target | Measurement |
|---|---|---|
| Five-second test | 4/5 correct | 5 people, pre-launch |
| Scroll depth 75% | >40% of visitors | Plausible, weekly |
| CTA click rate | >2% | PostHog, weekly |
| Time to first ship | 4-8 hours | Clock |
Failure Modes
| Failure | Signal | Fix |
|---|---|---|
| Skip Stage 1 | Page ships without positioning clarity | Go back. Run the positioning pipeline |
| No instrumentation | No data after Week 1 | Wire events before launch, never after |
| Multiple changes per week | Can't attribute signal to cause | One atomic change per PDCA cycle |
| Narrative drift | Five-second test fails monthly | Re-run Stage 2 from original creative brief |
| Vanity metrics | High traffic, zero conversion | Measure belief + conversion, not just attention |
Context
- Marketing Protocols — The work chart this playbook serves
- Marketing Performance — Where metrics land
- Specimen — This playbook applied to the Dreamineering homepage
- Demand Validation — The demand card this pipeline fills
Links
- PostHog — Product analytics, session replay, feature flags
- Plausible — Privacy-first web analytics
- v0 — AI code generation for UI components
- PostHog Docs — Custom Events — Event implementation reference
- Plausible Docs — Custom Goals — Goal setup guide
Questions
How do you know when a page has earned the right to exist — and when it should be killed?
- At what visitor count does weekly PDCA produce reliable signals, and what do you do before that threshold?
- Which of the three metrics (attention, belief, conversion) is the leading indicator for your specific audience?
- If Prompt 1's creative brief is wrong, how far down the chain does the error propagate before it becomes visible?
- When does a prompt chain become rigid enough to scale but flexible enough to handle a genuinely novel product?