Site Playbook

How do you go from a pain statement to a live page that measures its own performance?

Five stages. Each has a gate. Pass the gate or go back.

Overview

| Field | Value |
|-------|-------|
| Purpose | Build instrumented conversion pages from validated pain |
| Trigger | New venture, new ICP, new landing page needed |
| Frequency | Per page (typically quarterly) |
| Duration | 4-8 hours for first ship |
| Owner | Founder (judgment) + AI (execution) |
| Output | Live page with analytics wired before launch |

Agents

| Agent | Stage | Job |
|-------|-------|-----|
| Human (founder) | All | Positioning, judgment, taste, design direction |
| AI (research) | 1-2 | Research, positioning, narrative copy |
| AI (code generation) | 3 | Page generation, component building |
| Positioning pipeline | 1 | Ogilvy research + Sutherland psychology |
| Landing page builder | 3 | Demand-side questions + design thresholds |
| Rendering verifier | 3 | Post-build visual verification |

Tools

| Tool | Stage | Job |
|------|-------|-----|
| AI assistant | 1-2 | Research, positioning, narrative copy |
| Code generator | 3 | Page generation, component building |
| Product analytics | 4-5 | Event tracking, session replay, feature flags |
| Web analytics | 4-5 | Privacy-first traffic measurement |
| Hosting platform | 3-5 | Deploy, preview, production |

Process

```
Validate     →  Story      →  Design  →  Instrument  →  Learn
    ↓              ↓             ↓            ↓            ↓
positioning    narrative      page       analytics     weekly
pipeline       arc map        build      wiring        PDCA
```

Each stage inherits context from the previous. Skip a stage and the page ships without foundation.

Stage 1: Validate

Run a positioning pipeline. Six steps produce a creative brief with nine fields:

| Field | What It Answers |
|-------|-----------------|
| Problem | What pain exists? |
| Insight | What truth have you earned? |
| One Thing | The single benefit (not feature) |
| Proof | What convinces a skeptic? |
| Frame | Category, reframe, comparison |
| Action | What do they do next? |
| Feeling | What emotion do they leave with? |
| Position | Where you sit in their mind |
| Loop | Which VVFL station this serves |

Gate: Can you compress the nine fields into a creative brief that fits on one index card? If not, your positioning lacks clarity. Go back.

Stage 2: Story

Input: Creative brief from Stage 1

Map the Tight Five to page sections. Every section earns its place by serving one of the five priorities.

| Tight Five | Page Section | Job |
|------------|--------------|-----|
| Purpose | Hero + Enemy | Why this exists, what you fight |
| Principles | Philosophy | What truths guide the approach |
| Platform | How It Works | The mechanism diagram |
| Perspective | Social Proof | What you see that others miss |
| Performance | CTA | The metric commitment |

Narrative Arc

The homepage follows a specific emotional beat sequence. Your page should too.

| Beat | Emotional Intent | Design Expression |
|------|------------------|-------------------|
| Hero | Confidence + urgency | High contrast, large type, single CTA |
| Enemy | Tension + recognition | Darker palette, sharp edges, pain language |
| Solution | Relief + clarity | Lighter palette, whitespace, mechanism diagram |
| Proof | Trust + validation | Logos, testimonials, specific numbers |
| CTA | Momentum + action | Accent color, generous padding, action verb |

Gate: Read the section headings aloud. Does a stranger know what the page offers, who it serves, and what to do? If not, rewrite the headings.

Stage 3: Design

Input: Narrative arc from Stage 2 + creative brief from Stage 1

Answer Singer's six questions before touching code:

  1. When does this need arise?
  2. What progress do they want?
  3. What do they use now instead?
  4. What's the hidden objection?
  5. What category are we compared to?
  6. What's the headline promise?

Follow the landing page bible at /docs/software/products/design/design-system/uiux-landing-page.md. The measurable thresholds:

| Check | Threshold |
|-------|-----------|
| Contrast | 4.5:1 minimum |
| Touch targets | 44x44px minimum |
| CTA | Above fold, unique color, 44px+ height |
| LCP | Under 2.5s |
| Value prop | Visible without scrolling at 768px |
| Five-second test | 5 people correctly identify what, who, action |
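The contrast threshold is checkable in code, not just by eye. A minimal sketch of the WCAG 2.1 relative-luminance formula behind the 4.5:1 ratio — a sanity check, not a replacement for an audit tool like Lighthouse:

```typescript
// Contrast ratio per WCAG 2.1: (L_lighter + 0.05) / (L_darker + 0.05),
// where L is relative luminance of an sRGB color.
function srgbChannel(c: number): number {
  const s = c / 255;
  return s <= 0.03928 ? s / 12.92 : Math.pow((s + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * srgbChannel(r) + 0.7152 * srgbChannel(g) + 0.0722 * srgbChannel(b);
}

export function contrastRatio(
  fg: [number, number, number],
  bg: [number, number, number]
): number {
  // Sort so the lighter color is the numerator; ratio is symmetric.
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
```

Mid-grey text (#808080) on white scores roughly 3.9:1 — it looks fine on a designer's monitor and still fails the gate.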

Post-build: Run a rendering verification. "Present in the DOM" is not "visible to a human."

Gate: All six thresholds pass with cited evidence. Screenshot proof for the five-second test.

Stage 4: Instrument

Wire three metrics before launch. Not after.

| Metric | What It Measures | Tool | Event |
|--------|------------------|------|-------|
| Attention | Do they stay? | Plausible | Scroll depth at 25/50/75/100% |
| Belief | Do they engage? | PostHog | Section visibility, proof clicks |
| Conversion | Do they act? | Either | Form submit, waitlist join, CTA click |
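The Belief row is the fiddliest to wire. A sketch of section-visibility tracking via IntersectionObserver — the `posthog.capture` call is the real posthog-js API, but the 50% visibility threshold and `section[id]` selector are assumptions to adapt:

```typescript
// Builds the section_visible payload (section id + seconds on page).
// Kept pure so it can be tested without a browser.
export function sectionVisiblePayload(
  sectionId: string,
  pageLoadedAtMs: number,
  nowMs: number
): { section_id: string; time_on_page: number } {
  return {
    section_id: sectionId,
    time_on_page: Math.round((nowMs - pageLoadedAtMs) / 1000),
  };
}

// Browser wiring (assumes a loaded posthog-js client named `posthog`):
// const loadedAt = performance.now();
// const seen = new Set<string>();
// const observer = new IntersectionObserver((entries) => {
//   for (const entry of entries) {
//     const id = (entry.target as HTMLElement).id;
//     if (entry.isIntersecting && !seen.has(id)) {
//       seen.add(id); // fire once per section per page view
//       posthog.capture("section_visible", sectionVisiblePayload(id, loadedAt, performance.now()));
//     }
//   }
// }, { threshold: 0.5 });
// document.querySelectorAll("section[id]").forEach((s) => observer.observe(s));
```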

PostHog event schema — define events before launch so Week 1 data is clean:

```
page_view       → {page, source, utm_*}
section_visible → {section_id, time_on_page}
cta_click       → {cta_id, section, variant}
form_start      → {form_id}
form_submit     → {form_id, fields_count}
scroll_depth    → {depth_percent, time_on_page}
```
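One way to keep `cta_click` properties consistent with the schema above is to read them from data attributes on the button itself, so copy edits never break tracking. A sketch — the `data-*` attribute names and the `.cta-primary` selector are assumptions:

```typescript
type CtaClickProps = { cta_id: string; section: string; variant: string };

// Maps an element's dataset to the cta_click schema, with explicit
// fallbacks so a missing attribute surfaces as "unknown" in the dashboard
// instead of silently dropping the property.
export function ctaClickProps(
  dataset: Record<string, string | undefined>
): CtaClickProps {
  return {
    cta_id: dataset.ctaId ?? "unknown",
    section: dataset.section ?? "unknown",
    variant: dataset.variant ?? "control",
  };
}

// Browser wiring (assumes posthog-js loaded as `posthog`):
// document.querySelectorAll<HTMLElement>(".cta-primary").forEach((el) => {
//   el.addEventListener("click", () =>
//     posthog.capture("cta_click", ctaClickProps(el.dataset))
//   );
// });
```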

Plausible goals:

```
Goal 1: CTA click   (CSS selector: .cta-primary)
Goal 2: Form submit (page: /thank-you or custom event)
Goal 3: Scroll 75%  (custom event via JS)
```
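Goal 3 needs a small script. A sketch of once-per-milestone scroll tracking — `plausible()` is the global the official Plausible snippet defines; the rest of the wiring is an assumption:

```typescript
const MILESTONES = [25, 50, 75, 100];

// Returns milestones newly crossed at this scroll position, so each
// depth event fires exactly once per page view.
export function newMilestones(
  scrolledPx: number,
  pageHeightPx: number,
  alreadyFired: Set<number>
): number[] {
  const pct = Math.min(100, (scrolledPx / pageHeightPx) * 100);
  return MILESTONES.filter((m) => pct >= m && !alreadyFired.has(m));
}

// Browser wiring:
// const fired = new Set<number>();
// window.addEventListener("scroll", () => {
//   const scrolled = window.scrollY + window.innerHeight;
//   for (const m of newMilestones(scrolled, document.body.scrollHeight, fired)) {
//     fired.add(m);
//     if (m === 75) plausible("Scroll 75%"); // name must match the configured goal
//   }
// }, { passive: true });
```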

Gate: Open the page in an incognito window. Click the CTA. Check the analytics dashboard. If the event does not appear within 30 seconds, the instrumentation is broken.

Stage 5: Ship and Learn

Weekly PDCA

| Day | Action |
|-----|--------|
| Monday | Review 10-15 session replays. Form one hypothesis. |
| Tuesday | Implement one atomic change. |
| Wednesday-Sunday | Observe ~150 visitors. |
| Next Monday | Directional signal: keep, revert, or iterate. |

One change per week. Not three. Atomic means isolated — you know exactly what caused the signal change.
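Why ~150 visitors yields a directional signal, not proof: at a 2% conversion rate, the binomial standard error is still more than half the rate itself. A back-of-envelope helper (a rough sketch, not a power analysis):

```typescript
// Standard error of an observed conversion rate p over n visitors,
// using the binomial approximation sqrt(p * (1 - p) / n).
export function conversionSE(p: number, n: number): number {
  return Math.sqrt((p * (1 - p)) / n);
}

// At the 2% CTA-click target over 150 visitors, the ±1 SE band spans
// roughly 0.9%-3.1% — wide enough that only large swings are trustworthy.
```

This is the reason the rules say extend the observation period rather than add variables: halving the standard error takes four times the traffic.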

Monthly

Run a five-second test with 5 people. If fewer than 4 correctly identify what, who, and action, the page has drifted.

Quarterly

Full audit against the marketing protocols matrix. Score every protocol dimension. Compare to last quarter.

Prompts

Five prompts. Each inherits context from the previous. The output of each becomes the input of the next.

Pain Statement → Creative Brief → Narrative Arc → Page Spec → Event Schema → PDCA Log

Prompt 1: Extract

Role: Positioning strategist (Ogilvy research + Sutherland psychology)

Input: A pain statement, product description, and any available proof points.

Constraint: Output must fit the nine-field creative brief format. No field left empty. "One Thing" must be a benefit, not a feature.

Given this pain statement and product:

Pain: [paste pain statement]
Product: [paste product description]
Proof: [paste any evidence — testimonials, metrics, usage data]

Generate a creative brief with exactly these nine fields:

1. Problem — The pain in one sentence
2. Insight — The earned truth behind it
3. One Thing — Single benefit (not feature)
4. Proof — What convinces a skeptic
5. Frame — Category + reframe + comparison
6. Action — What they do next
7. Feeling — The emotion they leave with
8. Position — Where you sit in their mind
9. Loop — Which feedback loop type this serves
(runaway = extractive | corrective = controlling | virtuous = compounding)

Rules:
- "One Thing" must pass: "This helps me ___" not "This has ___"
- "Proof" must be verifiable, not aspirational
- "Frame" must name what you're compared to and why you're different

Output format: JSON with nine fields. Each value is one sentence.

Quality check: Read the "One Thing" aloud. If it sounds like a feature list item, rewrite as the outcome the user gets.

Prompt 2: Story

Role: Narrative architect (Tight Five mapping + emotional beat design)

Input: Creative brief JSON from Prompt 1.

Given this creative brief:

[paste JSON from Prompt 1]

Map it to a page narrative using the Tight Five framework:

| Tight Five | Page Section | Section Job |
|------------|--------------|-------------|
| Purpose | Hero + Enemy | Why this exists, what you fight |
| Principles | Philosophy | What truths guide the approach |
| Platform | How It Works | The mechanism diagram |
| Perspective| Social Proof | What you see others miss |
| Performance| CTA | The metric commitment |

For each section, generate:
1. Section headline (6 words max)
2. Subhead (15-25 words — explains the how)
3. Body copy (3-5 sentences — earns its place or gets cut)
4. Emotional beat (confidence / tension / relief / trust / momentum)
5. CTA text if applicable (action verb + benefit)

Rules:
- Hero headline must answer: What is it? Who is it for? What do I do?
- Enemy section names a specific enemy, not a vague problem
- How It Works must be diagrammable — if you can't draw it, rewrite it
- CTA copy uses action verb + benefit ("Get the framework" not "Submit")
- No section exceeds 100 words of body copy

Output format: Markdown table with five sections, each with the five sub-fields.

Quality check: Read just the headlines in order. Does a stranger understand the offer in 10 seconds?

Prompt 3: Build

Role: Component architect (React + design system discipline)

Input: Narrative arc from Prompt 2 + design preset choice.

Given this narrative arc:

[paste table from Prompt 2]

Design preset: [cinematic | dense | minimal]

Generate a page specification:

For each section, define:
1. Component name (PascalCase)
2. Layout pattern (hero-centered | split-screen | full-bleed | card-grid)
3. Required elements (headline, subhead, body, image/diagram, CTA button)
4. Design tokens to apply:
- Background: bg-ink | bg-chalk | bg-accent
- Text: text-chalk | text-ink | text-muted
- Spacing: section padding, element gaps
5. Responsive behavior at 375px mobile viewport

Rules:
- One CTA per viewport. CTA color appears on zero non-clickable elements.
- Every image must have alt text that describes the content, not the file name
- Touch targets: 44x44px minimum
- Hero must answer what/who/action without scrolling at 768px
- Diagram sections must degrade gracefully to a numbered list on mobile

Output format: Structured specification per component — not code.

Quality check: For each section, answer: "If I remove this section, does the page still convert?" If yes, the section is not earning its place.

Prompt 4: Wire

Role: Analytics engineer (PostHog + Plausible instrumentation)

Input: Page specification from Prompt 3.

Given this page specification:

[paste specification from Prompt 3]

Generate an instrumentation plan with three metric types:

1. ATTENTION metrics (do they stay?)
- Scroll depth events at 25%, 50%, 75%, 100%
- Time on page
- Section visibility (which sections enter viewport)

2. BELIEF metrics (do they engage?)
- Clicks on proof elements (testimonials, diagrams, expandable sections)
- Hover/interaction on mechanism diagrams
- Secondary page navigation (clicked through to learn more)

3. CONVERSION metrics (do they act?)
- Primary CTA clicks
- Form starts vs form completions
- Thank-you page views

For each event, specify:
- Event name (snake_case)
- Properties object (key: value pairs)
- Trigger condition (when does this fire?)

Also generate:
- Three Plausible goals with CSS selectors or page paths
- One primary hypothesis about which section drives conversion
- The minimum visitor count needed for a directional signal (~150)

Output format: PostHog event schema as a table + Plausible goals as a list + hypothesis as one sentence.

Quality check: Open the page. Click every interactive element. Does each click map to exactly one event? Missing events = blind spots.

Prompt 5: Learn

Role: Growth analyst (PDCA cycle design)

Input: Instrumentation plan from Prompt 4 + first week of analytics data.

Given this instrumentation plan and data:

[paste schema from Prompt 4]
[paste first week's data summary — or describe expected patterns]

Generate a PDCA learning cycle:

PLAN:
- State one hypothesis about what's working or broken
- Name the specific metric that tests this hypothesis
- Define what "signal" looks like (threshold or direction)

DO:
- Propose one atomic change (single variable)
- Specify what stays constant (control)
- Estimate required sample size for directional signal

CHECK:
- Define the observation period (days)
- State what "keep" vs "revert" looks like
- Identify confounding variables to watch

ACT:
- If keep: what's the next hypothesis?
- If revert: what alternative hypothesis does the failure suggest?
- Update the narrative arc if the data contradicts Stage 2 assumptions

Rules:
- One change per cycle. Not three.
- "Atomic" means you can attribute any signal change to exactly this change
- If sample size is under 150, extend the observation period — don't add variables
- Never run two tests simultaneously on the same page

Output format: Four-section PDCA document with specific numbers, not vague directional language.

Quality check: Can you explain the hypothesis to a non-technical person in one sentence? If not, the hypothesis is too complex for an atomic test.

Automation Mapping

| Prompt | Automation Level | What It Adds |
|--------|------------------|--------------|
| 1 | AI-assisted | Full positioning pipeline, not just the brief |
| 2 | Manual | Narrative mapping requires earned conviction |
| 3 | AI-assisted | Demand-side questions + design thresholds |
| 4 | Manual | Analytics wiring is engineering work |
| 5 | Manual | PDCA requires human judgment on data |

Solo Founder Path

  1. Run Prompt 1 with an AI assistant. Save the JSON.
  2. Feed JSON into Prompt 2. Save the narrative table.
  3. Feed narrative + preset choice into Prompt 3. Hand specification to a code generator.
  4. After the page exists, run Prompt 4. Wire events before launch.
  5. After Week 1, run Prompt 5. Begin the PDCA cycle.

Time to first ship: 4-8 hours for a simple conversion page.

Artifacts

| Stage | Artifact | Demand Served |
|-------|----------|---------------|
| 1 | Creative brief (9 fields) | Trust (positioning clarity) |
| 2 | Narrative arc + section copy | Trust + Conversion |
| 3 | Instrumented conversion page | Conversion |
| 4 | PostHog event schema + Plausible goals | Measurement |
| 5 | PDCA log + hypothesis | Improvement |

Outcomes

| Metric | Target | Measurement |
|--------|--------|-------------|
| Five-second test | 4/5 correct | 5 people, pre-launch |
| Scroll depth 75% | >40% of visitors | Plausible, weekly |
| CTA click rate | >2% | PostHog, weekly |
| Time to first ship | 4-8 hours | Clock |

Failure Modes

| Failure | Signal | Fix |
|---------|--------|-----|
| Skip Stage 1 | Page ships without positioning clarity | Go back. Run the positioning pipeline |
| No instrumentation | No data after Week 1 | Wire events before launch, never after |
| Multiple changes per week | Can't attribute signal to cause | One atomic change per PDCA cycle |
| Narrative drift | Five-second test fails monthly | Re-run Stage 2 from original creative brief |
| Vanity metrics | High traffic, zero conversion | Measure belief + conversion, not just attention |

Context

Questions

How do you know when a page has earned the right to exist — and when it should be killed?

  • At what visitor count does weekly PDCA produce reliable signals, and what do you do before that threshold?
  • Which of the three metrics (attention, belief, conversion) is the leading indicator for your specific audience?
  • If Prompt 1's creative brief is wrong, how far down the chain does the error propagate before it becomes visible?
  • When does a prompt chain become rigid enough to scale but flexible enough to handle a genuinely novel product?