Analytics and Tracking
What is the minimum instrumentation that makes a feedback loop readable — and what does it cost to not have it?
Why Measure
A page that converts well and a page that looks like it converts well are identical until you measure. Analytics is the gauge that makes the loop real. Without it, every copy change is an opinion.
The instrumentation stack has three layers:
| Layer | Measures | Changes when |
|---|---|---|
| Pageview | Traffic volume and source | Baseline — rarely |
| Event | Named user actions (click, scroll, expand) | When you add a new conversion goal |
| Session replay | Where users stop, hesitate, or leave | When events show a drop-off but not the cause |
Event-First Architecture
Session-first analytics tools (built around page sessions) are poor instruments for conversion work. An event-first tool fires a named signal each time something meaningful happens — CTA click, scroll past a section, FAQ opened — and those signals compose into the conversion picture you need.
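What "event-first" means in practice can be sketched in a few lines. This is a minimal illustration, not any particular vendor's API; the tracker class and event names are hypothetical:

```python
from collections import Counter

class EventTracker:
    """Minimal event-first tracker: every meaningful action fires a named signal."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def fire(self, name: str) -> None:
        # Each signal is a named, countable fact -- no session reconstruction needed.
        self.counts[name] += 1

    def rate(self, event: str, base: str = "pageview") -> float:
        """Share of base events (usually pageviews) that also fired `event`."""
        base_count = self.counts[base]
        return self.counts[event] / base_count if base_count else 0.0

tracker = EventTracker()
for _ in range(200):
    tracker.fire("pageview")
for _ in range(14):
    tracker.fire("cta_click")

print(tracker.rate("cta_click"))  # 14 / 200 = 0.07
```

The point of the design: named signals compose into any ratio you later care about, without replaying sessions.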
The events to wire before measuring any conversion:
| Event | What it proves | Threshold to watch |
|---|---|---|
| Primary CTA click | Attention converted to intent | ≥ 5% of pageviews |
| Scroll to 50% | Framework section reached | ≥ 30% of pageviews |
| Return visit within 7 days | Compounding attention | ≥ 15% of unique visitors |
| FAQ expanded | Objection surfaced | ≥ 10% of pageviews |
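The pageview-based thresholds above can be checked mechanically. A sketch, with hypothetical event names and counts; the return-visit threshold is measured against unique visitors, not pageviews, so it is omitted here:

```python
# Thresholds from the table above, as fractions of pageviews.
THRESHOLDS = {
    "cta_click": 0.05,      # Primary CTA click: >= 5% of pageviews
    "scroll_50": 0.30,      # Scroll to 50%: >= 30% of pageviews
    "faq_expanded": 0.10,   # FAQ expanded: >= 10% of pageviews
}

def check_thresholds(counts: dict[str, int]) -> dict[str, bool]:
    """Return pass/fail per event relative to pageview volume."""
    pageviews = counts.get("pageview", 0)
    if pageviews == 0:
        return {event: False for event in THRESHOLDS}
    return {
        event: counts.get(event, 0) / pageviews >= floor
        for event, floor in THRESHOLDS.items()
    }

counts = {"pageview": 1000, "cta_click": 62, "scroll_50": 270, "faq_expanded": 115}
print(check_thresholds(counts))
# cta_click passes (6.2%), scroll_50 fails (27%), faq_expanded passes (11.5%)
```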
Do not add events speculatively. Each event is a question you are actively trying to answer. If you do not have a hypothesis for what the number should be, do not wire the event.
Conversion Benchmarks
Conversion rates vary significantly by page type and traffic source. Use these as orientation only — your baseline is the only number that matters for measuring improvement.
| Page type | Good band | Great band |
|---|---|---|
| Landing page | 2–5% | 5–10% |
| Signup or trial form | 20–30% | 30–50% |
| Pricing page | 5–10% | 10–20% |
| Blog to email capture | 1–3% | 3–5% |
A rate that sits flat within the Good band for more than 4 weeks is a kill signal. Stable mediocrity is not stability — it is drift without correction.
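The kill signal can be made mechanical rather than left to judgment. A sketch, assuming weekly conversion rates are already computed; the flatness tolerance of half a percentage point is an assumption, not a standard:

```python
def is_stable_mediocrity(weekly_rates: list[float], good_floor: float,
                         great_floor: float, weeks: int = 4,
                         tolerance: float = 0.005) -> bool:
    """True if the rate sat inside the Good band, essentially flat, for `weeks` weeks."""
    if len(weekly_rates) < weeks:
        return False
    recent = weekly_rates[-weeks:]
    in_good_band = all(good_floor <= r < great_floor for r in recent)
    flat = max(recent) - min(recent) <= tolerance
    return in_good_band and flat

# Landing page stuck at ~3% for five weeks: kill signal.
print(is_stable_mediocrity([0.031, 0.030, 0.032, 0.030, 0.031], 0.02, 0.05))  # True
```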
Technical Gauges
Conversion is bounded by technical performance. A page that loads in 4 seconds cannot convert at the same rate as one that loads in 1.5 seconds, regardless of copy quality.
Minimum thresholds before considering copy improvements:
| Metric | Threshold | Why |
|---|---|---|
| LCP (Largest Contentful Paint) | < 2.5s | Above this, bounce rate climbs sharply |
| CLS (Cumulative Layout Shift) | < 0.1 | Layout jumps destroy trust signals |
| INP (Interaction to Next Paint) | < 200ms | Slow response to clicks signals broken UI |
| Performance score | ≥ 90 | Lighthouse composite — below this, technical debt is the conversion problem |
| SEO score | ≥ 95 | Below this, organic traffic is throttled before measurement starts |
Run technical gauges against the deployed URL, not localhost; scores can differ by 20–40 points between the two.
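The gate above can be expressed as a simple check. The metric values would come from a Lighthouse run against the deployed URL; the dictionary shape and key names here are hypothetical:

```python
# Thresholds from the table above. "max" means the value must stay below the
# bound; "min" means it must reach it.
GAUGES = {
    "lcp_s": ("max", 2.5),     # Largest Contentful Paint, seconds
    "cls": ("max", 0.1),       # Cumulative Layout Shift, unitless
    "inp_ms": ("max", 200),    # Interaction to Next Paint, milliseconds
    "performance": ("min", 90),  # Lighthouse composite
    "seo": ("min", 95),
}

def failing_gauges(metrics: dict[str, float]) -> list[str]:
    """Names of gauges that block copy work until fixed."""
    failures = []
    for name, (direction, bound) in GAUGES.items():
        value = metrics.get(name)
        if value is None:
            continue
        if direction == "max" and value >= bound:
            failures.append(name)
        elif direction == "min" and value < bound:
            failures.append(name)
    return failures

print(failing_gauges({"lcp_s": 3.1, "cls": 0.04, "inp_ms": 180,
                      "performance": 84, "seo": 97}))
# ['lcp_s', 'performance']
```

Any non-empty result means the conversion problem is technical, not copy.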
Privacy and Consent
Instrumentation touches visitor data. Two rules that do not bend:
- Session replay must mask inputs. Keyboard content is never recorded.
- Consent gates pageview tracking in jurisdictions where GDPR or equivalent applies. Measure unblocked traffic separately; the consent rate itself is a signal.
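The consent gate can be sketched as a single branch. This assumes an aggregate, non-identifying visit counter is permissible in your jurisdiction — verify that with counsel, not this sketch; names are hypothetical:

```python
def track_pageview(consented: bool, counters: dict[str, int]) -> None:
    """Consent gates the tracked pool; the consent rate itself is still measured."""
    # Aggregate visit count only -- no identifiers, no payload.
    counters["all_visits"] = counters.get("all_visits", 0) + 1
    if consented:
        # Only consented traffic enters the conversion denominators.
        counters["tracked_pageviews"] = counters.get("tracked_pageviews", 0) + 1

counters: dict[str, int] = {}
for consented in [True, True, False, True, False]:
    track_pageview(consented, counters)

consent_rate = counters["tracked_pageviews"] / counters["all_visits"]
print(consent_rate)  # 3 / 5 = 0.6 -- a signal in its own right
```

A falling consent rate distorts every downstream metric, which is why it is measured rather than ignored.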
Context
- SEO — organic traffic is the pool analytics measures conversion against
- Landing Page — the page structure analytics instruments
- Performance — the outer loop that analytics feeds
Questions
- What is the minimum set of events that would make the conversion loop readable — and which one, if it moved 50%, would tell you everything you need to know?
- At what point does adding more events cost more in engineering time and cognitive noise than the signal produces?
- How do you separate copy drift from audience drift when conversion rates decline over 12 weeks?
- Which anti-pattern in analytics implementation is most commonly introduced by developers who know enough to be dangerous: over-instrumentation, under-instrumentation, or event naming inconsistency?