# Content
Is the content pipeline producing work that lands?
The content pipeline defines six stages: ICP, Capture, Transform, Qualify, Question, Ship. Each stage has quality gates. This page tracks whether those gates produce measurable outcomes. Without this page, the pipeline is just a process. With it, Deming's loop (plan, do, check, act) runs.
## The Loop

```
PIPELINE (define quality gates)  -->  what does "good" look like?
    |
BUILD (execute the 6 stages)     -->  write, score, qualify, ship
    |
MEASURE (this page)              -->  did it land?
    |
DIAGNOSE (compare vs expected)   -->  why or why not?
    |
BETTER PIPELINE                  -->  update gates, ICP, voice mix
```
## Pipeline Health
Leading indicators that predict content quality before publishing.
| Metric | Threshold | Method | Cadence |
|---|---|---|---|
| Pipeline completion | 100% of shipped articles through all 6 stages | Check .agent-work/content-development/ | Per article |
| Transform depth | 4 voice drafts per article | Count files in 02-transform-value/ | Per article |
| Qualification score | STR + SUB + STY each 4+ out of 5 | Scoring matrix in 05-qualified-draft.md | Per article |
| Ogilvy test pass | 5/5 gates passed | Checklist in qualify stage | Per article |
| Sutherland test pass | 5/5 gates passed | Checklist in qualify stage | Per article |
| Cross-link density | 5+ links into /docs/ per article | Count inline links (not just Context section) | Per article |
| Distribution asset | LinkedIn post drafted before ship | Check 05-ship-package/ | Per article |
| MDX validation | Zero errors | mdx-validation skill | Pre-push |
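These gates are all checkable from the working directory. A minimal sketch, assuming per-article subfolders under the stage directories named in the table; the linkedin.md filename is also an assumption:

```ts
// gate-check.ts -- sketch of the per-article gates above. Stage paths come
// from the table; the per-article folder layout is assumed.
import { existsSync, readdirSync } from "node:fs";
import { join } from "node:path";

const ROOT = ".agent-work/content-development";

function gateCheck(slug: string): string[] {
  const failures: string[] = [];

  // Transform depth: 4 voice drafts per article.
  const transformDir = join(ROOT, "02-transform-value", slug);
  const drafts = existsSync(transformDir) ? readdirSync(transformDir) : [];
  if (drafts.length < 4) failures.push(`transform depth ${drafts.length}/4`);

  // Qualification: the scoring matrix file must exist (score parsing omitted).
  if (!existsSync(join(ROOT, slug, "05-qualified-draft.md")))
    failures.push("no 05-qualified-draft.md");

  // Distribution asset: a LinkedIn draft must exist before ship.
  if (!existsSync(join(ROOT, "05-ship-package", slug, "linkedin.md")))
    failures.push("no LinkedIn draft in 05-ship-package/");

  return failures;
}

const slug = process.argv[2] ?? "example-article";
const failures = gateCheck(slug);
console.log(failures.length === 0 ? "PASS" : `FAIL: ${failures.join("; ")}`);
```

Wired into the same pre-push step as MDX validation, this keeps an article from shipping with open gates.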
## Stage Metrics
Each stage of a published article maps to a measurable reader behavior.
| Stage | What it measures | Metric | Diagnosis if below target |
|---|---|---|---|
| Headline | Did the headline select the right audience? | CTR from distribution channel | Headline didn't intrigue or filter |
| Opening | Did the first 50 words hook? | Scroll depth past fold (50%+ of readers) | Opening didn't establish relevance |
| Body | Did the content deliver on the promise? | Time on page (3+ minutes for long-form) | Content didn't match headline promise |
| Close | Was the next action obvious? | CTA click-through rate (5%+) | Action unclear or friction too high |
| Conversion | Did the reader do the job? | Goal completion (links followed into /docs/) | Relevance, trust, or friction issue |
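These behaviors reduce to simple ratios over page events. A minimal sketch; the event shape is an assumption to be mapped onto whatever the analytics tool actually emits:

```ts
// reader-metrics.ts -- sketch of the per-stage reader metrics. The PageEvent
// shape is assumed, not taken from any particular analytics provider.
type PageEvent = {
  scrollDepth: number;   // 0..1, deepest point reached
  secondsOnPage: number;
  clickedCta: boolean;
};

function readerMetrics(events: PageEvent[]) {
  const n = events.length;
  if (n === 0) return null;
  return {
    // Opening: share of readers scrolling past the fold (target 50%+).
    openingHookRate: events.filter((e) => e.scrollDepth >= 0.5).length / n,
    // Body: share spending 3+ minutes on long-form content.
    bodyDeliveryRate: events.filter((e) => e.secondsOnPage >= 180).length / n,
    // Close: CTA click-through rate (target 5%+).
    closeCtr: events.filter((e) => e.clickedCta).length / n,
  };
}

// Example: two readers, one of whom bounced at the opening.
console.log(readerMetrics([
  { scrollDepth: 0.9, secondsOnPage: 240, clickedCta: true },
  { scrollDepth: 0.2, secondsOnPage: 20, clickedCta: false },
]));
```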
## Distribution
Track each channel separately. Same content performs differently by channel.
| Channel | Primary metric | Secondary metric | Cadence |
|---|---|---|---|
| LinkedIn | Engagement rate (3%+ of impressions) | Profile visits from post | Per post |
| Direct (mm.dreamineering.com) | Unique visitors per article | Time on page | Weekly |
| Referral | Click-through from partner sites | Bounce rate under 50% | Monthly |
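Per-channel reporting is one ratio against one threshold. A sketch using illustrative numbers; the record shape is an assumption:

```ts
// channel-report.ts -- sketch of the per-channel engagement check.
type ChannelStats = { channel: string; impressions: number; engagements: number };

function engagementRate(s: ChannelStats): number {
  return s.impressions === 0 ? 0 : s.engagements / s.impressions;
}

const posts: ChannelStats[] = [
  { channel: "LinkedIn", impressions: 4200, engagements: 160 }, // illustrative
];

for (const p of posts) {
  const rate = engagementRate(p);
  const verdict = rate >= 0.03 ? "on target" : "below 3% target";
  console.log(`${p.channel}: ${(rate * 100).toFixed(1)}% (${verdict})`);
}
```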
## Content Portfolio
Track the full portfolio, not just individual articles.
| Metric | Threshold | Method |
|---|---|---|
| Articles shipped | Trend: at least 1 per month | Count /meta/ files |
| OS integration | 100% of articles have 5P table + series nav | Spot check |
| Cross-referencing | Every article links to 3+ other articles | Link audit |
| Evergreen ratio | 80%+ of traffic to articles older than 30 days | Analytics time filter |
| Top performer | Identify which article drives most /docs/ traffic | Analytics funnel |
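The link audit is mechanical: count inline markdown links, skipping the navigational Context section so only inline links count. A sketch against the /docs/ cross-link gate, assuming articles are MDX files; the example path is hypothetical:

```ts
// link-audit.ts -- sketch of the link audit. Counts inline links into /docs/,
// ignoring everything from the Context heading onward.
import { readFileSync } from "node:fs";

function countInlineDocsLinks(mdx: string): number {
  // Drop the Context section -- those links are navigational, not inline.
  const body = mdx.split(/^#+\s*Context\s*$/m)[0];
  const links = body.match(/\]\(\/docs\/[^)]*\)/g) ?? [];
  return links.length;
}

const file = process.argv[2] ?? "content/example.mdx"; // hypothetical path
const count = countInlineDocsLinks(readFileSync(file, "utf8"));
const verdict = count >= 5 ? "gate met" : "below 5-link gate";
console.log(`${file}: ${count} inline /docs/ links (${verdict})`);
```

The same counting approach extends to the 3+ article-to-article links check by matching internal article paths instead.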
## Human/AI Balance
The pipeline tracks human vs AI contribution. Monitor the ratio to ensure human judgment stays on high-leverage decisions.
| Activity | Current AI % | Target range | Red flag |
|---|---|---|---|
| ICP investigation | 40% | 30-50% | AI defining psychology without human validation |
| Voice transform | 70% | 60-80% | Human not selecting voices or reviewing drafts |
| Scoring and qualify | 50% | 40-60% | AI scoring without human calibration |
| Ship decision | 35% | 20-40% | AI deciding what ships without human gate |
| Distribution | 65% | 50-70% | Posts going out without human tone check |
The rule: AI percentage can rise on execution tasks (formatting, validation, drafting). Human percentage must stay high on judgment tasks (what's worth saying, voice calibration, ship decisions).
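Drift is easy to catch if the split is logged per article. A minimal sketch over this table, using the current percentages as illustrative inputs:

```ts
// balance-check.ts -- sketch flagging activities whose AI share drifts
// above the target range in the table above.
type Activity = { name: string; aiPct: number; minPct: number; maxPct: number };

const activities: Activity[] = [
  { name: "ICP investigation",   aiPct: 40, minPct: 30, maxPct: 50 },
  { name: "Voice transform",     aiPct: 70, minPct: 60, maxPct: 80 },
  { name: "Scoring and qualify", aiPct: 50, minPct: 40, maxPct: 60 },
  { name: "Ship decision",       aiPct: 35, minPct: 20, maxPct: 40 },
  { name: "Distribution",        aiPct: 65, minPct: 50, maxPct: 70 },
];

for (const a of activities) {
  // Rising AI share on judgment tasks is the red flag; add a manual gate.
  if (a.aiPct > a.maxPct)
    console.warn(`${a.name}: AI at ${a.aiPct}% exceeds ${a.maxPct}% -- add a manual gate`);
}
```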
## Monitoring Cadence
| Frequency | What to check |
|---|---|
| Per article | Pipeline completion, qualification scores, distribution assets |
| Weekly | Page views, time on page, LinkedIn engagement |
| Monthly | Conversion funnels, cross-link click-through, content velocity |
| Quarterly | Full portfolio review, top/bottom performers, ICP validation |
| Annually | Pipeline evolution, human/AI balance trends, voice effectiveness |
## Feedback Path
When metrics reveal patterns, update the pipeline:
| Pattern | Action |
|---|---|
| Headlines consistently low CTR | Strengthen Ogilvy test in QUALIFY stage |
| High bounce on body content | Audit TRANSFORM stage -- likely producing style over substance |
| Low cross-link click-through | Move links inline -- they're buried in the Context section |
| One voice consistently scores highest | Weight that voice more in merge, or investigate why others underperform |
| Distribution underperforming | Revisit ICP -- may be targeting wrong audience |
| AI percentage creeping up on judgment tasks | Recalibrate human/AI split, add manual gates |
## Learning Template
After every shipped article, record:
```
Article: [title]
Expected: [hypothesis -- what we thought would happen]
Actual: [metrics at 7 days and 30 days]
Gap: [difference and diagnosis]
Worked: [specific element that drove specific result]
Didn't: [specific element that underperformed and why]
Change: [what we'll do differently next time]
Insight: [one reusable sentence]
```
This template lives in .agent-work/content-development/05-ship-package/{article}/review.md. The insights compound into better ICPs and stronger qualify gates.
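Scaffolding the review file keeps the habit cheap. A sketch, assuming the script runs from the repo root; nothing about it is prescribed by the pipeline itself:

```ts
// new-review.ts -- sketch that scaffolds the learning template into
// .agent-work/content-development/05-ship-package/{article}/review.md.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

const article = process.argv[2];
if (!article) throw new Error("usage: new-review.ts <article-slug>");

const template = [
  `Article: ${article}`,
  "Expected: [hypothesis -- what we thought would happen]",
  "Actual: [metrics at 7 days and 30 days]",
  "Gap: [difference and diagnosis]",
  "Worked: [specific element that drove specific result]",
  "Didn't: [specific element that underperformed and why]",
  "Change: [what we'll do differently next time]",
  "Insight: [one reusable sentence]",
].join("\n");

const dir = join(".agent-work/content-development/05-ship-package", article);
mkdirSync(dir, { recursive: true });
writeFileSync(join(dir, "review.md"), template + "\n");
console.log(`wrote ${join(dir, "review.md")}`);
```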
## Context
- META-POSTS Pipeline -- The 6-stage content process being measured
- Standards -- Why measurable thresholds matter
- Product Design Metrics -- Parallel PDCA loop for design quality
- Software Development Metrics -- Parallel metrics for code
- Process Optimisation -- PDCA loop mechanics