
# Content

Is the content pipeline producing work that lands?

The content pipeline defines six stages: ICP, Capture, Transform, Qualify, Question, Ship. Each stage has quality gates. This page tracks whether those gates produce measurable outcomes. Without this page, the pipeline is just a process. With it, Deming's plan-do-study-act loop runs.

## The Loop

```text
PIPELINE (define quality gates)   -->  what does "good" look like?
        |
BUILD (execute the 6 stages)      -->  write, score, qualify, ship
        |
MEASURE (this page)               -->  did it land?
        |
DIAGNOSE (compare vs. expected)   -->  why or why not?
        |
BETTER PIPELINE                   -->  update gates, ICP, voice mix
```

## Pipeline Health

Leading indicators that predict content quality before publishing.

| Metric | Threshold | Method | Cadence |
| --- | --- | --- | --- |
| Pipeline completion | 100% of shipped articles through all 6 stages | Check `.agent-work/content-development/` | Per article |
| Transform depth | 4 voice drafts per article | Count files in `02-transform-value/` | Per article |
| Qualification score | STR + SUB + STY each 4+ out of 5 | Scoring matrix in `05-qualified-draft.md` | Per article |
| Ogilvy test pass | 5/5 gates passed | Checklist in QUALIFY stage | Per article |
| Sutherland test pass | 5/5 gates passed | Checklist in QUALIFY stage | Per article |
| Cross-link density | 5+ links into `/docs/` per article | Count inline links (not just the Context section) | Per article |
| Distribution asset | LinkedIn post drafted before ship | Check `05-ship-package/` | Per article |
| MDX validation | Zero errors | `mdx-validation` skill | Pre-push |
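
Several of these gates can be checked mechanically at ship time. A minimal sketch, assuming stage directories hold one subdirectory per article (the docs name `02-transform-value/`, `05-qualified-draft.md`, and `05-ship-package/`; the layout and the LinkedIn filename pattern are assumptions), with qualification scores passed in by the human reviewer:

```python
from pathlib import Path

ROOT = Path(".agent-work/content-development")

def check_gates(article: str, scores: dict[str, int]) -> list[str]:
    """Return the failed Pipeline Health gates for one article (empty = clean)."""
    failures = []

    # Transform depth: 4 voice drafts per article in 02-transform-value/
    drafts = list((ROOT / "02-transform-value" / article).glob("*.md"))
    if len(drafts) < 4:
        failures.append(f"transform depth: {len(drafts)}/4 voice drafts")

    # Qualification score: STR + SUB + STY each 4+ out of 5
    for axis in ("STR", "SUB", "STY"):
        if scores.get(axis, 0) < 4:
            failures.append(f"qualification: {axis}={scores.get(axis, 0)}, needs 4+")

    # Distribution asset: LinkedIn post drafted before ship
    ship_dir = ROOT / "05-ship-package" / article
    if not any(ship_dir.glob("*linkedin*")):  # filename pattern is an assumption
        failures.append("distribution: no LinkedIn draft in 05-ship-package/")

    return failures

# Example: check_gates("some-article", {"STR": 5, "SUB": 4, "STY": 3})
# -> ["qualification: STY=3, needs 4+"]
```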

## Stage Metrics

Each stage of published content maps to a measurable reader behavior.

| Stage | What it measures | Metric | Diagnosis if below target |
| --- | --- | --- | --- |
| Headline | Did the headline select the right audience? | CTR from distribution channel | Headline didn't intrigue or filter |
| Opening | Did the first 50 words hook? | Scroll depth past the fold (50%+ of readers) | Opening didn't establish relevance |
| Body | Did the content deliver on the promise? | Time on page (3+ minutes for long-form) | Content didn't match headline promise |
| Close | Was the next action obvious? | CTA click-through rate (5%+) | Action unclear or friction too high |
| Conversion | Did the reader do the job? | Goal completion (links followed into `/docs/`) | Relevance, trust, or friction issue |
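
Because readers drop out in order, the table doubles as a funnel: walk the stages top to bottom, and the first below-target metric is the bottleneck to fix. A sketch of that walk; the targets mirror the table, except the headline CTR baseline (2%), which the table leaves channel-specific and is assumed here:

```python
# Stage -> (metric key, target, diagnosis). Targets mirror the table above;
# the 2% headline CTR is an assumed baseline, not a documented threshold.
FUNNEL = [
    ("Headline",   "ctr",          0.02, "Headline didn't intrigue or filter"),
    ("Opening",    "scroll_50",    0.50, "Opening didn't establish relevance"),
    ("Body",       "time_on_page", 180,  "Content didn't match headline promise"),
    ("Close",      "cta_ctr",      0.05, "Action unclear or friction too high"),
    ("Conversion", "docs_visits",  1,    "Relevance, trust, or friction issue"),
]

def diagnose(metrics: dict[str, float]) -> str:
    """Return the first below-target stage; fixing the earliest failure
    usually unblocks the stages after it."""
    for stage, key, target, diagnosis in FUNNEL:
        value = metrics.get(key, 0)
        if value < target:
            return f"{stage} below target ({value} < {target}): {diagnosis}"
    return "All stages at or above target"

# Example:
# diagnose({"ctr": 0.04, "scroll_50": 0.31, "time_on_page": 240,
#           "cta_ctr": 0.06, "docs_visits": 12})
# -> "Opening below target (0.31 < 0.5): Opening didn't establish relevance"
```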

## Distribution

Track each channel separately. Same content performs differently by channel.

| Channel | Primary metric | Secondary metric | Cadence |
| --- | --- | --- | --- |
| LinkedIn | Engagement rate (3%+ of impressions) | Profile visits from post | Per post |
| Direct (mm.dreamineering.com) | Unique visitors per article | Time on page | Weekly |
| Referral | Click-through from partner sites | Bounce rate under 50% | Monthly |
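
Engagement rate is the one computed metric in this table, so it is worth pinning down the formula. Counting reactions, comments, and shares as engagements is an assumption; the pipeline docs don't define the numerator:

```python
def engagement_rate(reactions: int, comments: int, shares: int,
                    impressions: int) -> float:
    """Engagements divided by impressions; the 3% threshold applies per post."""
    if impressions == 0:
        return 0.0
    return (reactions + comments + shares) / impressions

# Example: engagement_rate(41, 9, 6, 1400) -> 0.04, clearing the 3% bar
```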

## Content Portfolio

Track the full portfolio, not just individual articles.

| Metric | Threshold | Method |
| --- | --- | --- |
| Articles shipped | Trend: at least 1 per month | Count `/meta/` files |
| OS integration | 100% of articles have 5P table + series nav | Spot check |
| Cross-referencing | Every article links to 3+ other articles | Link audit |
| Evergreen ratio | 80%+ of traffic to articles older than 30 days | Analytics time filter |
| Top performer | Identify which article drives most `/docs/` traffic | Analytics funnel |
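
The evergreen ratio is easy to compute two different ways, so here is one concrete reading as a sketch. It assumes analytics rows carry each article's publish date and pageviews over the report window (both the row shape and the window are assumptions):

```python
from datetime import date, timedelta

def evergreen_ratio(rows: list[tuple[date, int]], today: date) -> float:
    """Share of pageviews going to articles published more than 30 days ago.

    rows: (publish_date, pageviews) per article over the report window.
    """
    total = sum(views for _, views in rows)
    if total == 0:
        return 0.0
    evergreen = sum(views for published, views in rows
                    if today - published > timedelta(days=30))
    return evergreen / total

# Example: evergreen_ratio([(date(2025, 1, 5), 900), (date(2025, 3, 1), 100)],
#                          today=date(2025, 3, 10)) -> 0.9, above the 80% bar
```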

## Human/AI Balance

The pipeline tracks human vs. AI contribution. Monitor the ratio to ensure human judgment stays on high-leverage decisions.

| Activity | Current AI % | Target range | Red flag |
| --- | --- | --- | --- |
| ICP investigation | 40% | 30-50% | AI defining psychology without human validation |
| Voice transform | 70% | 60-80% | Human not selecting voices or reviewing drafts |
| Scoring and qualify | 50% | 40-60% | AI scoring without human calibration |
| Ship decision | 35% | 20-40% | AI deciding what ships without human gate |
| Distribution | 65% | 50-70% | Posts going out without human tone check |

The rule: AI percentage can rise on execution tasks (formatting, validation, drafting). Human percentage must stay high on judgment tasks (what's worth saying, voice calibration, ship decisions).
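
Encoding the target ranges makes the periodic audit mechanical. The ranges below come straight from the table; the function around them is a sketch:

```python
# Activity -> target AI percentage range, from the table above.
TARGETS = {
    "icp_investigation":   (30, 50),
    "voice_transform":     (60, 80),
    "scoring_and_qualify": (40, 60),
    "ship_decision":       (20, 40),
    "distribution":        (50, 70),
}

def audit_balance(current: dict[str, int]) -> list[str]:
    """Flag activities whose AI percentage sits outside its target range."""
    flags = []
    for activity, (low, high) in TARGETS.items():
        pct = current.get(activity)
        if pct is None:
            flags.append(f"{activity}: not measured")
        elif not low <= pct <= high:
            flags.append(f"{activity}: {pct}% outside {low}-{high}%")
    return flags

# Example: audit_balance({"ship_decision": 55}) flags
# "ship_decision: 55% outside 20-40%" plus "not measured" for the other four.
```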

## Monitoring Cadence

| Frequency | What to check |
| --- | --- |
| Per article | Pipeline completion, qualification scores, distribution assets |
| Weekly | Page views, time on page, LinkedIn engagement |
| Monthly | Conversion funnels, cross-link click-through, content velocity |
| Quarterly | Full portfolio review, top/bottom performers, ICP validation |
| Annually | Pipeline evolution, human/AI balance trends, voice effectiveness |

## Feedback Path

When metrics reveal patterns, update the pipeline:

| Pattern | Action |
| --- | --- |
| Headlines consistently low CTR | Strengthen Ogilvy test in QUALIFY stage |
| High bounce on body content | TRANSFORM stage producing style over substance |
| Low cross-link click-through | Links not inline enough, buried in Context section |
| One voice consistently scores highest | Weight that voice more in merge, or investigate why others underperform |
| Distribution underperforming | Revisit ICP -- may be targeting wrong audience |
| AI percentage creeping up on judgment tasks | Recalibrate human/AI split, add manual gates |

## Learning Template

After every shipped article, record:

```text
Article:  [title]
Expected: [hypothesis -- what we thought would happen]
Actual:   [metrics at 7 days and 30 days]
Gap:      [difference and diagnosis]
Worked:   [specific element that drove specific result]
Didn't:   [specific element that underperformed and why]
Change:   [what we'll do differently next time]
Insight:  [one reusable sentence]
```

This template lives in `.agent-work/content-development/05-ship-package/{article}/review.md`. The insights compound into better ICPs and stronger qualify gates.
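
A small helper can stamp the template into place at ship time so the review never gets skipped. The path is the one given above; the helper itself is a sketch:

```python
from pathlib import Path

TEMPLATE = """\
Article:  {title}
Expected: [hypothesis -- what we thought would happen]
Actual:   [metrics at 7 days and 30 days]
Gap:      [difference and diagnosis]
Worked:   [specific element that drove specific result]
Didn't:   [specific element that underperformed and why]
Change:   [what we'll do differently next time]
Insight:  [one reusable sentence]
"""

def create_review_stub(article: str, title: str) -> Path:
    """Write an empty learning template into the article's ship package."""
    path = Path(".agent-work/content-development/05-ship-package") / article / "review.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    if not path.exists():  # never clobber a review that's already filled in
        path.write_text(TEMPLATE.format(title=title))
    return path
```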

## Context