
Case Study

The hero returns and teaches the next person.

The transformation is done. Results are real and measured. Now the most valuable thing you can do is write it down — not as a sales document, but as a map. Someone else, somewhere, is standing at the beginning of the same journey you just completed. This template is what you leave them.

A good case study doesn't summarize. It teaches. It names the constraint that wasn't obvious, shows the decision tree that made the system work, proves the result in numbers, and hands the reader a replication guide they can actually follow. If someone reads this and can see themselves three months ahead without needing to call anyone — it worked.


Anonymization Rules

Before writing, apply these once:

  • No client name, country, industry sub-niche, or personally identifying operational details in sections 1–9
  • Industry + business model only (e.g. "D2C e-commerce, 25 product lines, returns-heavy" — not the brand)
  • All KPI figures should be real — precision is the proof; vague ranges ("significant reduction") are not case study evidence
  • Client approves before any external use
  • Internal sections 10–11 are never shared externally

1. Persuasive Point

One sentence. The result, stated plainly. No adjectives. No hedging.

[e.g. "A D2C brand with 40,000 monthly support tickets reduced per-ticket cost by 87% and recovered $200K/month in revenue that was walking out the door via refunds."]


2. Situation

Before state. Give the reader enough to recognize themselves.

| Dimension | Before |
| --- | --- |
| Industry / business model | [e.g. D2C e-commerce, 25 product lines] |
| Volume | [e.g. 40,000 support tickets/month] |
| Team size (relevant function) | [e.g. 12-person offshore support team] |
| Cost structure | [e.g. $X per ticket fully loaded] |
| The constraint they named | [e.g. "We need to hire more people"] |
| The constraint that was actually binding | [e.g. "Refund-handling logic was undocumented — every agent improvised"] |

3. Constraint Map

For each workflow changed, classify it. This is the teaching section — it shows the reader how to look at their own business.

| Workflow | Classification | Judgment Layer | Artifact Layer |
| --- | --- | --- | --- |
| [e.g. Refund triage] | Hybrid | Agent decides: legitimate vs pattern abuse | Ticket categorization, template selection |
| [e.g. Escalation routing] | Artifact | | Rules-based: product type × complaint category |
| [e.g. Executive exceptions] | Real | Human judgment: relationship risk | |

The key distinction this case surfaced:

[One sentence: what looked like a Real constraint turned out to be Artifact — and why. e.g. "The team believed refund decisions required experienced agents. They did — but only 8% of tickets. The other 92% followed a decision tree that had never been written down."]
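The artifact/judgment split above can be sketched as a simple router: documented rules execute deterministically, and anything that genuinely needs a human escalates. This is a minimal illustration only — the field names, thresholds, and rules below are invented, not the client's actual logic.

```python
# Hypothetical sketch of an encoded refund decision tree.
# Field names and thresholds are invented for illustration;
# the real tree comes from the client's documented logic.

def route_refund(ticket: dict) -> str:
    """Return 'auto_approve', 'auto_deny', or 'escalate'."""
    # Judgment layer: patterns that need a human stay with a human.
    if ticket.get("serial_refunder") or ticket.get("vip_account"):
        return "escalate"
    # Artifact layer: documented rules, executed deterministically.
    if ticket["days_since_purchase"] <= 30 and ticket["item_condition"] == "unopened":
        return "auto_approve"
    if ticket["days_since_purchase"] > 90:
        return "auto_deny"
    # Anything the tree doesn't cover goes to a person, not a guess.
    return "escalate"

tickets = [
    {"days_since_purchase": 10, "item_condition": "unopened"},
    {"days_since_purchase": 120, "item_condition": "opened"},
    {"days_since_purchase": 45, "item_condition": "opened", "vip_account": True},
]
print([route_refund(t) for t in tickets])
# → ['auto_approve', 'auto_deny', 'escalate']
```

The point of the sketch is the shape, not the rules: the deterministic branches are the artifact layer, and the explicit `escalate` fallthrough is how the 8% of judgment cases stay human.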


4. Performance

Before and after. Every row needs both values. If you don't have the after number, don't include the row.

| Metric | Before | After | Delta |
| --- | --- | --- | --- |
| Cost per ticket | | | |
| Refund rate | | | |
| Revenue recovered / month | | | |
| Tickets handled without human | | | |
| Sales-save conversion rate | | | |
| [Your key metric] | | | |

Note on attribution: State how results were measured and over what period. "Month 3 vs month 1" is a claim. "Six-month average vs prior six-month average" is evidence.
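The measurement-window point can be made concrete with arithmetic. The figures below are invented to show the mechanics — averaging six months on each side smooths out the single-month noise that makes "month 3 vs month 1" unreliable.

```python
# Illustrative only: why an averaging window beats a single-month comparison.
# All numbers are invented, not taken from the case.

monthly_cost_per_ticket = [
    4.10, 3.90, 4.30, 4.00, 4.20, 3.80,  # six months before
    1.20, 0.90, 1.10, 1.00, 0.95, 1.05,  # six months after
]

before = sum(monthly_cost_per_ticket[:6]) / 6   # prior six-month average
after = sum(monthly_cost_per_ticket[6:]) / 6    # post six-month average
delta_pct = (before - after) / before * 100

print(f"before avg: ${before:.2f}, after avg: ${after:.2f}, delta: {delta_pct:.0f}%")
```

Note that a cherry-picked single-month comparison on the same data (best after-month vs worst before-month) would overstate the delta; the averages are the defensible claim.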


5. Flywheel

The first unlock freed something. That something freed something else. Map the chain.

[Primary unlock]
↓ freed
[What that freed]
↓ which enabled
[What that enabled]
↓ which produced
[Compounding outcome]

Example:

Refund handling encoded in AI
↓ freed
Senior agents from 80% of ticket volume
↓ which enabled
Senior agents to focus exclusively on sales-save calls
↓ which produced
70% lift in sales-save conversion rate

Why the flywheel matters: The first module rarely produces the headline number. The headline comes from what it unlocks downstream. If you only measure the primary unlock, you undercount by 2–3×.
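The undercount claim above is just arithmetic. With invented figures (not case data), measuring only the cost saved by the primary unlock, while ignoring the downstream revenue the freed capacity produced, understates total value:

```python
# Invented figures to illustrate the undercounting claim, not case data.
primary_unlock = 100_000   # monthly cost saved by automating ticket handling
downstream = 200_000       # monthly revenue recovered by redeployed senior agents

total_value = primary_unlock + downstream
undercount_factor = total_value / primary_unlock

print(f"Measuring only the primary unlock undercounts by {undercount_factor:.0f}x")
# → Measuring only the primary unlock undercounts by 3x
```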


6. Principles

3–5 first principles that made this work. Not tactics — structural reasons. Someone in a different industry should be able to test these against their own situation.

| Principle | What it means in practice |
| --- | --- |
| [e.g. The decision tree is the moat, not the AI model] | [e.g. The AI encoded their proven refund logic. A generic tool couldn't — the edge was in the deviation from the median.] |
| [e.g. Encode proven logic, not hoped-for logic] | [e.g. The team ran the decision tree manually for 30 days before encoding. The AI amplified what was already working.] |
| [e.g. The constraint that looks hardest is often not the binding one] | [e.g. The binding constraint was documentation absence, not ticket volume.] |

7. Inversion

What do most people get wrong about this type of transformation? The non-obvious insight — the thing that would have sent this sideways if missed.

[e.g. "Most businesses try to automate the workflow before documenting the decision tree. They encode improvisation at scale. The logic document is not prep — it is the product. The AI is the delivery mechanism."]

Additional inversions (optional):

  • [e.g. "The ROI is not in cost reduction. The ROI is in what you do with the capacity you freed."]
  • [e.g. "Off-the-shelf AI tools cannot encode proprietary deviation. That's where the competitive edge lives."]

8. Validated Value

Primary outcome and downstream impact. Numbers only — no adjectives.

Primary outcome:

[One sentence. Measurable. e.g. "Refund rate dropped 5 percentage points over 90 days."]

Downstream impact:

[One sentence. Measurable. e.g. "Senior agent time freed from ticket handling → redeployed to sales-save → $200K/month in recovered revenue within 90 days."]

Compounding effect:

[One sentence on what this unlocked for the next cycle. e.g. "Data from AI-handled tickets revealed three product lines generating 60% of refund volume — now flagged for product review."]


9. Replication Guide

What does a similar business need before starting this? Not a pitch — a map. If someone reads this and starts the wrong way, this section failed.

Minimum prerequisites:

| Requirement | Why it matters | What "good enough" looks like |
| --- | --- | --- |
| Documented decision tree | AI encodes proven logic — not assumed logic | A senior operator can describe every rule in the workflow from memory with named exceptions |
| Volume data (12 months) | ROI model needs a real baseline | Ticket counts, cost per unit, refund rate — even rough numbers beat nothing |
| Human validation period | Small-scale human execution before encoding | 30-day manual run with documented inputs, outputs, and KPIs |
| Authority to change the workflow | Transformation requires process ownership | Named person with mandate to eliminate steps, not just optimize them |

Sequencing advice:

[One paragraph: what to do first, and why. e.g. "Start with the constraint map. Run the tripling test on every workflow before writing a single line of logic. The business that jumps straight to automation without classifying constraints usually automates the wrong thing first — and the wrong first module wastes 3–4 months."]

Warning — what to avoid:

[One sentence on the failure mode to sidestep. e.g. "Don't encode a workflow your senior team has never actually executed consistently. Inconsistent logic amplified is expensive chaos."]


10. Applications (Internal — Not Shared)

How this case study applies to current work, prospects, or open constraints.

| Application | Relevance | Action |
| --- | --- | --- |

11. Open Questions (Internal — Not Shared)

3–5 questions this case raises for future investigation or research.



Questions

  • What does a good case study teach that a sales deck cannot?
  • If the decision tree is the moat, what does it mean to share it publicly in a case study?
  • At what point does a case study become more valuable than a proposal — and why?
  • What's the signal that a replication guide is good enough? Can someone follow it without calling you?