AI Onboarding Journey
AI doesn’t transform a business. Testing AI against real workflows does. Five stages. Seven documents. One working automation.
Each stage closes one loop, collapses one false belief, and produces a deliverable that earns the next conversation.
Stage 1
“AI is for tech companies, not us”
What’s the one task across your business that everyone hates and nobody has fixed?
Most businesses test AI on the wrong task — something simple, low-stakes, and already half-solved. The task that matters is the one that’s been quietly accepted as a cost of doing business: the proposal that takes six hours, the follow-up that falls through the cracks, the report nobody reads but everyone writes.
Executive Brief: free. Delivered in Discovery. CEO-facing.
Stage 2
“Our workflows are too specific / complex for AI”
Can you describe the workflow end-to-end in five minutes — inputs, steps, outputs, who touches it?
Every workflow has inputs, steps, outputs, and judgment calls. The judgment calls stay human. Everything else is pattern. Most teams can’t see the structure in their own operations because they’re inside it. Classification makes it visible.
“Clean data first” is an artifact constraint masquerading as a real one. RPA required it. LLMs don’t. The actual barrier is scattered data: business logic spread across five platforms, senior staff spending 45 minutes assembling context before every call. One onboarding rebuilt from 4.5 hours to 5 minutes — not by cleaning data, but by mapping where it moved.
Classify your workflows: Real, Artifact, or Hybrid?
Artifact constraints are the highest-leverage AI targets — they feel fixed but can move.
Real: cannot change — law, regulation, physical limit, contracted deadline.
Artifact: feels real but is policy, habit, or belief — chosen once and forgotten.
Hybrid: partly real, partly artifact — the real part stays, the artifact part can move.
First Principles Playbook and Constraint Map: free worksheets. Fill them in before the meeting.
Stage 3
“We tried it — results were too generic”
What would you tell a new contractor to get good work on day one?
Generic output is a symptom of a generic brief — not a limitation of AI. Brief it the same way you’d brief a new contractor: context, constraints, examples of good output. The comparison tells you everything.
What does the week look like right now?
Map it. Where does time go that shouldn't?
Follow one piece of work from start to finish — how many platforms does it touch before it's done?
Count the hops. Each hand-off is a flow problem, not a data quality problem.
What does your best week look like?
Not the fantasy — the realistic best. What made it work?
Which workflow, if it ran automatically, would free the most time or close the most revenue?
Name one.
What would have to be true for an AI system to do that job — and what would stop it?
Obstacles first.
ROI & Capital Memo
The Discovery session produces a 1–2 page decision instrument: the selected workflow, a time-cost baseline, an automation estimate, and a break-even timeline. Output: Go, Conditional, or No-Go. Writing the condition down is what makes the decision reviewable — not a gut call.
ROI & Capital Memo: co-produced in the Discovery session with your numbers.
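The memo's arithmetic is simple enough to sketch. A minimal example of the break-even calculation — every figure and name below is an illustrative assumption, not client data or a real engagement:

```python
# Hypothetical break-even sketch for an ROI & Capital Memo.
# All figures are illustrative assumptions, not client data.

def breakeven_weeks(hours_per_week: float,
                    hourly_cost: float,
                    automated_fraction: float,
                    build_cost: float) -> float:
    """Weeks until the automation's build cost is repaid
    by the hours it removes from the weekly baseline."""
    weekly_saving = hours_per_week * hourly_cost * automated_fraction
    if weekly_saving <= 0:
        raise ValueError("automation saves nothing: No-Go")
    return build_cost / weekly_saving

# Example: a 6-hour weekly proposal workflow at $80/hour,
# estimated 70% automatable, $4,000 to build.
weeks = breakeven_weeks(6, 80, 0.7, 4000)
print(f"Break-even in {weeks:.1f} weeks")  # prints: Break-even in 11.9 weeks
```

A Go / Conditional / No-Go call can then hang on a stated threshold (say, break-even inside one quarter), which is what keeps the memo reviewable rather than a matter of feel.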
Stage 4
“AI adds overhead we don't have time for”
Which part of your week repeats with the same inputs every time?
The overhead disappears once the context is built into the system: the second run is faster, with no new thinking required. That's the proof — not the feeling.
Systems Blueprint
— Who owns the AI setpoint for each workflow
— How quality is measured, and by whom
— What triggers a human review
— What the rollback procedure is
Systems Blueprint and Change Guide: delivered in the AI Sprint.
Stage 5
“AI will displace our people”
What decisions in this business only your leadership team can make?
The person who defines what “good” looks like is irreplaceable. AI runs the loop. The business owns the setpoint. That is the transformation: from executing work to setting the standard that execution runs toward.
The Moat Question
Who owns the taste model? When AI handles the doing, the irreplaceable asset is knowing what “done well” means in this specific domain — in this market, with these customers, at this standard. That judgment — accumulated through real decisions — is the moat.
Review your journey
By 2030, the businesses that survive are the ones that learned to direct intelligence — not the ones that bought the most software.
The Model
Give the real thing away. Keep nothing important behind a gate. Let the quality of the work be its own proof.
Put the work in the window
Mental models, meeting protocols, diagnostic templates on /docs. If a competitor can copy it, the moat was never the content.
Let them try before you ask
A business that has run its own Constraint Map already knows what good looks like. They don't need convincing — they need the rest.
Trade reciprocity, not pressure
The model works only when the free asset stands alone as useful even if they never engage further.
Trojan Horse
Open. Pack delivered:
AI Discovery (30–60 min). Pack delivered:
AI Sprint (2–4 weeks). Pack delivered:
AI Operations (ongoing). Pack delivered:
The Play
No chase, no pitch
If the ideas are useful, people exchange value, the ecosystem grows, and goodwill spreads.
Funnel → fishball
A fishball is a self-reinforcing community that circles value back. The five counterparties feed each other.
Meaning over money
When value can be measured and exchanged directly, the real question becomes: what does the business actually want to be doing, and who with?
The movement we need is on your shoulders
Questions
What is the single workflow that, if improved first, creates the most visible difference in the business week — and how do you find it before the client can name it?