AI Review Template
Are you making the most of AI? How would you know?
Quarterly review to audit what AI is doing for your business, what it isn't, and what to change next quarter. Work through each section — the gaps are the strategy.
- Review date: [YYYY-MM-DD]
- Reviewer: [Name / Role]
- Period under review: [Q_ 20__]
0. Business Alignment
Before auditing tools, anchor to what matters. AI without business objectives is technology tourism.
| Question | Answer |
|---|---|
| What 2-3 business metrics should AI move this quarter? | [revenue, cost, speed, quality — be specific] |
| Which processes are in scope for the next 90 days? | [name them] |
| Who sponsors this review? Who blocks? Who operates? | [names and roles] |
| What constraints are non-negotiable? (data, regulations, brand) | [list] |
| What proves this was worth the time within one quarter? | [measurable outcome] |
If you cannot answer these five questions, the rest of this review will produce a list without a purpose.
1. Current State Audit
What AI touches today
Map every place AI currently operates in your business. Be exhaustive — include the intern using ChatGPT for email drafts.
| Function | Tool / Model | Task | Hours Saved / Week | Quality (1-10) | Owner |
|---|---|---|---|---|---|
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
What AI should touch but doesn't
| Function | Task Currently Done Manually | Hours Spent / Week | Why Not Automated Yet | Priority |
|---|---|---|---|---|
| [dept] | [task] | [hours] | [blocker] | High / Medium / Low |
| [dept] | [task] | [hours] | [blocker] | High / Medium / Low |
What AI touches but shouldn't
Not everything benefits from automation. Where has AI created more problems than it solved?
- [Task / tool that isn't working — why, and what to revert to]
- [Task where human judgment is non-negotiable]
2. Cost and ROI
Current AI spend
| Tool / Service | Monthly Cost | Annual Cost | Primary Use | Seats / Users |
|---|---|---|---|---|
| [tool] | $[X] | $[X] | [use] | [count] |
| [tool] | $[X] | $[X] | [use] | [count] |
| Total | $[X] | $[X] | | |
ROI assessment
| Investment | Value Created | Evidence | Conviction | Verdict |
|---|---|---|---|---|
| [tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | HIGH / MEDIUM / LOW / NONE | Keep / Scale / Cut |
| [tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | HIGH / MEDIUM / LOW / NONE | Keep / Scale / Cut |
Tag each conviction level: HIGH = measured data, MEDIUM = reasonable estimate, LOW = assumption, NONE = guess.
Net ROI this quarter: Positive / Negative / Unclear. Confidence: HIGH / MEDIUM / LOW / NONE — [why]
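If it helps to sanity-check the verdicts above, here is a minimal sketch of the arithmetic, assuming hypothetical tools, costs, and a loaded hourly rate. Only hours you can evidence should count toward value.

```python
# Illustrative sketch: tool names, costs, hours saved, and the hourly rate are hypothetical.
TOOLS = {
    "writing-assistant": {"monthly_cost": 90,  "hours_saved_per_week": 6},
    "support-triage":    {"monthly_cost": 250, "hours_saved_per_week": 4},
}
LOADED_HOURLY_RATE = 65   # assumed fully loaded cost of one hour of staff time
WEEKS_PER_QUARTER = 13

# Spend: monthly cost x 3 months. Value: evidenced hours saved x hourly rate.
quarterly_spend = sum(t["monthly_cost"] * 3 for t in TOOLS.values())
quarterly_value = sum(
    t["hours_saved_per_week"] * WEEKS_PER_QUARTER * LOADED_HOURLY_RATE
    for t in TOOLS.values()
)
net_roi = quarterly_value - quarterly_spend

print(f"Quarterly spend: ${quarterly_spend:,}")
print(f"Quarterly value: ${quarterly_value:,}")
print(f"Net ROI: ${net_roi:,} ({'Positive' if net_roi > 0 else 'Negative'})")
```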
3. Capability Gap Analysis
Skills matrix
| Capability | Current Level | Required Level | Gap | Conviction | Action |
|---|---|---|---|---|---|
| Prompt engineering | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [training, hire, outsource] |
| AI tool selection | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
| Data preparation | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
| Workflow automation | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
| AI governance | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
Tribal balance
AI initiatives get stuck when teams are unbalanced. Three tribes need to work together:
| Tribe | Who Fills This Role? | Your Team | Balance |
|---|---|---|---|
| Explorers — "What if we tried...?" | Who discovers options and watches the frontier? | [names] | Strong / Weak / Missing |
| Automators — "How do we operationalize?" | Who scales validated ideas and integrates systems? | [names] | Strong / Weak / Missing |
| Validators — "How do we ensure quality?" | Who sets standards, compliance, and trust? | [names] | Strong / Weak / Missing |
The failure pattern: Explorers generate pilots. Automators can't operationalize. Validators block everything. Pilot purgatory.
If one tribe is missing, that's your capability gap — not a tool problem.
Data readiness
- Do we have clean, structured data for our highest-priority AI use case?
- Is our data accessible (not locked in silos or legacy systems)?
- Do we have a data governance policy (who owns what, retention, privacy)?
- Are we collecting data we'll need in 6 months but don't use yet?
4. Risk and Governance
Current controls
- Usage policy — Written policy on acceptable AI use exists and is distributed
- Data privacy — No customer PII sent to AI tools without consent / anonymisation
- Output review — Human reviews AI-generated content before it reaches customers
- Vendor risk — AI vendor contracts reviewed for data usage, IP, and liability
- Bias checks — AI outputs tested for systematic errors or discrimination
- Incident log — Record of AI failures, near-misses, and customer complaints
Regulatory exposure
| Regulation / Standard | Applies? | Current Compliance | Action Required |
|---|---|---|---|
| Privacy Act / GDPR | Yes / No | Compliant / Gap | [action] |
| Industry-specific | Yes / No | Compliant / Gap | [action] |
| AI-specific (EU AI Act, etc.) | Yes / No | Compliant / Gap | [action] |
| Internal standards | Yes / No | Compliant / Gap | [action] |
5. Competitive Position
Where competitors use AI
| Competitor | Known AI Usage | Advantage It Gives Them | Our Response |
|---|---|---|---|
| [name] | [what they do with AI] | [impact] | [match / leapfrog / ignore] |
| [name] | [what they do with AI] | [impact] | [match / leapfrog / ignore] |
Displacement risk
Which revenue streams could an AI-native competitor undercut? Where could they win on speed? What would a new entrant with zero legacy build differently?
| Revenue Stream at Risk | How AI-Native Competitor Undercuts | Timeline | Our Response |
|---|---|---|---|
| [stream] | [mechanism] | [months] | [match / leapfrog / ignore] |
| [stream] | [mechanism] | [months] | [match / leapfrog / ignore] |
| [stream] | [mechanism] | [months] | [match / leapfrog / ignore] |
6. Use Case Scoring
You have gaps from sections 1-5. Score each candidate before committing resources. Most AI initiatives die not from bad technology but from picking the wrong problem.
| Use Case | Business Impact (1-5) | Feasibility (1-5) | Time to Value (1-5) | Adoption Likelihood (1-5) | Composite |
|---|---|---|---|---|---|
| [from "should touch" list] | [revenue/cost/risk impact] | [data + complexity] | [result in 4-12 weeks?] | [owner + team motivation] | [product] |
| [from "should touch" list] | [impact] | [feasibility] | [speed] | [adoption] | [product] |
| [from "should touch" list] | [impact] | [feasibility] | [speed] | [adoption] | [product] |
Scoring guide:
- Business Impact — 5: moves a top-line metric measurably. 1: nice to have, no metric attached
- Feasibility — 5: clean data exists, tools available, dependencies minimal. 1: data locked in legacy, custom build required
- Time to Value — 5: result in 4 weeks. 3: result in 12 weeks. 1: result in 6+ months
- Adoption Likelihood — 5: team is asking for it. 1: team will resist or ignore
Top 3 by composite score become your next quarter initiatives.
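The composite is the product of the four scores rather than their average, so one weak dimension (say, a feasibility of 1) drags the whole use case down instead of being hidden by strong scores elsewhere. A minimal sketch of the ranking, with hypothetical use cases and scores:

```python
# Illustrative sketch: the use cases and their scores are hypothetical.
use_cases = {
    "invoice-coding":     {"impact": 4, "feasibility": 5, "time_to_value": 4, "adoption": 3},
    "support-auto-draft": {"impact": 3, "feasibility": 4, "time_to_value": 5, "adoption": 5},
    "forecast-assistant": {"impact": 5, "feasibility": 2, "time_to_value": 2, "adoption": 3},
}

def composite(scores: dict) -> int:
    # Multiplying (not averaging) means any score of 1 caps the composite low.
    return (scores["impact"] * scores["feasibility"]
            * scores["time_to_value"] * scores["adoption"])

ranked = sorted(use_cases.items(), key=lambda kv: composite(kv[1]), reverse=True)
for name, scores in ranked[:3]:
    print(f"{name}: composite {composite(scores)}")
```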
7. Priorities
Top 3 AI initiatives
Rank by composite score from section 6. Each initiative needs an owner, a deadline, and a measurable outcome.
| # | Initiative | Owner | Deadline | Success Metric | Budget |
|---|---|---|---|---|---|
| 1 | [highest impact] | [name] | [date] | [measurable outcome] | $[X] |
| 2 | [second] | [name] | [date] | [measurable outcome] | $[X] |
| 3 | [third] | [name] | [date] | [measurable outcome] | $[X] |
What to stop
Priorities are about what you stop doing, not just what you start.
- [Tool to cancel / initiative to kill — why]
- [Process to revert to manual — why]
What to learn
| Topic | Who | Method | Conviction | By When |
|---|---|---|---|---|
| [skill gap from section 3] | [person/team] | [course, workshop, hire] | [H/M/L/N] | [date] |
| [emerging capability] | [person/team] | [method] | [H/M/L/N] | [date] |
8. Review Gate
Before closing this review, verify:
- Every AI tool in the business is listed in section 1 (including shadow IT)
- ROI verdict is backed by evidence, not assumption
- At least one "stop" item identified — if nothing to stop, look harder
- Next quarter initiatives have owners and deadlines, not just descriptions
- Governance controls checked against actual practice, not written policy
- This review is saved where the next reviewer can find and build on it
Overall AI maturity assessment:
| Level | Description | This Quarter |
|---|---|---|
| 1 — Ad hoc | Individual tools, no coordination | [ ] |
| 2 — Emerging | Some workflows automated, no strategy | [ ] |
| 3 — Defined | Strategy exists, initiatives tracked | [ ] |
| 4 — Managed | ROI measured, governance in place | [ ] |
| 5 — Optimised | AI embedded in operations, compounding | [ ] |
Target level by next review: [1-5]
Quarter-over-quarter
If this is not your first review, compare:
| Metric | Last Quarter | This Quarter | Trend |
|---|---|---|---|
| Maturity level | [1-5] | [1-5] | +/- / flat |
| Total AI spend | $[X] | $[X] | +/- / flat |
| Hours saved / week | [X] | [X] | +/- / flat |
| Governance controls | [X/6] | [X/6] | +/- / flat |
| "Should touch" items | [count] | [count] | shrinking? |
What changed since last quarter that you did not plan for?
9. What's Next
This review surfaced gaps. The gaps are the strategy.
Trust is built by consistency, transparency, and delivered promises. This template gave you all three — a consistent framework, transparent scoring, and a deliverable you can act on. That's the point. The value is in the completed review, not in what comes after.
Three paths forward:
- Self-serve — Use the priorities in section 7, revisit this template next quarter. The quarter-over-quarter comparison will show whether you're compounding or spinning
- Structured help — Bring this completed review to a strategy conversation. The filled template is the brief — anyone helping you should start here, not from scratch
- Full engagement — Hand this to a partner who builds the system with you. The diagnostic → pilot → managed service path exists for a reason: prove value small before committing large
The review you just completed is worth more than most strategy decks. It has your numbers, your tribal balance, your use case scores, your conviction levels. Proof earns trust. Trust earns loyalty. Loyalty compounds.
Context
- Decision Making — The decision process this template applies to AI investment
- Navigation — Value, belief, control — the three systems AI must serve
- Tech Stack Decisions — The engineering version: checklist for technology choices
- Infrastructure Economics — What to run where at each revenue stage
- Trust — The invisible currency: built slowly, worth everything, destroyed in seconds
- Marketing Principles — The berley principle: create conditions where fish want to be
- AI Business Consulting — The four-layer playbook this template implements
- Business Idea Checklist — Full venture validation template
- Process Quality Assurance — Deming's 14 points for continuous improvement
- AI Coding Tools — Engineering-specific AI integration
Questions
- Is your AI strategy driven by what's possible or by what's painful?
- Which items in your "should touch but doesn't" list have been there for more than one quarter — and what does that reveal about your tribal balance?
- If you cut your AI budget by 50%, which tools would you keep — and does that match your stated priorities?
- What data are you generating today that becomes a competitive moat in 12 months?
- Which tribe is missing from your team — and is that why your pilots aren't shipping?