# AI Review Template

Are you making the most of AI? How would you know?

Quarterly review to audit what AI is doing for your business, what it isn't, and what to change next quarter. Work through each section — the gaps are the strategy.

- Review date: [YYYY-MM-DD]
- Reviewer: [Name / Role]
- Period under review: [Q_ 20__]
## 0. Business Alignment
Before auditing tools, anchor to what matters. AI without business objectives is technology tourism.
| Question | Answer |
|---|---|
| What 2-3 business metrics should AI move this quarter? | [revenue, cost, speed, quality — be specific] |
| Which processes are in scope for the next 90 days? | [name them] |
| Who sponsors this review? Who blocks? Who operates? | [names and roles] |
| What constraints are non-negotiable? (data, regulations, brand) | [list] |
| What proves this was worth the time within one quarter? | [measurable outcome] |
If you cannot answer these five questions, the rest of this review will produce a list without a purpose.
## 1. Current State Audit

### What AI touches today
Map every place AI currently operates in your business. Be exhaustive — include the intern using ChatGPT for email drafts.
| Function | Tool / Model | Task | Hours Saved / Week | Quality (1-10) | Owner |
|---|---|---|---|---|---|
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
### What AI should touch but doesn't
| Function | Task Currently Done Manually | Hours Spent / Week | Why Not Automated Yet | Priority |
|---|---|---|---|---|
| [dept] | [task] | [hours] | [blocker] | High / Medium / Low |
| [dept] | [task] | [hours] | [blocker] | High / Medium / Low |
### What AI touches but shouldn't
Not everything benefits from automation. Where has AI created more problems than it solved?
- [Task / tool that isn't working — why, and what to revert to]
- [Task where human judgment is non-negotiable]
## 2. Cost and ROI

### Current AI spend
| Tool / Service | Monthly Cost | Annual Cost | Primary Use | Seats / Users |
|---|---|---|---|---|
| [tool] | $[X] | $[X] | [use] | [count] |
| [tool] | $[X] | $[X] | [use] | [count] |
| Total | $[X] | $[X] | | |
### ROI assessment
| Investment | Value Created | Evidence | Conviction | Verdict |
|---|---|---|---|---|
| [tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | HIGH / MEDIUM / LOW / NONE | Keep / Scale / Cut |
| [tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | HIGH / MEDIUM / LOW / NONE | Keep / Scale / Cut |
Tag each conviction level: HIGH = measured data, MEDIUM = reasonable estimate, LOW = assumption, NONE = guess.
- Net ROI this quarter: Positive / Negative / Unclear
- Confidence: HIGH / MEDIUM / LOW / NONE — [why]
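The net-ROI verdict above is simple arithmetic: quarterly tool spend versus the quarterly value of hours saved. A minimal sketch of that calculation follows — the tool names, costs, hours, and the loaded hourly rate are all hypothetical placeholders, not recommendations; substitute your own figures from the spend and ROI tables.

```python
# Sketch of the quarterly net-ROI arithmetic behind the verdict above.
# All figures are hypothetical placeholders -- replace with your own data.

LOADED_HOURLY_RATE = 60   # assumed fully-loaded cost of one staff hour, USD
WEEKS_PER_QUARTER = 13

# (tool, monthly_cost_usd, hours_saved_per_week) -- example rows only
tools = [
    ("writing-assistant", 30, 4),
    ("support-triage-bot", 250, 10),
    ("meeting-summarizer", 20, 1),
]

# Cost side: three months of subscriptions per tool
quarterly_cost = sum(monthly * 3 for _, monthly, _ in tools)

# Value side: hours saved per week, over a quarter, at the loaded rate
quarterly_value = sum(
    hours * WEEKS_PER_QUARTER * LOADED_HOURLY_RATE for _, _, hours in tools
)

net_roi = quarterly_value - quarterly_cost
verdict = "Positive" if net_roi > 0 else "Negative"
print(f"Quarterly cost:  ${quarterly_cost:,}")
print(f"Quarterly value: ${quarterly_value:,}")
print(f"Net ROI:         ${net_roi:,} ({verdict})")
```

Note the hours-saved figures are usually the weakest input — tag the result with the same HIGH/MEDIUM/LOW/NONE conviction scale used in the ROI table.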
## 3. Capability Gap Analysis

### Skills matrix
| Capability | Current Level | Required Level | Gap | Conviction | Action |
|---|---|---|---|---|---|
| Prompt engineering | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [training, hire, outsource] |
| AI tool selection | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
| Data preparation | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
| Workflow automation | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
| AI governance | None / Basic / Intermediate / Advanced | [target] | [size] | [H/M/L/N] | [action] |
### Tribal balance
AI initiatives get stuck when teams are unbalanced. Three tribes need to work together:
| Tribe | Who Fills This Role? | Your Team | Balance |
|---|---|---|---|
| Explorers — "What if we tried...?" | Who discovers options and watches the frontier? | [names] | Strong / Weak / Missing |
| Automators — "How do we operationalize?" | Who scales validated ideas and integrates systems? | [names] | Strong / Weak / Missing |
| Validators — "How do we ensure quality?" | Who sets standards, compliance, and trust? | [names] | Strong / Weak / Missing |
The failure pattern: Explorers generate pilots. Automators can't operationalize. Validators block everything. Pilot purgatory.
If one tribe is missing, that's your capability gap — not a tool problem.
### Data readiness
- [ ] Do we have clean, structured data for our highest-priority AI use case?
- [ ] Is our data accessible (not locked in silos or legacy systems)?
- [ ] Do we have a data governance policy (who owns what, retention, privacy)?
- [ ] Are we collecting data we'll need in 6 months but don't use yet?