AI Strategy Review
Quarterly review to audit what AI is doing for your business, what it isn't, and what to change next quarter. Work through each section — the gaps are the strategy.
- Review date: [YYYY-MM-DD]
- Reviewer: [Name / Role]
- Period under review: [Q_ 20__]
1. Current State Audit
What AI touches today
Map every place AI currently operates in your business. Be exhaustive — include the intern using ChatGPT for email drafts.
| Function | Tool / Model | Task | Hours Saved / Week | Quality (1-10) | Owner |
|---|---|---|---|---|---|
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
| [dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it] |
What AI should touch but doesn't
| Function | Task Currently Done Manually | Hours Spent / Week | Why Not Automated Yet | Priority |
|---|---|---|---|---|
| [dept] | [task] | [hours] | [blocker] | High / Medium / Low |
| [dept] | [task] | [hours] | [blocker] | High / Medium / Low |
What AI touches but shouldn't
Not everything benefits from automation. Where has AI created more problems than it solved?
- [Task / tool that isn't working — why, and what to revert to]
- [Task where human judgment is non-negotiable]
2. Cost and ROI
Current AI spend
| Tool / Service | Monthly Cost | Annual Cost | Primary Use | Seats / Users |
|---|---|---|---|---|
| [tool] | $[X] | $[X] | [use] | [count] |
| [tool] | $[X] | $[X] | [use] | [count] |
| Total | $[X] | $[X] | | |
ROI assessment
| Investment | Value Created | Evidence | Verdict |
|---|---|---|---|
| [tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | Keep / Scale / Cut |
| [tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | Keep / Scale / Cut |
Net ROI this quarter: Positive / Negative / Unclear
Confidence: High / Medium / Low — [why]
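The ROI verdicts above can be sanity-checked with arithmetic rather than gut feel. A minimal sketch, assuming a loaded labour rate you substitute for your own; the helper name `net_monthly_roi` and the $60/hour figure are illustrative, not part of this template:

```python
def net_monthly_roi(hours_saved_per_week, monthly_cost, hourly_rate=60):
    """Monthly value of hours saved minus tool cost.

    hourly_rate defaults to an assumed $60/hour loaded labour cost —
    replace with your own figure. Converts weekly hours to monthly
    via 52 weeks / 12 months.
    """
    monthly_value = hours_saved_per_week * 52 / 12 * hourly_rate
    return monthly_value - monthly_cost


# Example with made-up figures from a hypothetical audit row:
# a drafting tool saving 6 hours/week at $240/month.
print(f"Net: ${net_monthly_roi(6, 240):,.0f}/month")
```

A tool can still deserve a "Cut" verdict with positive net ROI if the hours saved are low-value, so treat the number as evidence for the table, not the verdict itself.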
3. Capability Gap Analysis
Skills matrix
| Capability | Current Level | Required Level | Gap | Action |
|---|---|---|---|---|
| Prompt engineering | None / Basic / Intermediate / Advanced | [target] | [size] | [training, hire, outsource] |
| AI tool selection | None / Basic / Intermediate / Advanced | [target] | [size] | [action] |
| Data preparation | None / Basic / Intermediate / Advanced | [target] | [size] | [action] |
| Workflow automation | None / Basic / Intermediate / Advanced | [target] | [size] | [action] |
| AI governance | None / Basic / Intermediate / Advanced | [target] | [size] | [action] |
Data readiness
- Do we have clean, structured data for our highest-priority AI use case?
- Is our data accessible (not locked in silos or legacy systems)?
- Do we have a data governance policy (who owns what, retention, privacy)?
- Are we collecting data we'll need in 6 months but don't use yet?
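The first two readiness questions above can be answered with a crude automated pass before committing to an AI use case. A minimal sketch over records loaded as dicts (e.g. from a CSV export); the helper name `readiness_report` is hypothetical:

```python
from collections import Counter

def readiness_report(rows, key_field):
    """Crude data-readiness check: count missing values per column
    and list duplicated key values.

    rows: list of dicts (e.g. from csv.DictReader).
    key_field: the column expected to uniquely identify a record.
    """
    columns = {c for row in rows for c in row}
    # A value counts as missing if absent, empty, or otherwise falsy.
    missing = {c: sum(1 for r in rows if not r.get(c)) for c in columns}
    dupes = [k for k, n in Counter(r.get(key_field) for r in rows).items() if n > 1]
    return missing, dupes
```

High missing counts or duplicate keys in the output are concrete entries for the "Why Not Automated Yet" column in section 1.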
4. Risk and Governance
Current controls
- Usage policy — Written policy on acceptable AI use exists and is distributed
- Data privacy — No customer PII sent to AI tools without consent / anonymisation
- Output review — Human reviews AI-generated content before it reaches customers
- Vendor risk — AI vendor contracts reviewed for data usage, IP, and liability
- Bias checks — AI outputs tested for systematic errors or discrimination
- Incident log — Record of AI failures, near-misses, and customer complaints
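The data-privacy control above implies some redaction step before text leaves the business. A minimal sketch of what that could look like; the two patterns below (email, simple phone) are illustrative examples only, not complete PII coverage, and the function name `redact` is hypothetical:

```python
import re

# Illustrative patterns — a real deployment needs broader coverage
# (names, addresses, account numbers) and review against your policy.
PATTERNS = {
    "[EMAIL]": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "[PHONE]": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text):
    """Replace matched PII with placeholder tokens before the text
    is sent to an external AI tool."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text
```

A regex pass like this is a floor, not a control on its own — pair it with the human output review and vendor-contract checks listed above.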
Regulatory exposure
| Regulation / Standard | Applies? | Current Compliance | Action Required |
|---|---|---|---|
| Privacy Act / GDPR | Yes / No | Compliant / Gap | [action] |
| Industry-specific | Yes / No | Compliant / Gap | [action] |
| AI-specific (EU AI Act, etc.) | Yes / No | Compliant / Gap | [action] |
| Internal standards | Yes / No | Compliant / Gap | [action] |
5. Competitive Position
Where competitors use AI
| Competitor | Known AI Usage | Advantage It Gives Them | Our Response |
|---|---|---|---|
| [name] | [what they do with AI] | [impact] | [match / leapfrog / ignore] |
| [name] | [what they do with AI] | [impact] | [match / leapfrog / ignore] |
Displacement risk
- Which of our revenue streams could an AI-native competitor undercut?
- Which customer-facing processes are slow enough that a competitor could win on speed?
- What would a new entrant with zero legacy systems build differently?
6. Next Quarter Priorities
Top 3 AI initiatives
Rank by impact, not ease. Each initiative needs an owner, a deadline, and a measurable outcome.
| # | Initiative | Owner | Deadline | Success Metric | Budget |
|---|---|---|---|---|---|
| 1 | [highest impact] | [name] | [date] | [measurable outcome] | $[X] |
| 2 | [second] | [name] | [date] | [measurable outcome] | $[X] |
| 3 | [third] | [name] | [date] | [measurable outcome] | $[X] |
What to stop
Priorities are about what you stop doing, not just what you start.
- [Tool to cancel / initiative to kill — why]
- [Process to revert to manual — why]
What to learn
| Topic | Who | Method | By When |
|---|---|---|---|
| [skill gap from section 3] | [person/team] | [course, workshop, hire] | [date] |
| [emerging capability] | [person/team] | [method] | [date] |
7. Review Gate
Before closing this review, verify:
- Every AI tool in the business is listed in section 1 (including shadow IT)
- ROI verdict is backed by evidence, not assumption
- At least one "stop" item identified — if nothing to stop, look harder
- Next quarter initiatives have owners and deadlines, not just descriptions
- Governance controls checked against actual practice, not written policy
- This review is saved where the next reviewer can find and build on it
Overall AI maturity assessment:
| Level | Description | This Quarter |
|---|---|---|
| 1 — Ad hoc | Individual tools, no coordination | [ ] |
| 2 — Emerging | Some workflows automated, no strategy | [ ] |
| 3 — Defined | Strategy exists, initiatives tracked | [ ] |
| 4 — Managed | ROI measured, governance in place | [ ] |
| 5 — Optimised | AI embedded in operations, compounding | [ ] |
Target level by next review: [1-5]
Context
- AI Agents — Agent architecture and capabilities
- AI Coding Tools — Engineering-specific AI integration
- Business Idea Checklist — Full venture validation template
- Process Quality Assurance — Deming's 14 points for continuous improvement
Questions
- Is your AI strategy driven by what's possible or by what's painful?
- Which items in your "should touch but doesn't" list have been there for more than one quarter — and what does that reveal?
- If you cut your AI budget by 50%, which tools would you keep — and does that match your stated priorities?
- What data are you generating today that becomes a competitive moat in 12 months?