
AI Strategy Review

Quarterly review to audit what AI is doing for your business, what it isn't, and what to change next quarter. Work through each section — the gaps are the strategy.

Review date: [YYYY-MM-DD]
Reviewer: [Name / Role]
Period under review: [Q_ 20__]


1. Current State Audit

What AI touches today

Map every place AI currently operates in your business. Be exhaustive — include the intern using ChatGPT for email drafts.

Function | Tool / Model | Task | Hours Saved / Week | Quality (1-10) | Owner
[dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it]
[dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it]
[dept] | [tool] | [what it does] | [estimate] | [score] | [who manages it]

What AI should touch but doesn't

Function | Task Currently Done Manually | Hours Spent / Week | Why Not Automated Yet | Priority
[dept] | [task] | [hours] | [blocker] | High / Medium / Low
[dept] | [task] | [hours] | [blocker] | High / Medium / Low

What AI touches but shouldn't

Not everything benefits from automation. Where has AI created more problems than it solved?

  • [Task / tool that isn't working — why, and what to revert to]
  • [Task where human judgment is non-negotiable]

2. Cost and ROI

Current AI spend

Tool / Service | Monthly Cost | Annual Cost | Primary Use | Seats / Users
[tool] | $[X] | $[X] | [use] | [count]
[tool] | $[X] | $[X] | [use] | [count]
Total | $[X] | $[X]

ROI assessment

Investment | Value Created | Evidence | Verdict
[tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | Keep / Scale / Cut
[tool/initiative] | [hours saved, revenue gained, errors prevented] | [data source] | Keep / Scale / Cut

Net ROI this quarter: Positive / Negative / Unclear
Confidence: High / Medium / Low — [why]
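To sanity-check the verdict, put rough numbers to it. The sketch below is illustrative only, assuming hours saved at a blended hourly rate are the sole source of value; the tool names, rates, and hours are placeholders, not figures from this review.

    # Rough quarterly ROI check -- every figure here is an illustrative placeholder.
    BLENDED_HOURLY_RATE = 60.0   # assumed fully loaded cost of one staff hour
    WEEKS_PER_QUARTER = 13

    # (tool, monthly cost, estimated hours saved per week) -- hypothetical entries
    tools = [
        ("writing-assistant", 30.0, 4.0),
        ("support-triage-bot", 250.0, 10.0),
    ]

    for name, monthly_cost, hours_saved in tools:
        quarterly_cost = monthly_cost * 3
        quarterly_value = hours_saved * WEEKS_PER_QUARTER * BLENDED_HOURLY_RATE
        print(f"{name}: cost ${quarterly_cost:,.0f}, value ${quarterly_value:,.0f}, "
              f"net ${quarterly_value - quarterly_cost:,.0f}")

If the net is positive only because of generous hours-saved estimates, mark confidence Low and note why.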


3. Capability Gap Analysis

Skills matrix

Capability | Current Level | Required Level | Gap | Action
Prompt engineering | None / Basic / Intermediate / Advanced | [target] | [size] | [training, hire, outsource]
AI tool selection | None / Basic / Intermediate / Advanced | [target] | [size] | [action]
Data preparation | None / Basic / Intermediate / Advanced | [target] | [size] | [action]
Workflow automation | None / Basic / Intermediate / Advanced | [target] | [size] | [action]
AI governance | None / Basic / Intermediate / Advanced | [target] | [size] | [action]

Data readiness

  • Do we have clean, structured data for our highest-priority AI use case?
  • Is our data accessible (not locked in silos or legacy systems)?
  • Do we have a data governance policy (who owns what, retention, privacy)?
  • Are we collecting data we'll need in 6 months but don't use yet?

4. Risk and Governance

Current controls

  • Usage policy — Written policy on acceptable AI use exists and is distributed
  • Data privacy — No customer PII sent to AI tools without consent / anonymisation
  • Output review — Human reviews AI-generated content before it reaches customers
  • Vendor risk — AI vendor contracts reviewed for data usage, IP, and liability
  • Bias checks — AI outputs tested for systematic errors or discrimination
  • Incident log — Record of AI failures, near-misses, and customer complaints (a minimal starter format is sketched after this list)
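If no incident log exists yet, a flat file is enough to start. The sketch below is one possible shape, not a prescribed format; the field names, file path, and example values are assumptions.

    import csv
    import os
    from datetime import date

    # Suggested minimal fields for an AI incident log -- adjust to your context.
    FIELDS = ["date", "tool", "description", "impact", "detected_by", "follow_up"]

    def log_incident(path, **record):
        """Append one incident to a CSV log, writing the header on first use."""
        new_file = not os.path.exists(path) or os.path.getsize(path) == 0
        with open(path, "a", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({k: record.get(k, "") for k in FIELDS})

    log_incident(
        "ai_incidents.csv",                  # hypothetical path
        date=str(date.today()),
        tool="support-triage-bot",           # hypothetical tool
        description="Mis-routed a refund request to the wrong queue",
        impact="Customer reply delayed by one day",
        detected_by="Support lead spot-check",
        follow_up="Weekly sample review of routed tickets",
    )

Even a log this small makes the incident-log control auditable at the next review.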

Regulatory exposure

Regulation / Standard | Applies? | Current Compliance | Action Required
Privacy Act / GDPR | Yes / No | Compliant / Gap | [action]
Industry-specific | Yes / No | Compliant / Gap | [action]
AI-specific (EU AI Act, etc.) | Yes / No | Compliant / Gap | [action]
Internal standards | Yes / No | Compliant / Gap | [action]

5. Competitive Position

Where competitors use AI

Competitor | Known AI Usage | Advantage It Gives Them | Our Response
[name] | [what they do with AI] | [impact] | [match / leapfrog / ignore]
[name] | [what they do with AI] | [impact] | [match / leapfrog / ignore]

Displacement risk

  • Which of our revenue streams could an AI-native competitor undercut?
  • Which customer-facing processes are slow enough that a competitor could win on speed?
  • What would a new entrant with zero legacy systems build differently?

6. Next Quarter Priorities

Top 3 AI initiatives

Rank by impact, not ease. Each initiative needs an owner, a deadline, and a measurable outcome.

# | Initiative | Owner | Deadline | Success Metric | Budget
1 | [highest impact] | [name] | [date] | [measurable outcome] | $[X]
2 | [second] | [name] | [date] | [measurable outcome] | $[X]
3 | [third] | [name] | [date] | [measurable outcome] | $[X]

What to stop

Priorities are about what you stop doing, not just what you start.

  • [Tool to cancel / initiative to kill — why]
  • [Process to revert to manual — why]

What to learn

Topic | Who | Method | By When
[skill gap from section 3] | [person/team] | [course, workshop, hire] | [date]
[emerging capability] | [person/team] | [method] | [date]

7. Review Gate

Before closing this review, verify:

  • Every AI tool in the business is listed in section 1 (including shadow IT)
  • ROI verdict is backed by evidence, not assumption
  • At least one "stop" item identified — if nothing to stop, look harder
  • Next quarter initiatives have owners and deadlines, not just descriptions
  • Governance controls checked against actual practice, not written policy
  • This review is saved where the next reviewer can find and build on it

Overall AI maturity assessment:

Level | Description | This Quarter
1 — Ad hoc | Individual tools, no coordination | [ ]
2 — Emerging | Some workflows automated, no strategy | [ ]
3 — Defined | Strategy exists, initiatives tracked | [ ]
4 — Managed | ROI measured, governance in place | [ ]
5 — Optimised | AI embedded in operations, compounding | [ ]

Target level by next review: [1-5]


Context

Questions

Is your AI strategy driven by what's possible or by what's painful?

  • Which items in your "should touch but doesn't" list have been there for more than one quarter — and what does that reveal?
  • If you cut your AI budget by 50%, which tools would you keep — and does that match your stated priorities?
  • What data are you generating today that becomes a competitive moat in 12 months?