OODA Loop

Configuration is not a one-time setup. Features ship weekly. Practitioners innovate daily. A config that was optimal last month is leaving potential on the table today.

The OODA loop applied to AI coding tools:

OBSERVE → What shipped? What are practitioners doing?
ORIENT  → Map new capabilities against current config. Score each affordance 0-5.
DECIDE  → Adopt, reject, or investigate, with evidence, in this session.
ACT     → Change the config. Update the baseline. Start the next cycle.

Daily

| Step | Action | Source |
|------|--------|--------|
| O | Check version. Scan changelog for new features since last session. | claude --version, changelog |
| O | Scan practitioner feeds for new patterns. | GitHub discussions, X, Reddit |
| O | For each new feature: are we using it? Should we be? | Feature → config comparison |
| D | Test each "should be = yes" feature NOW. | Load tool, run command, try config |
| A | Adopt or reject with evidence. Zero deferrals. Update baseline. | Config files, baseline.json |
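The daily version check in the O row can be automated. A minimal sketch, assuming a baseline.json that stores the last-seen version under a "version" key (that schema is an assumption, not a documented format); feed it the output of claude --version:

```python
import json
from pathlib import Path

def version_changed(baseline_path, current_version):
    """Return True when the tool version moved since the last session,
    and record the new version in the baseline so the next check is
    against today. A version bump means a changelog scan is due."""
    path = Path(baseline_path)
    baseline = json.loads(path.read_text()) if path.exists() else {}
    changed = baseline.get("version") != current_version
    if changed:
        baseline["version"] = current_version
        path.write_text(json.dumps(baseline, indent=2))
    return changed
```

Run it at session start; a True result is your trigger to scan the changelog and the practitioner feeds before doing anything else.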

Start an hourly background loop for the rest of the session:

/loop 1h check claude changelog and practitioner feeds for new features, test against current config

Weekly

Everything daily plus:

| Step | Action |
|------|--------|
| O | Deep research: multi-agent patterns, hook innovations, token optimization, skill architecture |
| O | For every affordance scoring below 4, find a practitioner who scores higher |
| O | Inventory: count rules, commands, agents, skills, hooks. Diff against last week. |
| O | Check tool stack: CLIs available, MCP token tax, plugin health |
| D | Five specific fixes, prioritized: broken > redundant > missing > optimizing |
| A | Apply fixes. Update baseline. |
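The weekly inventory row is mechanical enough to script. A sketch, assuming your config lives in .claude/<category> directories (that layout is an assumption; point it at your actual tree):

```python
from pathlib import Path

CATEGORIES = ("rules", "commands", "agents", "skills", "hooks")

def inventory(root="."):
    """Count files per category directory under .claude/."""
    counts = {}
    for name in CATEGORIES:
        d = Path(root) / ".claude" / name
        counts[name] = sum(1 for p in d.rglob("*") if p.is_file()) if d.exists() else 0
    return counts

def week_over_week(last_week, this_week):
    """Per-category delta; a negative number means something was removed."""
    return {k: this_week.get(k, 0) - last_week.get(k, 0) for k in CATEGORIES}
```

Store each week's counts in the baseline; the diff tells you whether the config is growing deliberately or drifting.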

Monthly

Everything weekly plus:

| Step | Action |
|------|--------|
| O | Build full capability map (20+ rows). Every feature that could change config. |
| O | Control system diagnosis: map every element to a concern and enforcement tier |
| O | Detect redundancy (rules duplicating hooks), gaps (concerns with no controller), misplacement (commands that should be skills) |
| D | Five fixes with file paths, changes, concern mapping, tier assignments |
| A | Apply fixes. Update baseline. Legacy Rule: improve the procedure itself. |

Affordance Tracking

Every AI coding tool feature is an affordance — a capability ceiling. Track not just whether you use each feature, but how much of its potential you extract.

| Affordance | Key question | Score 0-5 |
|------------|--------------|-----------|
| Hooks | Is every moment that matters automated? | How many of 15 event types are wired? |
| Agents | Is every agent tuned to its exact job? | Model, effort, tools, memory, maxTurns all set? |
| Skills | Is every repeatable workflow a skill with gates? | Effort frontmatter, receipts, progressive disclosure? |
| Scheduling | Does work happen without being asked? | Session loops, cloud triggers, desktop tasks? |
| Memory | Does the agent start every session smarter? | Timestamps, agent memory, validation hooks? |
| Worktrees | Is every code-writing agent isolated? | Sparse checkout, default isolation? |
| Plugins | Are community patterns adopted? | Persistent state, inline sources? |
| MCP | Are the right tools connected at justified cost? | Token tax measured? Channels? Elicitation? |
| CLI flags | Are session behaviors optimized? | --bare, --name, --worktree, --from-pr? |

The test: Could someone starting fresh with these docs reach 80%+ affordance utilization? If not, the procedure has gaps.
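Utilization on the 0-5 scale reduces to a one-liner. A sketch with illustrative scores (the numbers below are made up, not a real audit), computing the overall percentage and flagging every affordance below 4 for the weekly practitioner hunt:

```python
def utilization(scores):
    """Percent of total affordance potential extracted; each maxes at 5."""
    return 100.0 * sum(scores.values()) / (5 * len(scores)) if scores else 0.0

# Illustrative scores keyed by the affordances in the table:
scores = {"hooks": 4, "agents": 3, "skills": 5, "scheduling": 2, "memory": 4,
          "worktrees": 5, "plugins": 1, "mcp": 3, "cli_flags": 3}
below_target = sorted(k for k, v in scores.items() if v < 4)
```

With these numbers the overall utilization is about 67%, and five affordances fall below 4; that list is the Weekly loop's research agenda.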

Scheduling Tiers

| Tier | Mechanism | Persists? | Needs machine? | Min interval |
|------|-----------|-----------|----------------|--------------|
| Session | /loop, CronCreate | No; dies with session | Yes | 1 min |
| Cloud | /schedule, tasks at claude.ai/code/scheduled | Yes; Anthropic infra | No | 1 hour |
| Desktop | Desktop app scheduler | Yes; your machine | Yes (no open session needed) | 1 min |

Use session scheduling for in-session monitoring (link validation, context health). Use cloud scheduling for autonomous recurring work (daily scans, weekly reindex). Use desktop scheduling when you need local file access without an open session.
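That choice can be encoded directly. A sketch of the decision, mirroring the tier table; the function and its argument names are illustrative, not a real command:

```python
def pick_tier(persists_beyond_session, needs_local_files, min_interval_minutes):
    """Choose a scheduling tier from three requirements."""
    if not persists_beyond_session:
        return "session"   # /loop: 1-min granularity, dies with the session
    if needs_local_files or min_interval_minutes < 60:
        return "desktop"   # persists on your machine; cloud is hourly at best
    return "cloud"         # Anthropic infra; no machine needs to stay on
```

Note the second branch: even without local file access, anything recurring faster than hourly is forced off the cloud tier by its minimum interval.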

CLI vs MCP

MCP servers load tool definitions into context every session — that's a token tax whether the tools are used or not. CLIs cost zero until invoked.

| Use CLI when | Use MCP when |
|--------------|--------------|
| Tool invoked rarely | Tool invoked frequently with structured params |
| Simple text I/O | Complex structured I/O needed |
| Already works via Bash | Agent needs to discover it exists |
| Token budget is tight | Structured output justifies context cost |
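The token tax argument amortizes cleanly. A sketch with illustrative numbers (measure your own tax; these are not real figures):

```python
def amortized_mcp_cost(tax_tokens_per_session, sessions, invocations):
    """Context tokens paid per actual MCP tool call. The definitions load
    every session whether or not the tools fire; a CLI costs ~0 until
    invoked."""
    if invocations == 0:
        return float("inf")   # pure tax: the server never earned its keep
    return tax_tokens_per_session * sessions / invocations
```

A server taxing 8,000 tokens per session across 10 sessions but invoked only 40 times costs 2,000 tokens per call; unless its structured output saves more than that, the CLI column wins.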

Practitioner Tracking

The best innovations come from practitioners, not changelogs. Track who's pushing boundaries and what patterns they've discovered.

See Innovators for the current list. Update during every weekly scan — the community moves fast.

Context

Questions

What percentage of your AI coding tool's affordances are you actually using — and what would 80% look like?

  • OBSERVE: What feature shipped this week that you haven't tested yet? If the answer is "I don't know" — the loop is broken.
  • ORIENT: For each affordance scoring below 4, who in the community has solved it? What can you steal?
  • DECIDE: If you could only adopt one new feature today, which one compounds the most? What's the second-order effect?
  • ACT: When was the last time you changed your config because a new feature shipped — not because something broke?