OODA Loop
Configuration is not a one-time setup. Features ship weekly. Practitioners innovate daily. A config that was optimal last month is leaving potential on the table today.
The OODA loop applied to AI coding tools:
OBSERVE → What shipped? What are practitioners doing?
ORIENT → Map new capabilities against current config. Score each affordance 0-5.
DECIDE → Adopt, reject, or investigate — with evidence, in this session.
ACT → Change the config. Update the baseline. Start the next cycle.
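The four steps above can be sketched as one pass of a loop. This is a minimal illustration, not any tool's API — `Feature`, the score threshold, and the config shape are all assumptions made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    in_config: bool = False   # already adopted?
    score: int = 0            # affordance score 0-5 (assigned during ORIENT)

def ooda_cycle(shipped, config):
    """One pass: observe new features, orient by score, decide, act on config."""
    # OBSERVE: anything shipped that the config doesn't mention yet
    new = [f for f in shipped if f.name not in config]
    # ORIENT + DECIDE: adopt anything scoring 4+, reject the rest — no deferrals
    decisions = {f.name: ("adopt" if f.score >= 4 else "reject") for f in new}
    # ACT: adopted features enter the config baseline, starting the next cycle
    for name, decision in decisions.items():
        if decision == "adopt":
            config[name] = {"adopted": True}
    return decisions

config = {"hooks": {"adopted": True}}
shipped = [Feature("hooks", True, 5), Feature("worktrees", False, 4),
           Feature("plugins", False, 2)]
print(ooda_cycle(shipped, config))  # {'worktrees': 'adopt', 'plugins': 'reject'}
```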
Daily
| Step | Action | Source |
|---|---|---|
| O | Check version. Scan changelog for new features since last session. | claude --version, changelog |
| O | Scan practitioner feeds for new patterns. | GitHub discussions, X, Reddit |
| O | For each new feature: are we using it? Should we be? | Feature → config comparison |
| D | Test each "should be = yes" feature NOW. | Load tool, run command, try config |
| A | Adopt or reject with evidence. Zero deferrals. Update baseline. | Config files, baseline.json |
Start an hourly background loop for the rest of the session:
/loop 1h check claude changelog and practitioner feeds for new features, test against current config
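The "adopt or reject with evidence, zero deferrals" step implies an append-only decision log. A sketch of what updating `baseline.json` could look like — the schema here is an assumption, not a documented format:

```python
import datetime
import json
import pathlib

def record_decision(baseline_path, feature, decision, evidence):
    """Append an adopt/reject decision, with its evidence, to the baseline file."""
    path = pathlib.Path(baseline_path)
    baseline = json.loads(path.read_text()) if path.exists() else {"decisions": []}
    baseline["decisions"].append({
        "feature": feature,
        "decision": decision,   # "adopt" | "reject" — no "defer"
        "evidence": evidence,   # what was run, what happened
        "date": datetime.date.today().isoformat(),
    })
    path.write_text(json.dumps(baseline, indent=2))
```

Because every entry carries evidence and a date, next session's OBSERVE step can diff against this file instead of re-litigating old decisions.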
Weekly
Everything daily plus:
| Step | Action |
|---|---|
| O | Deep research: multi-agent patterns, hook innovations, token optimization, skill architecture |
| O | For every affordance scoring below 4, find a practitioner who scores higher |
| O | Inventory: count rules, commands, agents, skills, hooks. Diff against last week. |
| O | Check tool stack: CLIs available, MCP token tax, plugin health |
| D | Five specific fixes — prioritized: broken > redundant > missing > optimization |
| A | Apply fixes. Update baseline. |
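The weekly inventory step (count rules, commands, agents, skills, hooks; diff against last week) is mechanical enough to script. A sketch, assuming a directory-per-kind layout of markdown files — adjust the paths to your actual config repo:

```python
import pathlib

KINDS = ["rules", "commands", "agents", "skills", "hooks"]

def inventory(root):
    """Count markdown files per element kind under the config root."""
    counts = {}
    for kind in KINDS:
        d = pathlib.Path(root, kind)
        counts[kind] = len(list(d.glob("*.md"))) if d.is_dir() else 0
    return counts

def week_over_week(current, last_week):
    """Signed delta per kind; positive means the config grew."""
    return {k: current[k] - last_week.get(k, 0) for k in current}
```

Store each week's `inventory()` output alongside the baseline, and the diff becomes a one-liner in the next scan.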
Monthly
Everything weekly plus:
| Step | Action |
|---|---|
| O | Build full capability map (20+ rows). Every feature that could change config. |
| O | Control system diagnosis: map every element to a concern and enforcement tier |
| O | Detect redundancy (rules duplicating hooks), gaps (concerns with no controller), misplacement (commands that should be skills) |
| D | Five fixes with file paths, changes, concern mapping, tier assignments |
| A | Apply fixes. Update baseline. Legacy Rule: improve the procedure itself. |
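Two of the monthly diagnoses — redundancy (multiple controllers for one concern) and gaps (concerns with no controller) — fall out of the concern mapping automatically. A sketch; the pair format and example concern names are mine, and misplacement detection still needs human judgment:

```python
from collections import defaultdict

def diagnose(controls, concerns):
    """controls: (element, concern) pairs from the control-system map.
    Returns concerns with 2+ controllers (redundancy candidates) and
    concerns with none (gaps)."""
    by_concern = defaultdict(list)
    for element, concern in controls:
        by_concern[concern].append(element)
    redundant = {c: els for c, els in by_concern.items() if len(els) > 1}
    gaps = [c for c in concerns if c not in by_concern]
    return redundant, gaps

controls = [("no-force-push rule", "git safety"),
            ("pre-push hook", "git safety"),
            ("format-on-save hook", "formatting")]
redundant, gaps = diagnose(controls, ["git safety", "formatting", "secrets"])
# "git safety" is enforced twice (a rule duplicating a hook); "secrets" has no controller
```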
Affordance Tracking
Every AI coding tool feature is an affordance — a capability ceiling. Track not just whether you use each feature, but how much of its potential you extract.
| Affordance | Key question | Scoring criterion (0-5) |
|---|---|---|
| Hooks | Is every moment that matters automated? | How many of 15 event types are wired? |
| Agents | Is every agent tuned to its exact job? | Model, effort, tools, memory, maxTurns all set? |
| Skills | Is every repeatable workflow a skill with gates? | Effort frontmatter, receipts, progressive disclosure? |
| Scheduling | Does work happen without being asked? | Session loops, cloud triggers, desktop tasks? |
| Memory | Does the agent start every session smarter? | Timestamps, agent memory, validation hooks? |
| Worktrees | Is every code-writing agent isolated? | Sparse checkout, default isolation? |
| Plugins | Are community patterns adopted? | Persistent state, inline sources? |
| MCP | Are the right tools connected at justified cost? | Token tax measured? Channels? Elicitation? |
| CLI flags | Are session behaviors optimized? | --bare, --name, --worktree, --from-pr? |
The test: Could someone starting fresh with these docs reach 80%+ affordance utilization? If not, the procedure has gaps.
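Utilization against that 80% bar is just the sum of scores over the total ceiling. A minimal sketch — the example scores below are invented, not a real assessment:

```python
def utilization(scores):
    """scores: affordance name -> 0-5. Fraction of the total ceiling extracted."""
    return sum(scores.values()) / (5 * len(scores)) if scores else 0.0

scores = {"hooks": 4, "agents": 3, "skills": 5, "scheduling": 2, "memory": 4,
          "worktrees": 5, "plugins": 1, "mcp": 3, "cli_flags": 3}
below_bar = [k for k, v in scores.items() if v < 4]  # feeds the ORIENT step
print(f"{utilization(scores):.0%}", below_bar)       # 67% — short of the 80% target
```

The `below_bar` list is exactly the set of affordances for which the weekly scan should go find a practitioner who scores higher.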
Scheduling Tiers
| Tier | Mechanism | Persists? | Needs machine? | Min interval |
|---|---|---|---|---|
| Session | /loop, CronCreate | No — dies with session | Yes | 1 min |
| Cloud | /schedule, tasks at claude.ai/code/scheduled | Yes — Anthropic infra | No | 1 hour |
| Desktop | Desktop app scheduler | Yes — your machine | Yes (no open session needed) | 1 min |
Use session scheduling for in-session monitoring (link validation, context health). Use cloud scheduling for autonomous recurring work (daily scans, weekly reindex). Use desktop scheduling when you need local file access without an open session.
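The three distinguishing columns of the table reduce to a small decision function. A sketch — the boolean names are mine, not tool terminology:

```python
def pick_tier(must_persist, needs_local_files, session_open):
    """Choose a scheduling tier per the table above."""
    if not must_persist and session_open:
        return "session"   # /loop — dies with the session, 1-min granularity
    if needs_local_files:
        return "desktop"   # your machine, no open session required
    return "cloud"         # Anthropic infra, 1-hour minimum interval
```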
CLI vs MCP
MCP servers load tool definitions into context every session — that's a token tax whether the tools are used or not. CLIs cost zero until invoked.
| Use CLI when | Use MCP when |
|---|---|
| Tool invoked rarely | Tool invoked frequently with structured params |
| Simple text I/O | Complex structured I/O needed |
| Already works via Bash | Agent needs to discover it exists |
| Token budget is tight | Structured output justifies context cost |
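Read as a heuristic, the table above roughly collapses to: MCP only when structured calls are frequent enough to amortize the standing token tax. A sketch — the frequency threshold is illustrative, not measured:

```python
def cli_or_mcp(calls_per_session, needs_structured_io, token_budget_tight):
    """Pick an integration style per the table's heuristics."""
    if needs_structured_io and calls_per_session >= 3 and not token_budget_tight:
        return "mcp"   # context cost amortized over frequent structured calls
    return "cli"       # zero tokens until invoked; already works via Bash
```

The one case this misses is discoverability: if the agent must learn the tool exists on its own, that alone can justify MCP regardless of call frequency.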
Practitioner Tracking
The best innovations come from practitioners, not changelogs. Track who's pushing boundaries and what patterns they've discovered.
See Innovators for the current list. Update during every weekly scan — the community moves fast.
Context
- Config Architecture — components, enforcement hierarchy, two-repo setup
- Best Practices — principles that compound
- Innovators — practitioners to track weekly
- Cron — scheduling mechanics in depth
Questions
What percentage of your AI coding tool's affordances are you actually using — and what would 80% look like?
- OBSERVE: What feature shipped this week that you haven't tested yet? If the answer is "I don't know" — the loop is broken.
- ORIENT: For each affordance scoring below 4, who in the community has solved it? What can you steal?
- DECIDE: If you could only adopt one new feature today, which one compounds the most? What's the second-order effect?
- ACT: When was the last time you changed your config because a new feature shipped — not because something broke?