
Claude Code Plans

How do you stop an LLM from scoring before researching, or building before defining the problem?

Plans are JSON-structured task DAGs that enforce execution order, quality gates, and cognitive mode switching. They are not a Claude Code feature — they are our own invention, sitting in .ai/plans/ and read by skills and agents as structured context.

Skills define WHAT to do. Plans add WHEN, in what ORDER, with what CHECKS, and how to RESUME.


Plans

| Fact | Detail |
|---|---|
| What | JSON task DAGs with phases, dependencies, quality gates, mindset switching, and token budgets |
| Where | `.ai/plans/{plan-name}/template.json` + `README.md` |
| Loaded via | Step 0 in the matching skill. `/create-prd` and `/depin-analysis` read their plan as their first action. |
| Status | Custom invention, not a Claude Code native feature. The same pattern is used in the engineering repo as DB-native plans. |

Why Plans Exist

Skills define procedures. But procedures alone don't prevent:

| Failure Mode | What Happens | Plan Fix |
|---|---|---|
| Scoring without research | LLM assigns scores from memory | `depends_on` blocks the score phase until research outputs exist |
| Context blowout | Web research fills the entire context window | `estimated_tokens` per task enables budget tracking |
| Lost progress | Session dies mid-research; restart from scratch | `execution_order` + status tracking enable resumption |
| Wrong cognitive mode | Analyst mode during data gathering | `mindset` field switches cognitive approach per task |
| Sequential bottleneck | 4 research streams run one at a time | `parallelizable_groups` enables concurrent execution |
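The `depends_on` fix in the first row reduces to a simple gate: a task may start only once every task it depends on is complete. A minimal sketch (task shapes mirror the plan JSON; the runner logic is illustrative, not the actual implementation):

```python
# Minimal sketch of the depends_on gate: a task is runnable only when
# every dependency has status "complete". Illustrative only.
def runnable(task, statuses):
    """True if every depends_on entry has status 'complete'."""
    return all(statuses.get(dep) == "complete" for dep in task.get("depends_on", []))

score = {"id": "score.1", "depends_on": ["research.1"]}
statuses = {"research.1": "pending"}

assert not runnable(score, statuses)   # research unfinished: scoring is blocked
statuses["research.1"] = "complete"
assert runnable(score, statuses)       # evidence exists: scoring may start
```

The same check, run before every task, is what stops the LLM from scoring before researching.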

Plan Templates

Four templates, each following the same JSON schema.

| Plan | Phases | Tasks | Tokens | Purpose |
|---|---|---|---|---|
| industry-analysis | 6 | 18 | ~19.3k | Investment-grade industry analysis (5P) |
| create-prd | 7 | 13 | ~7.1k | PRD creation with hierarchy check |
| depin-analysis | 7 | 13 | ~10.7k | DePIN token evaluation against thesis |
| fact-star-migration | 5 | 10 | ~7.4k | Content restructure (FACT hub + STAR pages) |

Industry Analysis

SCAFFOLD → ORIENT → ANALYZE → SYNTHESIZE → EVALUATE → ARTICULATE

The original plan template. Creates 5P directory structure, gathers primary data, applies Porter's/S-curve/VVFL frameworks, synthesizes transformation thesis, scores opportunity, produces validated MDX. Gold standard: telecom industry.

Parallelizable: analyze.1 + analyze.3 (Porter's + VVFL mapping). synthesize.1 + synthesize.2 (thesis + friction mapping).

Create PRD

COLLECT → DEFINE → HIERARCHY → SCORE → SCAFFOLD → REGISTER → VERIFY

Forces problem-first thinking. The hierarchy phase reads all existing PRDs and blocks creation if composition overlap exceeds 50% — prevents writing zoom levels as siblings. Scoring uses calibration examples from score-prds skill.

Parallelizable: score.1 + score.2 (Pain/Demand scored alongside Edge/Trend/Conversion).

Hard gate: score.3 requires user approval before proceeding. No auto-pilot past scoring.

DePIN Analysis

LOAD → RESEARCH → SCORE → STRESS → PREDICT → UPDATE → VERIFY

Evidence first, score second, prediction last. Loads existing knowledge before researching to avoid duplication. Research phase runs 4 parallel streams (protocol docs, on-chain data, news, competitors). Stress test phase runs 3 failure scenarios before any prediction is written.

Parallelizable: research.1 + research.2 + research.3 (three research streams). score.1 + score.2 (sections 1-5 alongside 6-10).

FACT-STAR Migration

AUDIT → BUILD → MOVE → TRANSFORM → VERIFY

Restructures content from containers of implementations to FACT hubs that link out to STAR implementations in their natural domain context. Audits current state, builds destination pages, moves content with link updates, transforms the source index, verifies no broken links.

Parallelizable: build.1 + build.2 + build.3 (destination pages built concurrently). move.1 + move.2 (independent file moves).


Schema

Every task in every plan follows the same structure.

{
  "id": "research.1",
  "name": "Protocol documentation",
  "mindset": "researcher",
  "skills": [],
  "agents": ["Explore"],
  "inputs": {
    "required": ["load.2.outputs"],
    "optional": []
  },
  "input_checks": [
    {
      "check": "Gaps identified",
      "how": "Check load.2 knowledge_gaps",
      "fallback": "Return to load phase"
    }
  ],
  "outputs": ["whitepaper_summary", "tokenomics_model", "technical_architecture"],
  "quality_gate": "Whitepaper or equivalent documentation read.",
  "depends_on": ["load.2"],
  "estimated_tokens": 1500
}
| Field | Purpose |
|---|---|
| `id` | Phase.sequence — enables dependency references |
| `mindset` | Cognitive mode (researcher, analyst, evaluator, writer) |
| `skills` | Skills to invoke during execution |
| `agents` | Subagent types to delegate to |
| `inputs.required` | Must exist before the task starts |
| `input_checks` | Verification before execution, with a fallback |
| `outputs` | Named artifacts produced |
| `quality_gate` | Must pass before downstream tasks can start |
| `depends_on` | DAG edges — blocks until listed tasks complete |
| `estimated_tokens` | Budget for context management |
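Since every task shares this shape, a plan can be linted before execution. A minimal sketch (the required-field list is taken from the table above; the validator itself is an assumption, not something in the repo):

```python
# Illustrative task linter; REQUIRED_FIELDS mirrors the schema table above.
REQUIRED_FIELDS = {"id", "name", "mindset", "inputs", "outputs",
                   "quality_gate", "depends_on", "estimated_tokens"}

def validate_task(task: dict) -> list[str]:
    """Return a list of problems; an empty list means the task is well-formed."""
    problems = [f"missing field: {f}" for f in sorted(REQUIRED_FIELDS - task.keys())]
    if "." not in task.get("id", ""):
        problems.append("id must be phase.sequence, e.g. 'research.1'")
    problems += [f"bad dependency reference: {d}"
                 for d in task.get("depends_on", []) if "." not in d]
    return problems
```

Running this over every task in a template catches malformed IDs and dangling `depends_on` references before any work starts.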

Mindsets

Plans switch cognitive modes per task. Each mindset has a bias that overrides the default.

| Mindset | Bias | Used In |
|---|---|---|
| researcher | Completeness over speed | Data gathering, loading knowledge |
| analyst | Rigor over intuition | Framework application, hierarchy |
| synthesizer | Coherence over completeness | Pattern recognition, thesis building |
| evaluator | Evidence over opinion | Scoring, stress testing, judgment |
| writer | Clarity over cleverness | Predictions, page updates, MDX |
| engineer | Working over perfect | Scaffolding, file creation |
| investigator | Depth over speed | Problem definition, root cause |
| auditor | Completeness over speed | Verification, maintenance checklists |
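One way a runner could surface the bias when activating a task's mindset (the mapping mirrors the table above; the preamble wording is hypothetical, not the repo's actual prompt):

```python
# Illustrative mindset -> bias mapping, taken from the table above.
MINDSET_BIAS = {
    "researcher":   "Completeness over speed",
    "analyst":      "Rigor over intuition",
    "synthesizer":  "Coherence over completeness",
    "evaluator":    "Evidence over opinion",
    "writer":       "Clarity over cleverness",
    "engineer":     "Working over perfect",
    "investigator": "Depth over speed",
    "auditor":      "Completeness over speed",
}

def mindset_preamble(task: dict) -> str:
    """Hypothetical prompt prefix injected when a task's mindset activates."""
    bias = MINDSET_BIAS[task["mindset"]]
    return f"Mindset: {task['mindset']}. Bias: {bias.lower()}."
```

For an evaluator task this yields a prefix stating "evidence over opinion" before any scoring begins.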

Wiring

Plans connect to skills via Step 0 — the skill's first action is to read the plan JSON.

| Plan | Skill | Invocation | Step 0 Reads |
|---|---|---|---|
| create-prd | create-prd | `/create-prd` | `.ai/plans/create-prd/template.json` |
| depin-analysis | depin-analysis | `/depin-analysis` | `.ai/plans/depin-analysis/template.json` |
| industry-analysis | (no skill) | Manual | Read template directly |
| fact-star-migration | (no skill) | Manual | Read template directly |
/create-prd
→ user_prompt_submit.py routes to skill
→ SKILL.md loaded as context
→ Step 0: Read .ai/plans/create-prd/template.json
→ DAG enforces: COLLECT → DEFINE → HIERARCHY → SCORE → SCAFFOLD → REGISTER → VERIFY
→ Steps 1-5: execute within DAG structure

Plans without a matching skill are read directly when starting the workflow. The plan JSON is the orchestration layer; the skill is the execution layer.


Execution

Creating a Plan

cd .ai/plans/{plan-name}/
cp template.json {instance-name}.json
# Replace {{PLACEHOLDERS}} with actual values
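The placeholder step can be scripted rather than done by hand. A sketch in Python (the placeholder and value names here are made up for illustration):

```python
import json
import re

def instantiate(template_text: str, values: dict[str, str]) -> str:
    """Replace {{PLACEHOLDER}} markers with concrete values; fail loudly on gaps."""
    def sub(match: re.Match) -> str:
        key = match.group(1)
        if key not in values:
            raise KeyError(f"no value for placeholder {key}")
        return values[key]
    return re.sub(r"\{\{(\w+)\}\}", sub, template_text)

# Hypothetical template fragment and values, for illustration only.
template = '{"name": "{{PROJECT}}", "owner": "{{OWNER}}"}'
instance = instantiate(template, {"PROJECT": "helium-analysis", "OWNER": "research"})
assert json.loads(instance)["name"] == "helium-analysis"
```

Raising on a missing value is the point: a half-instantiated plan with stray `{{PLACEHOLDERS}}` should never reach execution.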

Running a Plan

Follow the execution_order array. For each task:

  1. Run input_checks — verify preconditions
  2. Activate mindset — switch cognitive mode
  3. Invoke skills and agents — do the work
  4. Produce outputs — create named artifacts
  5. Pass quality_gate — don't proceed until it passes

Tasks in the same parallelizable_groups entry can run concurrently.
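The loop above can be sketched as follows. Mindset activation and skill/agent invocation are stubbed out; the field names follow the plan JSON, but this runner is illustrative, not the actual implementation:

```python
def run_plan(plan, execute):
    """Walk execution_order; tasks sharing a parallelizable_groups entry
    could be dispatched concurrently (they run sequentially here)."""
    tasks = {t["id"]: t for t in plan["tasks"]}
    for task_id in plan["execution_order"]:
        task = tasks[task_id]
        # 1-2. input_checks and mindset activation would happen here
        outputs = execute(task)  # 3-4. invoke skills/agents, produce artifacts
        if not outputs:          # 5. quality gate: halt, do not proceed
            raise RuntimeError(f"quality gate failed: {task['quality_gate']}")
        task["status"] = "complete"

# Tiny two-task plan for illustration.
plan = {
    "execution_order": ["load.1", "research.1"],
    "tasks": [
        {"id": "load.1", "quality_gate": "Knowledge loaded"},
        {"id": "research.1", "quality_gate": "Docs read"},
    ],
}
run_plan(plan, execute=lambda task: [task["id"] + ".out"])
assert all(t["status"] == "complete" for t in plan["tasks"])
```

The hard stop on a failed quality gate is what prevents auto-piloting past a phase that produced nothing.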

Resuming a Plan

Update status field per task as work progresses:

{ "status": "complete", "completed_at": "2026-02-24T10:00:00Z" }

On resume, scan for first incomplete task in execution_order.


When to Create a Plan

Not every workflow needs a plan. Use these judgment criteria.

| Criterion | Skill Sufficient | Plan Needed |
|---|---|---|
| Single cognitive mode | Yes | No |
| Multi-phase with dependencies | No | Yes |
| Needs quality gates between phases | Maybe | Yes |
| Resumability across sessions | No | Yes |
| Parallelizable work streams | No | Yes |
| Token budget pressure | No | Yes |
| Reused across instances | No | Yes |

Threshold: If 3+ criteria point to "Plan Needed", create the plan.


Plan vs Skill vs Script

Three layers, each with a role.

| Layer | What | Example | Executes |
|---|---|---|---|
| Plan | Task DAG with gates | `create-prd/template.json` | Read by agent |
| Skill | Procedure with steps | `create-prd/SKILL.md` | Invoked by agent |
| Script | Deterministic math | `scripts/prioritise-prds.mjs` | Run by Node.js |

Plans reference skills. Skills invoke scripts. The builder never validates their own work.

Plan (what order, what checks)
→ Skill (what to do, what quality)
→ Script (deterministic computation)

Evolution Path

Plans currently live as JSON files read by agents. The engineering repo has already evolved to the next stage.

| Stage | Where | State Tracking | Execution |
|---|---|---|---|
| File-based | `.ai/plans/*.json` | Manual status updates | Agent reads JSON, follows order |
| DB-native | Engineering `plan-cli.ts` | Convex DB state machine | `plan advance` CLI command |
| Agent-native | Future | Agent protocol | Agents create, advance, report |

The dream repo uses file-based plans. The engineering repo uses DB-native plans via plan-cli.ts with state tracking in Convex. The graduation path: file → DB → agent protocol.
