
Chain of Thought

What happens to the decisions you never wrote down?

Explicit reasoning beats implicit reasoning — for humans AND machines. Thinking you can't see can't be debugged. Whether the thinker is biological or silicon, invisible reasoning produces invisible errors.

Implicit vs Explicit

| Dimension | Implicit Thinking | Chain of Thought |
| --- | --- | --- |
| Reasoning | Happens inside your head, feels instant | Written out step by step |
| Errors | Invisible until the outcome fails | Visible at the step where reasoning breaks |
| Learning | Can't review what you can't see | Creates an audit trail you can revisit |
| Confidence | "I just know" (untestable) | "Here's why" (falsifiable) |
| Compounding | Each decision starts fresh | Past reasoning chains inform future ones |

Human Methods

Decision journals. Before the outcome is known, write: what you decided, why, what you expect to happen, how you feel about it. Review periodically. The gap between "what I thought would happen" and "what happened" is where learning lives. Most bad decisions come from defaults — inertia, ego, social pressure, emotion. The journal catches them by making reasoning explicit before the outcome biases your memory.

Feynman technique. Pick a concept. Write an explanation as if teaching a 12-year-old. Where you get stuck or resort to jargon, that's where you don't actually understand. Go back to source, fill the gap, rewrite. The act of externalizing forces precision.

Rubber duck debugging. A programmer explains their code, line by line, to a rubber duck. The bug reveals itself during the explanation. Articulation forces sequential processing — when code is "in your head" you skip steps. The skipped step is where the bug is.

Franklin's moral algebra. Not just a pros/cons list. Over three or four days, Franklin would add arguments to each column. Then cross-cancel: if one Pro equaled two Cons in weight, strike all three. The temporal process prevented recency bias. The cross-cancellation forced explicit weighting.
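The cross-cancellation step can be sketched as a greedy weight-matching procedure. Numeric weights are an assumption for the sketch; Franklin judged relative weight intuitively rather than with numbers:

```python
def moral_algebra(pros: list[tuple[str, int]],
                  cons: list[tuple[str, int]]):
    """Greedy cross-cancellation: strike equal amounts of weight from
    each column until one side is exhausted, then report what survives."""
    remaining_pros = [list(p) for p in sorted(pros, key=lambda x: -x[1])]
    remaining_cons = [list(c) for c in sorted(cons, key=lambda x: -x[1])]
    while remaining_pros and remaining_cons:
        p, c = remaining_pros[0], remaining_cons[0]
        cancelled = min(p[1], c[1])   # strike the shared portion of weight
        p[1] -= cancelled
        c[1] -= cancelled
        if p[1] == 0:
            remaining_pros.pop(0)
        if c[1] == 0:
            remaining_cons.pop(0)
    side = "pros" if remaining_pros else "cons" if remaining_cons else "tie"
    return side, remaining_pros or remaining_cons

pros = [("Better pay", 3), ("Growth", 2)]
cons = [("Relocation", 4)]
print(moral_algebra(pros, cons))  # → ('pros', [['Growth', 1]])
```

The point of the exercise is not the arithmetic but the forcing function: assigning any weight at all makes the trade-offs explicit enough to argue with.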

The AI Parallel

In 2022, Wei et al. at Google Brain found that prompting a large language model to show its reasoning steps before answering lifted math word-problem accuracy from weak baselines to state-of-the-art. The same principle: explicit reasoning beats pattern-matching.

| Approach | What Happens | Result |
| --- | --- | --- |
| Standard prompt | Model jumps to answer | Fragile on complex problems |
| Chain-of-thought | Model shows each step | Dramatically more accurate |
| Self-consistency | Multiple chains, majority vote | Even more robust |

The finding that matters, from Kojima et al. (also 2022): simply appending "Let's think step by step" to a prompt, with no worked examples at all, triggers chain-of-thought reasoning. The magic phrase works because it shifts the model from "retrieve an answer" mode to "generate a reasoning process" mode. It is the same shift that happens when a human picks up a pen.
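The zero-shot trigger and the self-consistency vote can both be sketched without any model API. A minimal sketch: the sampler is stubbed with pre-collected final answers, and the helper names are illustrative:

```python
from collections import Counter

def cot_prompt(question: str) -> str:
    """Zero-shot chain-of-thought trigger (Kojima et al., 2022):
    no examples, just an instruction to reason before answering."""
    return f"{question}\nLet's think step by step."

def self_consistency(sampled_answers: list[str]) -> str:
    """Majority vote over the final answers extracted from several
    independently sampled reasoning chains (Wang et al.'s
    self-consistency decoding). In practice each answer would come
    from one sampled completion; here they are stubbed."""
    votes = Counter(sampled_answers)
    answer, _ = votes.most_common(1)[0]
    return answer

print(cot_prompt("A bat and a ball cost $1.10 together..."))
print(self_consistency(["42", "41", "42"]))  # → "42"
```

The vote is robust precisely because individual chains fail independently: one chain's arithmetic slip rarely lands on the same wrong answer as another's.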

As models improve at reasoning natively, explicit CoT provides diminishing returns. Same applies to humans — experts don't journal every decision, only ones at the edge of their competence.

When to Chain

Match reasoning depth to decision reversibility.

| Decision Type | Reasoning Depth | Example |
| --- | --- | --- |
| Reversible, low-stakes | Don't chain. Decide. Move. | What to eat for lunch |
| Reversible, medium-stakes | Quick mental chain | Which library to use |
| Irreversible, high-stakes | Full chain: journal, sleep on it, review | Quitting a job, architecture choice |
| Recurring pattern | Chain once, then standardize into a protocol | Hiring decisions, investment criteria |

Bezos's framing: "Type 1" decisions (irreversible) deserve chain-of-thought rigor. "Type 2" decisions (reversible) should be made fast. Most decisions are Type 2, but organizations treat them all as Type 1.

The Shadow

Analysis paralysis. Over-documenting trivial decisions. System 2 stays over-engaged when the fear of error outweighs the realistic cost of being wrong. The chain-of-thought shadow is treating every decision as if it deserves a dissertation.

The Meta-Insight

This is systems thinking applied to your own cognition — a feedback loop on your reasoning process:

THINK → EXTERNALIZE → INSPECT → CORRECT → THINK BETTER

The Feynman Technique, decision journals, and CoT prompting are all instances of the same pattern: a system that observes its own processing and uses that observation to self-correct.

That's building a thinking system — which is the capability that compounds.
