
Chain of Thought

What happens to the decisions you never wrote down?

Explicit reasoning beats implicit reasoning — for humans AND machines. Thinking you can't see can't be debugged. Whether the thinker is biological or silicon, invisible reasoning produces invisible errors.

Implicit vs Explicit

| Dimension | Implicit Thinking | Chain of Thought |
| --- | --- | --- |
| Reasoning | Happens inside your head, feels instant | Written out step by step |
| Errors | Invisible until the outcome fails | Visible at the step where reasoning breaks |
| Learning | Can't review what you can't see | Creates an audit trail you can revisit |
| Confidence | "I just know" (untestable) | "Here's why" (falsifiable) |
| Compounding | Each decision starts fresh | Past reasoning chains inform future ones |

Human Methods

Decision journals. Before the outcome is known, write: what you decided, why, what you expect to happen, how you feel about it. Review periodically. The gap between "what I thought would happen" and "what happened" is where learning lives. Most bad decisions come from defaults — inertia, ego, social pressure, emotion. The journal catches them by making reasoning explicit before the outcome biases your memory.
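The journal entry above can be sketched as a small data structure. This is a minimal illustration, not a standard template — the field names, the example decision, and the review format are all assumptions:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Minimal sketch of a decision-journal entry: capture the reasoning
# BEFORE the outcome is known, then compare at review time.
@dataclass
class JournalEntry:
    decided_on: date
    decision: str
    reasoning: str            # why, written before the outcome can bias memory
    expectation: str          # what you expect to happen
    mood: str                 # how you feel about it right now
    outcome: Optional[str] = None  # filled in later, at review time

    def review(self, outcome: str) -> str:
        """Record the outcome; the expectation/outcome gap is the lesson."""
        self.outcome = outcome
        return f"Expected: {self.expectation} | Actual: {self.outcome}"

# Hypothetical entry, written the day the decision was made.
entry = JournalEntry(
    decided_on=date(2024, 3, 1),
    decision="Adopt library X for the payments service",
    reasoning="Smaller API surface; the team already knows it",
    expectation="Integration done in two sprints",
    mood="Confident, slight time pressure",
)
print(entry.review("Took four sprints; the auth flow was underestimated"))
```

The point of the structure is the forced comparison: `review` puts the pre-outcome expectation next to the actual result, so the gap cannot be quietly rewritten by hindsight.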

Feynman technique. Pick a concept. Write an explanation as if teaching a 12-year-old. Where you get stuck or resort to jargon, that's where you don't actually understand. Go back to source, fill the gap, rewrite. The act of externalizing forces precision.

Rubber duck debugging. A programmer explains their code, line by line, to a rubber duck. The bug reveals itself during the explanation. Articulation forces sequential processing — when code is "in your head" you skip steps. The skipped step is where the bug is.

Franklin's moral algebra. Not just a pros/cons list. Over three or four days, Franklin would add arguments to each column. Then cross-cancel: if one Pro equaled two Cons in weight, strike all three. The temporal process prevented recency bias. The cross-cancellation forced explicit weighting.
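The cross-cancellation step can be sketched in code. This is a deliberate simplification — it only strikes one pro against one con of equal weight (Franklin also cancelled combinations, e.g. one 2-weight pro against two 1-weight cons), and the weights are whatever subjective scores the decider assigns:

```python
# Sketch of Franklin-style cross-cancellation over weighted arguments.
# Only one-for-one equal-weight strikes are implemented here.
def moral_algebra(pros, cons):
    """pros, cons: lists of (argument, weight) tuples.
    Strike equal-weight pairs across columns, then compare what remains."""
    pros, cons = list(pros), list(cons)
    struck = []
    for p in list(pros):
        match = next((c for c in cons if c[1] == p[1]), None)
        if match is not None:
            pros.remove(p)
            cons.remove(match)
            struck.append((p[0], match[0]))
    leans = "pro" if sum(w for _, w in pros) > sum(w for _, w in cons) else "con"
    return {"struck": struck, "pros_left": pros, "cons_left": cons, "leans": leans}

# Hypothetical job-offer decision with subjective 1-5 weights.
result = moral_algebra(
    pros=[("More pay", 3), ("Learning", 2)],
    cons=[("Longer commute", 2), ("Career risk", 1)],
)
print(result["leans"])  # → pro
```

What the code preserves from Franklin's method is the explicit weighting: you cannot cancel two arguments without first committing to how much each one weighs.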

The AI Parallel

In 2022, Wei et al. at Google Brain found that prompting a model to show its reasoning steps before answering lifted accuracy on math word problems (benchmarks like GSM8K) from baseline to state-of-the-art. The same principle: explicit reasoning beats pattern-matching.

| Approach | What Happens | Result |
| --- | --- | --- |
| Standard prompt | Model jumps to answer | Fragile on complex problems |
| Chain-of-thought | Model shows each step | Dramatically more accurate |
| Self-consistency | Multiple chains, majority vote | Even more robust |

The follow-up finding that matters (Kojima et al., 2022): simply appending "Let's think step by step" to a prompt — with no examples — triggers chain-of-thought reasoning. The magic phrase works because it shifts the model from "retrieve an answer" mode to "generate a reasoning process" mode. Same shift that happens when a human picks up a pen.
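The self-consistency row in the table above reduces to a majority vote over the final answers of independently sampled chains. In this sketch, canned strings stand in for model outputs, and the "Answer:" extraction convention is an assumption:

```python
from collections import Counter

def extract_answer(chain: str) -> str:
    # Assumes each sampled chain ends with "Answer: <value>".
    return chain.rsplit("Answer:", 1)[-1].strip()

def self_consistent(chains):
    """Majority vote over final answers from several reasoning chains."""
    votes = Counter(extract_answer(c) for c in chains)
    return votes.most_common(1)[0][0]

# Three hypothetical chains for the same problem; one contains a slip.
chains = [
    "3 cars with 4 wheels each is 3 * 4 = 12. Answer: 12",
    "4 + 4 + 4 = 12. Answer: 12",
    "3 times 4 is 7 (arithmetic slip). Answer: 7",
]
print(self_consistent(chains))  # → 12
```

The robustness comes from independence: a single chain can derail on one bad step, but it is unlikely that most chains derail to the *same* wrong answer.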

As models improve at reasoning natively, explicit CoT provides diminishing returns. Same applies to humans — experts don't journal every decision, only ones at the edge of their competence.

When to Chain

Match reasoning depth to decision reversibility.

| Decision Type | Reasoning Depth | Example |
| --- | --- | --- |
| Reversible, low-stakes | Don't chain. Decide. Move. | What to eat for lunch |
| Reversible, medium-stakes | Quick mental chain | Which library to use |
| Irreversible, high-stakes | Full chain: journal, sleep on it, review | Quitting a job, architecture choice |
| Recurring pattern | Chain once, then standardize into a protocol | Hiring decisions, investment criteria |

Bezos's framing: "Type 1" decisions (irreversible) deserve chain-of-thought rigor. "Type 2" decisions (reversible) should be made fast. Most decisions are Type 2, but organizations treat them all as Type 1.
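The matching rule in the table above is small enough to write as a lookup. The labels are the article's; the `stakes` categories and the function shape are illustrative assumptions:

```python
# Sketch of "match reasoning depth to decision reversibility" as a lookup.
def reasoning_depth(reversible: bool, stakes: str, recurring: bool = False) -> str:
    if recurring:
        return "chain once, then standardize into a protocol"
    if not reversible:
        return "full chain: journal, sleep on it, review"
    if stakes == "medium":
        return "quick mental chain"
    return "don't chain; decide and move"

print(reasoning_depth(reversible=False, stakes="high"))
# → full chain: journal, sleep on it, review
```

Note where the asymmetry lives: reversibility dominates stakes. An irreversible decision gets the full chain regardless of how small it feels, which is the Type 1 / Type 2 distinction in code form.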

The Shadow

Analysis paralysis. Over-documenting trivial decisions. System 2 over-engaged when the fear of error outweighs the realistic cost of being wrong. The chain-of-thought shadow is treating every decision as if it deserves a dissertation.

The Compression Arc

Chain of thought is not the destination. It is the starting point.

Full chain → Pattern → Tight Five → Mantra

You chain first — every step explicit, every assumption named. Over time, patterns emerge. The patterns compress into handles you can hold under pressure. The handles compress into one phrase that resets you when conditions are noisy.

| Stage | What It Looks Like | Example |
| --- | --- | --- |
| Full chain | Pages of explicit reasoning | Decision journal entry |
| Pattern | "When X, I tend to Y" | Recurring bias identified |
| Tight Five | 5 compressed handles | Five questions that never stop mattering |
| Mantra | One phrase, instant reset | "Character before capability" |

A tight five is not short because little was written. It is short because it has been cut until only signal remains. That cutting happened through chains of thought — hundreds of them — until the pattern was clear enough to compress.

A mantra is a chain so well-practiced it became instinct.

The Prerequisite

Chain of thought only works if you are willing to externalize. The person who "just knows" has a closed feedback loop — no input port. The same closed loop as someone who cannot be taught because they already believe they know.

Ego closes the chain. Willingness opens it. Character before capability — because the willingness to show your reasoning, be wrong, and correct is a character trait, not a skill.

CoT is also a teaching instrument that works without a teacher. You externalize, inspect, correct — no coach required. The rubber duck doesn't talk back. That's the point.

The Meta-Insight

This is systems thinking applied to your own cognition — a feedback loop on your reasoning process:

THINK → EXTERNALIZE → INSPECT → CORRECT → THINK BETTER

The Feynman Technique, decision journals, and CoT prompting are all instances of the same pattern: a system that observes its own processing and uses that observation to self-correct.

That's building a thinking system — which is the capability that compounds.

Questions

  • What decision are you making right now with implicit reasoning that deserves a chain?

  • At what point does a chain of thought become a tight five — and how do you know the compression is ready?
  • If ego closes the chain, what practice opens it again?
  • Which of your recurring decisions still lacks a standardized protocol — and what is that costing you?