Chain of Thought
What happens to the decisions you never wrote down?
Explicit reasoning beats implicit reasoning — for humans AND machines. Thinking you can't see can't be debugged. Whether the thinker is biological or silicon, invisible reasoning produces invisible errors.
Implicit vs Explicit
| Dimension | Implicit Thinking | Chain of Thought |
|---|---|---|
| Reasoning | Happens inside your head, feels instant | Written out step by step |
| Errors | Invisible until the outcome fails | Visible at the step where reasoning breaks |
| Learning | Can't review what you can't see | Creates an audit trail you can revisit |
| Confidence | "I just know" (untestable) | "Here's why" (falsifiable) |
| Compounding | Each decision starts fresh | Past reasoning chains inform future ones |
Human Methods
Decision journals. Before the outcome is known, write: what you decided, why, what you expect to happen, how you feel about it. Review periodically. The gap between "what I thought would happen" and "what happened" is where learning lives. Most bad decisions come from defaults — inertia, ego, social pressure, emotion. The journal catches them by making reasoning explicit before the outcome biases your memory.
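The journal protocol above can be sketched as a data structure. This is a minimal sketch, not a canonical format: the field names (`reasoning`, `expectation`, `feeling`) are one possible template for the "what, why, expect, feel" entries described above.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class JournalEntry:
    """One pre-outcome record: decision, reasoning, expectation, feeling."""
    decision: str
    reasoning: str      # why, written before the outcome is known
    expectation: str    # what I expect to happen
    feeling: str        # emotional state at decision time
    logged: date = field(default_factory=date.today)
    outcome: str = ""   # filled in later, at review time

    def review(self, outcome: str) -> str:
        # The gap between expectation and outcome is where learning lives.
        self.outcome = outcome
        return f"expected: {self.expectation!r} / got: {outcome!r}"
```

The point of recording `expectation` up front is that it can't be quietly revised once the outcome is known; the `review` call surfaces the gap explicitly.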
Feynman technique. Pick a concept. Write an explanation as if teaching a 12-year-old. Where you get stuck or resort to jargon, that's where you don't actually understand. Go back to source, fill the gap, rewrite. The act of externalizing forces precision.
Rubber duck debugging. A programmer explains their code, line by line, to a rubber duck. The bug reveals itself during the explanation. Articulation forces sequential processing — when code is "in your head" you skip steps. The skipped step is where the bug is.
Franklin's moral algebra. Not just a pros/cons list. Over three or four days, Franklin would add arguments to each column. Then cross-cancel: if one Pro equaled two Cons in weight, strike all three. The temporal process prevented recency bias. The cross-cancellation forced explicit weighting.
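The cross-cancellation step can be sketched as a small greedy procedure. The weights here are assigned by explicit judgment, as Franklin's were; this sketch cancels a pro against any one to three cons of equal total weight (and is greedy, so cancellation order matters, whereas Franklin judged each match deliberately over days).

```python
from itertools import combinations

def cross_cancel(pros, cons):
    """Strike a pro against any 1-3 cons of equal total weight.
    Each argument is a (label, weight) pair; weights are explicit judgments.
    Returns the arguments left standing on each side."""
    pros, cons = list(pros), list(cons)
    changed = True
    while changed:
        changed = False
        for i, (_, p_weight) in enumerate(pros):
            for size in (1, 2, 3):
                for combo in combinations(range(len(cons)), size):
                    if sum(cons[j][1] for j in combo) == p_weight:
                        pros.pop(i)                  # strike the pro...
                        for j in sorted(combo, reverse=True):
                            cons.pop(j)              # ...and the matching cons
                        changed = True
                        break
                if changed:
                    break
            if changed:
                break
    return pros, cons
```

Whatever survives the cancellation is where, in Franklin's phrase, the balance lies; the value of the exercise is that the weighting is forced into the open rather than done by feel.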
The AI Parallel
In 2022, Wei et al. at Google Brain found that prompting a large language model with worked examples that spell out intermediate reasoning steps dramatically improved accuracy on math word problems, enough for PaLM to surpass the prior fine-tuned state of the art on the GSM8K benchmark. The same principle: explicit reasoning beats pattern-matching.
| Approach | What Happens | Result |
|---|---|---|
| Standard prompt | Model jumps to answer | Fragile on complex problems |
| Chain-of-thought | Model shows each step | Dramatically more accurate |
| Self-consistency | Multiple chains, majority vote | Even more robust |
The finding that matters: simply appending "Let's think step by step" to a prompt — with no examples — triggers chain-of-thought reasoning. The magic phrase works because it shifts the model from "retrieve an answer" mode to "generate a reasoning process" mode. It's the same shift that happens when a human picks up a pen.
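The zero-shot variant is almost trivially small in code; the entire intervention is the appended phrase:

```python
def zero_shot_cot(question: str) -> str:
    # Zero-shot chain-of-thought (Kojima et al., 2022): the appended phrase
    # shifts the model from "retrieve an answer" mode to "generate a
    # reasoning process" mode -- no worked examples required.
    return f"Q: {question}\nA: Let's think step by step."
```

The prompt format (`Q:`/`A:`) is one common convention, not the only one that works.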
As models improve at reasoning natively, explicit CoT provides diminishing returns. The same applies to humans — experts don't journal every decision, only the ones at the edge of their competence.
When to Chain
Match reasoning depth to decision reversibility.
| Decision Type | Reasoning Depth | Example |
|---|---|---|
| Reversible, low-stakes | Don't chain. Decide. Move. | What to eat for lunch |
| Reversible, medium-stakes | Quick mental chain | Which library to use |
| Irreversible, high-stakes | Full chain: journal, sleep on it, review | Quitting a job, architecture choice |
| Recurring pattern | Chain once, then standardize into a protocol | Hiring decisions, investment criteria |
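The matching rule in the table can be written as a tiny lookup. This is a hypothetical encoding for illustration; the return strings paraphrase the table's middle column.

```python
def reasoning_depth(reversible: bool, stakes: str, recurring: bool = False) -> str:
    """Match reasoning depth to decision reversibility and stakes."""
    if recurring:
        return "chain once, then standardize into a protocol"
    if not reversible:
        return "full chain: journal, sleep on it, review"
    # Reversible decisions: depth scales with stakes.
    return "quick mental chain" if stakes == "medium" else "don't chain; decide and move"
```

The point of writing it down as a function is the same as the table's: the depth of deliberation is a deliberate choice, not a mood.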
Bezos's framing: "Type 1" decisions (irreversible) deserve chain-of-thought rigor. "Type 2" decisions (reversible) should be made fast. Most decisions are Type 2, but organizations treat them all as Type 1.
The Shadow
Analysis paralysis. Over-documenting trivial decisions. System 2 stays over-engaged when the fear of error outweighs the realistic cost of being wrong. The chain-of-thought shadow is treating every decision as if it deserves a dissertation.
The Meta-Insight
This is systems thinking applied to your own cognition — a feedback loop on your reasoning process:
THINK → EXTERNALIZE → INSPECT → CORRECT → THINK BETTER
The Feynman Technique, decision journals, and CoT prompting are all instances of the same pattern: a system that observes its own processing and uses that observation to self-correct.
That's building a thinking system — which is the capability that compounds.
Context
- Decision Journal — The human implementation of chain of thought
- Meta-Learning — Learning applied to learning itself
- First Principles — Decomposition IS a chain of thought
- Inversion — Chain of thought run backwards
- Predictions — Documented reasoning chains tested against reality
- Decisions — Where chains become choices