Chain of Thought
What happens to the decisions you never wrote down?
Explicit reasoning beats implicit reasoning — for humans AND machines. Thinking you can't see can't be debugged. Whether the thinker is biological or silicon, invisible reasoning produces invisible errors.
Implicit vs Explicit
| Dimension | Implicit Thinking | Chain of Thought |
|---|---|---|
| Reasoning | Happens inside your head, feels instant | Written out step by step |
| Errors | Invisible until the outcome fails | Visible at the step where reasoning breaks |
| Learning | Can't review what you can't see | Creates an audit trail you can revisit |
| Confidence | "I just know" (untestable) | "Here's why" (falsifiable) |
| Compounding | Each decision starts fresh | Past reasoning chains inform future ones |
Human Methods
Decision journals. Before the outcome is known, write: what you decided, why, what you expect to happen, how you feel about it. Review periodically. The gap between "what I thought would happen" and "what happened" is where learning lives. Most bad decisions come from defaults — inertia, ego, social pressure, emotion. The journal catches them by making reasoning explicit before the outcome biases your memory.
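The journal loop above can be sketched as a tiny data structure: capture the reasoning before the outcome can bias memory, then diff expectation against reality at review time. A minimal sketch; the names (`DecisionEntry`, `review`) are illustrative, not a prescribed format.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DecisionEntry:
    """One decision, captured before the outcome is known."""
    decision: str
    reasoning: str               # why, written explicitly
    expectation: str             # what you predict will happen
    mood: str                    # how you feel at decision time
    decided_on: date = field(default_factory=date.today)
    outcome: Optional[str] = None  # filled in at review time

    def review(self, outcome: str) -> str:
        """Record what actually happened and surface the gap."""
        self.outcome = outcome
        return (f"Expected: {self.expectation}\n"
                f"Actual:   {self.outcome}")

entry = DecisionEntry(
    decision="Adopt library X for the payments service",
    reasoning="Smaller API surface; team already knows it",
    expectation="Integration done in under a week",
    mood="Confident, slightly rushed",
)
print(entry.review("Took three weeks; undocumented edge cases"))
```

The gap between the two printed lines is the learning signal the journal exists to capture.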
Feynman technique. Pick a concept. Write an explanation as if teaching a 12-year-old. Where you get stuck or resort to jargon, that's where you don't actually understand. Go back to source, fill the gap, rewrite. The act of externalizing forces precision.
Rubber duck debugging. A programmer explains their code, line by line, to a rubber duck. The bug reveals itself during the explanation. Articulation forces sequential processing — when code is "in your head" you skip steps. The skipped step is where the bug is.
Franklin's moral algebra. Not just a pros/cons list. Over three or four days, Franklin would add arguments to each column. Then cross-cancel: if one Pro equaled two Cons in weight, strike all three. The temporal process prevented recency bias. The cross-cancellation forced explicit weighting.
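Franklin's cross-cancellation reduces to a weighted ledger: give each argument a rough weight, let equal weight on opposite sides cancel, and see what survives. A sketch only; the weights and function name are illustrative assumptions, not Franklin's notation.

```python
def moral_algebra(pros, cons):
    """Franklin-style cross-cancellation on weighted arguments.

    pros/cons: lists of (argument, weight) pairs. One Pro of
    weight 2 cancels two Cons of weight 1; the residual shows
    which column actually wins after cancellation.
    """
    pro_total = sum(weight for _, weight in pros)
    con_total = sum(weight for _, weight in cons)
    residual = pro_total - con_total
    if residual > 0:
        return f"Pros win by {residual}"
    if residual < 0:
        return f"Cons win by {-residual}"
    return "Balanced: gather more arguments"

pros = [("Better pay", 2), ("New skills", 1)]
cons = [("Longer commute", 1), ("Unknown team", 1)]
print(moral_algebra(pros, cons))  # Pros win by 1
```

The arithmetic is trivial on purpose. The value is in the forcing function: you cannot cross-cancel arguments you never wrote down or weighted.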
The AI Parallel
In 2022, Wei et al. at Google Brain found that prompting a large language model to write out its reasoning steps before answering lifted accuracy on math word-problem benchmarks from baseline to then state-of-the-art. The same principle: explicit reasoning beats pattern-matching.
| Approach | What Happens | Result |
|---|---|---|
| Standard prompt | Model jumps to answer | Fragile on complex problems |
| Chain-of-thought | Model shows each step | Dramatically more accurate |
| Self-consistency | Multiple chains, majority vote | Even more robust |
The finding that matters: follow-up work (Kojima et al., 2022) showed that simply appending "Let's think step by step" to a prompt — with no worked examples at all — triggers chain-of-thought reasoning. The magic phrase works because it shifts the model from "retrieve an answer" mode to "generate a reasoning process" mode. The same shift happens when a human picks up a pen.
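The self-consistency row in the table above can be sketched without any model at all: sample several independent reasoning chains, extract each final answer, and take the majority vote. Here `sample_chain` is a stand-in that returns noisy fake answers; a real implementation would call a language model and parse its output.

```python
import random
from collections import Counter

def sample_chain(question, rng):
    """Stand-in for one sampled chain-of-thought model call.

    A real version would prompt a model with "Let's think step
    by step" at a nonzero temperature and parse the final answer.
    Here we fake noisy answers to show the voting logic.
    """
    # Most chains reach the right answer; some wander off.
    return rng.choices(["42", "42", "42", "41"], k=1)[0]

def self_consistency(question, n_chains=9, seed=0):
    """Majority vote over independently sampled reasoning chains."""
    rng = random.Random(seed)
    answers = [sample_chain(question, rng) for _ in range(n_chains)]
    answer, votes = Counter(answers).most_common(1)[0]
    return answer, votes / n_chains

answer, agreement = self_consistency("What is 6 * 7?")
print(answer, f"({agreement:.0%} of chains agree)")
```

The agreement ratio doubles as a rough confidence signal: a question where the chains scatter is a question the system does not actually understand — the machine version of "I just know."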
As models improve at reasoning natively, explicit CoT provides diminishing returns. Same applies to humans — experts don't journal every decision, only ones at the edge of their competence.
When to Chain
Match reasoning depth to decision reversibility.
| Decision Type | Reasoning Depth | Example |
|---|---|---|
| Reversible, low-stakes | Don't chain. Decide. Move. | What to eat for lunch |
| Reversible, medium-stakes | Quick mental chain | Which library to use |
| Irreversible, high-stakes | Full chain: journal, sleep on it, review | Quitting a job, architecture choice |
| Recurring pattern | Chain once, then standardize into a protocol | Hiring decisions, investment criteria |
Bezos's framing: "Type 1" decisions (irreversible) deserve chain-of-thought rigor. "Type 2" decisions (reversible) should be made fast. Most decisions are Type 2, but organizations treat them all as Type 1.
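The routing table above is itself a protocol you can standardize. A minimal sketch, with thresholds mirroring the table; the function name and labels are illustrative, not a canonical taxonomy.

```python
def reasoning_depth(reversible: bool, stakes: str, recurring: bool = False) -> str:
    """Route a decision to the depth of chain it deserves.

    stakes: "low", "medium", or "high". Recurring patterns get
    chained once, then standardized, regardless of stakes.
    """
    if recurring:
        return "chain once, then standardize into a protocol"
    if not reversible:
        return "full chain: journal, sleep on it, review"
    if stakes == "low":
        return "don't chain: decide and move"
    return "quick mental chain"

print(reasoning_depth(reversible=True, stakes="low"))    # lunch
print(reasoning_depth(reversible=False, stakes="high"))  # quitting a job
```

Encoding the routing rule is the point: it makes "am I treating a Type 2 decision like a Type 1?" a question you can actually ask.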
The Shadow
Analysis paralysis. Over-documenting trivial decisions. System 2 stays over-engaged when the fear of error outweighs the realistic cost of being wrong. The chain-of-thought shadow is treating every decision as if it deserves a dissertation.
The Compression Arc
Chain of thought is not the destination. It is the starting point.
Full chain → Pattern → Tight Five → Mantra
You chain first — every step explicit, every assumption named. Over time, patterns emerge. The patterns compress into handles you can hold under pressure. The handles compress into one phrase that resets you when conditions are noisy.
| Stage | What It Looks Like | Example |
|---|---|---|
| Full chain | Pages of explicit reasoning | Decision journal entry |
| Pattern | "When X, I tend to Y" | Recurring bias identified |
| Tight Five | 5 compressed handles | Five questions that never stop mattering |
| Mantra | One phrase, instant reset | "Character before capability" |
A tight five is not short because little was written. It is short because it has been cut until only signal remains. That cutting happened through chains of thought — hundreds of them — until the pattern was clear enough to compress.
A mantra is a chain so well-practiced it became instinct.
The Prerequisite
Chain of thought only works if you are willing to externalize. The person who "just knows" has a closed feedback loop — no input port. The same closed loop as someone who cannot be taught because they already believe they know.
Ego closes the chain. Willingness opens it. Character before capability — because the willingness to show your reasoning, be wrong, and correct is a character trait, not a skill.
CoT is also a teaching instrument that works without a teacher. You externalize, inspect, correct — no coach required. The rubber duck doesn't talk back. That's the point.
The Meta-Insight
This is systems thinking applied to your own cognition — a feedback loop on your reasoning process:
THINK → EXTERNALIZE → INSPECT → CORRECT → THINK BETTER
The Feynman Technique, decision journals, and CoT prompting are all instances of the same pattern: a system that observes its own processing and uses that observation to self-correct.
That's building a thinking system — which is the capability that compounds.
Context
- Decision Journal — The human implementation of chain of thought
- Meta-Learning — Learning applied to learning itself
- First Principles — Decomposition IS a chain of thought
- Inversion — Chain of thought run backwards
- Predictions — Documented reasoning chains tested against reality
- Decisions — Where chains become choices
- Tight Five — What chain of thought compresses into
- Character vs Capability — Willingness to externalize is character, not skill
Questions
- What decision are you making right now with implicit reasoning that deserves a chain?
- At what point does a chain of thought become a tight five — and how do you know the compression is ready?
- If ego closes the chain, what practice opens it again?
- Which of your recurring decisions still lacks a standardized protocol — and what is that costing you?