Confirmation Bias

We seek information that confirms what we already believe — and ignore what contradicts it.

The Mechanism

Confirmation bias is the tendency to search for, interpret, and remember information in ways that confirm prior beliefs. It's not deliberate dishonesty — it's System 1 (fast, pattern-matching) overriding System 2 (slow, evaluative).

The bias is self-reinforcing: the stronger the belief, the more aggressively we filter for confirming evidence. Experts can suffer from it more than novices in their own domain, because they have more established beliefs to protect.

In analysis: A founder who believes their product is working will read ambiguous retention data as confirmation. A founder who doubts it will read the same data as alarm. Same data, opposite conclusions.

In hiring: Interviewers often form an impression within the first few minutes, then spend the rest of the interview confirming it. Structured interviews were developed in part to counteract exactly this.

In research: Cherry-picking studies, over-indexing on positive results, dismissing contradictory evidence as methodologically flawed — these are confirmation bias operating at institutional scale.

In markets: Investors who hold a thesis stay long through contradictory signals, waiting for the "real" signal that confirms the thesis. The thesis survives until the loss exceeds what the belief can absorb.

The disconfirmation protocol: Instead of asking "what confirms my view?" ask "what would prove me wrong — and does that evidence exist?" If you can't answer the second question, the belief isn't testable.
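The filtering mechanism can be made concrete with a toy simulation (all names here are illustrative, not from any real study): if the true effect is zero but you only "remember" the observations that support your prior, your estimate drifts firmly in the prior's direction, while weighing every observation keeps you honest.

```python
import random

def sample_evidence(n, true_effect=0.0, noise=1.0, seed=42):
    """Draw n noisy observations of an effect whose true size is true_effect."""
    rng = random.Random(seed)
    return [true_effect + rng.gauss(0, noise) for _ in range(n)]

def biased_estimate(observations):
    """Confirmation bias as a filter: keep only the observations
    that support the prior belief 'the effect is positive'."""
    confirming = [x for x in observations if x > 0]
    return sum(confirming) / len(confirming)

def unbiased_estimate(observations):
    """Disconfirmation protocol: weigh every observation,
    including the ones that contradict the prior."""
    return sum(observations) / len(observations)

obs = sample_evidence(10_000)
print(f"unbiased estimate: {unbiased_estimate(obs):+.3f}")  # hovers near zero
print(f"biased estimate:   {biased_estimate(obs):+.3f}")    # firmly positive
```

Same data, opposite conclusions: the only difference between the two estimators is whether disconfirming observations are allowed into the average.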

Context

  • Loss Aversion — Loss aversion intensifies confirmation bias in high-stakes decisions
  • Decision Making — Frameworks that build disconfirmation into the decision process
  • First Principles — Reasoning from evidence rather than from prior conclusions

Questions

  • How do you design a decision process that actively surfaces disconfirming evidence before a conclusion is reached?

  • At what point does deep domain expertise become a liability — when does knowing a lot make you more susceptible to confirmation bias in that domain?
  • How does confirmation bias interact with group dynamics — does dissent increase or decrease when the leader signals a strong prior?
  • If an AI model is trained on data that reflects human confirmation bias, does it inherit the bias or correct for it — and how would you test which?