Forecasting

What does the future require to be true?

Superforecasting is disciplined prediction—probabilistic thinking, intellectual humility, continuous learning.

Prompts

Before making any prediction:

  1. "What does the future require to be true?" — Map the dependencies
  2. "What's already here that I'm not seeing?" — Spot the unevenly distributed
  3. "What would make my conclusions wrong?" — Name the falsifying conditions
  4. "What's my confidence level (0-100%)?" — Quantify, don't handwave
  5. "When will I know if I was right?" — Set a resolution date

See Prompting Best Practices for more.

Principles

Below are the main principles for practicing superforecasting effectively.

  1. Think in Probabilities
    • Avoid binary predictions; assign probabilities to outcomes and update them as new information arises.
    • Use Bayesian reasoning to refine forecasts (a minimal sketch follows this list).
  2. Break Down Problems: Decompose complex questions into smaller, manageable sub-questions (e.g., using Fermi estimation).
  3. Update Beliefs Dynamically
    • Treat beliefs as hypotheses that can be revised based on evidence.
    • Embrace "strong opinions loosely held" to adapt quickly to new data.
  4. Avoid Cognitive Biases: Actively counter biases like overconfidence, anchoring, and confirmation bias.
  5. Balance Perspectives: Combine inside views (specific expertise) with outside views (historical data or broader trends).
  6. Track Accuracy: Measure the quality of predictions using tools like Brier scores to identify areas for improvement.
  7. Be Open-Minded and Humble: Recognize the complexity of reality and approach forecasting with intellectual humility.
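
As a concrete illustration of the Bayesian reasoning above, here is a minimal sketch of a single belief update. The scenario, prior, and likelihoods are all hypothetical numbers chosen for the example.

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Return P(H | E): the posterior for hypothesis H after seeing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical forecast: "Product X ships on time", with a 30% prior.
# New evidence: a public beta launches. Assume betas precede on-time
# ships 80% of the time, but also appear in 40% of delayed projects.
posterior = bayes_update(prior=0.30, p_e_given_h=0.80, p_e_given_not_h=0.40)
print(f"Updated probability: {posterior:.0%}")  # ~46%
```

Feeding each posterior back in as the next prior is the "update beliefs dynamically" loop in practice.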

Process

Backward Reasoning: Imagining an ideal future and reasoning backward to identify potential obstacles or risks that could prevent humanity from achieving that outcome.

Forward Modeling: Reasoning through and modeling the societal, economic, and technological changes to come, anticipating rapid advancements and their cascading effects.

  • Frame questions precisely with specific timeframes and measurable outcomes (e.g., "What is the likelihood of X happening by Y date?").
  • Collect diverse data from multiple sources, including historical trends, expert opinions, and current developments.
  • Decompose the main question into smaller components to make it more manageable. Example: To estimate a city's EV adoption rate, analyze factors like infrastructure readiness, consumer preferences, and policy incentives (a sketch follows this list).
  • Start with a base rate from historical data or similar scenarios before adjusting for current factors.
  • Use probabilistic reasoning to estimate the likelihood of each potential outcome.
  • Regularly revise forecasts as new information becomes available or circumstances change.
  • Keep a record of your reasoning, assumptions, and probability estimates for future review.
  • Work in diverse teams to incorporate multiple perspectives and reduce blind spots.
  • Evaluate the accuracy of past predictions and refine your process based on lessons learned.
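
As a rough illustration of the decomposition and base-rate steps above, here is a Fermi-style sketch of the EV adoption example. The base rate and every adjustment factor are invented for illustration.

```python
# Hypothetical Fermi decomposition: estimate next year's EV share of
# new-car sales in one city. Start from a base rate, then adjust for
# the local factors identified during decomposition.

base_rate = 0.08         # assumed national EV share of new-car sales

# Multiplicative adjustments for local conditions (assumed values):
infrastructure = 1.3     # above-average charger density
preferences = 1.1        # purchase intent slightly above the norm
incentives = 1.2         # active local rebate program

estimate = base_rate * infrastructure * preferences * incentives
print(f"Estimated local EV share: {estimate:.1%}")  # ~13.7%
```

The point is not the exact factors but the structure: an explicit base rate, named adjustments you can defend or revise, and a record of every assumption for later review.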

Discipline

Disciplines for Effective Superforecasting:

  1. Curiosity: Maintain an insatiable desire to understand how systems work and ask "what if" questions.
  2. Numeracy: Be comfortable working with probabilities and statistical concepts.
  3. Open-Mindedness: Be willing to consider alternative viewpoints and revise your opinions.
  4. Resilience: Cultivate grit to persist through uncertainty and setbacks.
  5. Communication: Share forecasts clearly and collaborate effectively with others.
  6. Reflection: Regularly review past forecasts to identify strengths and weaknesses in your approach.

Forecasting as Credibility

Every prediction is a bet on your judgment. Tracked over time, your forecasting record becomes your credibility score.

| Forecasting habit | What it builds | What it costs when wrong |
| --- | --- | --- |
| Specific predictions with dates | Verifiable claims others can track | Visible failure — but visible failure with honesty builds more trust than vague success |
| Conviction tags (HIGH/MEDIUM/LOW) | Calibration — others know how much to weight your view | HIGH conviction + wrong = expensive. But never stating HIGH conviction = never being trusted |
| Kill criteria stated in advance | Intellectual honesty | Nothing — naming failure conditions is free and compounds credibility |
| Bayesian updating | Compound credibility — you get better over time | Stubbornness costs more than being wrong once |
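
A minimal sketch of what such a tracked record might look like, combining the conviction tags from the table with the Brier scoring mentioned under Principles. The predictions, probabilities, and outcomes are all invented.

```python
# Hypothetical prediction log. Each entry: claim, forecast probability,
# conviction tag, and the resolved outcome (1 = happened, 0 = did not).
predictions = [
    ("Feature ships by March 31", 0.80, "HIGH", 1),
    ("Competitor raises prices in Q2", 0.60, "MEDIUM", 0),
    ("Team misses hiring target", 0.25, "LOW", 0),
]

def brier_score(entries) -> float:
    """Mean squared error between forecast probabilities and outcomes.
    0.0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - outcome) ** 2 for _, p, _, outcome in entries) / len(entries)

print(f"Brier score: {brier_score(predictions):.3f}")  # lower is better
```

Scored over enough resolved predictions, this single number is the credibility score the section describes.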

The connection to behavioural biases: overconfidence, confirmation bias, and anchoring are the three that most damage forecasting accuracy. Self-mastery of these biases IS forecasting skill.

Questions

  • If your prediction track record IS your credibility, what are you predicting right now that can be scored in 90 days?
  • Which cognitive bias has cost you the most forecasting accuracy — and do you have a mantra for it?
  • When you tag a prediction HIGH conviction and it turns out wrong, does that damage your credibility more than never making HIGH conviction predictions at all?
  • What's the minimum number of scored predictions before your credibility score means anything?