Trust But Verify with Base Rates and Bayes


A police detective once charged a suspect with a crime after a single eyewitness identified him. The witness had been 90% accurate in past lineups. But the neighborhood had ten suspects matching the description, each equally likely to be the culprit. Applying Bayes' Theorem shows the chance the suspect was innocent was still about 50%—far from guilt beyond a reasonable doubt.
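Under a simplified model of this example—one guilty party among ten matching suspects, a witness who identifies the guilty person 90% of the time and points at an innocent person 10% of the time—the posterior can be checked in a few lines:

```python
# Simplified model of the detective example:
p_guilty = 0.10          # base rate: 1 of 10 matching suspects is guilty
p_id_if_guilty = 0.90    # witness identifies the guilty party 90% of the time
p_id_if_innocent = 0.10  # witness wrongly points at an innocent person 10% of the time

# Bayes' theorem: P(guilty | ID) = P(ID | guilty) * P(guilty) / P(ID)
p_id = p_id_if_guilty * p_guilty + p_id_if_innocent * (1 - p_guilty)
p_guilty_given_id = p_id_if_guilty * p_guilty / p_id

print(round(p_guilty_given_id, 2))  # 0.5 — a coin flip, not proof
```

The exact posterior depends on how the witness's errors are modeled, but under this simple model the confident identification leaves guilt at even odds.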

This isn’t just courtroom drama; it happens in startups, sales forecasting, and even medical tests. The math begins with the base rate: how often does this event actually occur? In the detective’s town, only one of the ten matching residents was actually guilty. Even a 90% hit rate, applied to the nine innocent look-alikes, produces about as many false identifications as true ones.

Bayes reminds us that our brains want to jump from dramatic evidence to certainty. We hear “90% accurate!” and assume near-guarantee. But we must first anchor on frequency: the real-world rate before the story enters. Multiply that base rate by the accuracy, then divide by the total probability of a positive ID—true positives plus false positives—to get a posterior probability that aligns intuition with reality.

By folding in base rates before we embrace vivid anecdotes, we avoid costly errors—like millions spent on products nobody buys or lab time wasted chasing ineffective compounds. Stories capture us, but numbers keep us honest.

This framework—Bayesian updating—uses both prior data and new evidence in a single formula, helping you trust evidence and verify it with clear math.

Next time you face a single dramatic clue—an endorsement, test result, or customer tip—pause and ask for the raw data. Note how many similar events occurred before, then factor in your or your expert’s hit rate. Tally those numbers in a quick formula and let the resulting odds guide your move rather than excitement or fear. Try it on your next big decision.

What You'll Achieve

You’ll replace gut-level leaps of faith with evidence-grounded judgments, cutting down on false alarms and missed opportunities. Internally, you’ll grow confident in your chance assessments; externally, you’ll allocate resources with precision.

Anchor Decisions in Frequency

1. Gather representative data

Start by finding frequency data for your question: how often did this event happen in similar cases? Record it in plain percentages or counts.

2. Assess your evidence’s accuracy

Estimate how reliably you identify the event—your own track record or an expert’s success rate on this task.

3. Run a quick Bayes check

Compute revised odds: multiply the base rate by your hit rate, then divide by the total share of positive signals (true positives plus false positives). Use the result to adjust your gut instinct.
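As a sketch, the check above can be wrapped in a small helper. The function name and parameters here are illustrative, not from the book:

```python
def posterior(base_rate: float, hit_rate: float, false_alarm_rate: float) -> float:
    """Probability the event is real, given a positive signal.

    base_rate:        how often the event occurs at all (the prior)
    hit_rate:         chance of a positive signal when the event is real
    false_alarm_rate: chance of a positive signal when it is not
    """
    true_positives = base_rate * hit_rate
    false_positives = (1 - base_rate) * false_alarm_rate
    return true_positives / (true_positives + false_positives)

# Example: a rare event (5% base rate) flagged by a 90%-accurate signal
# still has only about a one-in-three chance of being real.
print(round(posterior(0.05, 0.90, 0.10), 2))  # 0.32
```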

4. Weigh priors before stories

Whenever you hear a vivid example, first ask, “How often does this really happen?” and resist dramatic tales without numbers backing them up.

Reflection Questions

  • What big decision did I make without checking base rates?
  • Where can I collect reliable frequency data today?
  • Am I overvaluing vivid stories over solid numbers?

Personalization Tips

  • Before hiring based on a 90% accurate assessment test, compare test accuracy with your track record to avoid false positives.
  • When a medical study claims 80% success, check how many patients improved without treatment to gauge true benefit.
  • If 30% of your team missed deadlines, ask for data on peer performance before blaming individuals.
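To make the first tip concrete, suppose (hypothetical numbers, chosen only for illustration) that 20% of applicants would actually succeed in the role, and the assessment is 90% accurate in both directions:

```python
# Hypothetical numbers, for illustration only:
p_good = 0.20        # base rate: 1 in 5 applicants would succeed in the role
p_pass_good = 0.90   # test passes a good candidate 90% of the time
p_pass_bad = 0.10    # test wrongly passes a weak candidate 10% of the time

# Of all passing candidates, how many are actually good?
p_pass = p_pass_good * p_good + p_pass_bad * (1 - p_good)
p_good_given_pass = p_pass_good * p_good / p_pass

print(round(p_good_given_pass, 2))  # 0.69: a pass is evidence, not a verdict
```

Even a “90% accurate” test, applied to a pool where good fits are the minority, leaves roughly three in ten passing candidates as false positives.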
Seeking Wisdom: From Darwin To Munger

Peter Bevelin 2003
Insight 4 of 7
