When more information hurts decisions, switch to testable bets

Difficulty: Hard (requires significant effort) · Recommended

A common intuition says that more information leads to better decisions. In practice, past a modest threshold, extra information often amplifies confidence more than accuracy. A classic study of professional handicappers found exactly this: given more and more data about each race, their accuracy flatlined after a handful of cues while their certainty kept climbing. You've probably felt it yourself: five tabs in, you were thoughtful; by tab thirty, you were certain.

One product lead recognized this pattern in herself. She set an information budget of three hours for vendor research and wrote her belief as a bet: “I’m 60% confident vendor A will outperform B on support response time and integration reliability.” Then she designed a small, risky test that could make her wrong: a one‑week pilot with five real tickets, timed and scored against a checklist. Her tea went cold as she wrote it, but the plan felt clean.

A week later, the surprising result: vendor B responded faster with fewer handoffs. She updated her odds and chose B, recording the lesson: glossy demos hid support bottlenecks, so future evaluations would include real tickets. She didn’t become omniscient. She became iterative.

Structurally, this approach works because it limits information to reduce overconfidence, forces you to encode your belief in numbers (which combats hindsight bias), and builds in falsifiability so your test can actually teach you. Updating odds after a time‑boxed experiment is Bayesian thinking in plain clothes: prior, likelihood, posterior. When you repeat this loop, your accuracy improves without drowning in data. More important, your decisions speed up.
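To make that update step concrete, here is a minimal sketch in Python. The function is just Bayes' rule in odds form; the prior echoes the 60% bet from the story, but the two likelihoods (how probable the pilot's result would be under each hypothesis) are illustrative assumptions, not numbers from the book.

```python
# A minimal sketch of updating a bet with Bayes' rule after one test.

def bayes_update(prior_p: float, p_result_if_true: float, p_result_if_false: float) -> float:
    """Return the posterior probability that the belief is true."""
    prior_odds = prior_p / (1.0 - prior_p)
    likelihood_ratio = p_result_if_true / p_result_if_false
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Prior: the product lead's 60% bet that vendor A beats vendor B.
prior = 0.60

# Illustrative likelihoods: the pilot result ("B faster, fewer handoffs")
# seems 20% likely if A really is better, 70% likely if it is not.
posterior = bayes_update(prior, p_result_if_true=0.20, p_result_if_false=0.70)
print(f"Confidence in A after the pilot: {posterior:.0%}")  # ~30%
```

The point is not the exact numbers but the discipline: the test result moves the odds by a stated amount, and the new number becomes the prior for the next loop.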

Before your next choice, set an information budget—how many sources or hours you’ll allow—then write your belief as a simple bet with odds. Design a small test with a metric and time window that could prove you wrong, run it, and afterward update your odds and capture the lesson. Keep the loop tight enough to avoid analysis paralysis. Use this on one decision this week.

What You'll Achieve

Internally, you’ll feel less overwhelmed and less seduced by false certainty. Externally, you’ll make faster, more accurate calls by running small tests that teach you quickly.

Cap your information, then specify a disconfirming test

1. Set an information budget

Decide up front how many sources or hours you’ll spend before choosing. Constraints protect you from analysis paralysis.

2. Write your current best guess

State your decision as a bet with odds: “I’m 60% confident option A beats B.” This keeps you honest about uncertainty.

3. Design a small, risky test

Pick a metric and a time window that could prove you wrong. If it can’t disconfirm you, it won’t inform you.

4. Review outcome, update odds

After the test, update your belief in numbers, not vibes. Capture what you learned and what you’ll change next.
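One way to make this last step stick is a tiny decision log. Below is a hypothetical sketch in Python; the field names are illustrative, not a prescribed schema, and the values echo the vendor story above.

```python
# A hypothetical decision-log entry: the bet, the test, the outcome, the lesson.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    question: str     # the decision being made
    prior: float      # confidence before the test, as a probability
    test: str         # the disconfirming test that was run
    outcome: str      # what actually happened
    posterior: float  # confidence after the test
    lesson: str       # what to change next time
    logged: date = field(default_factory=date.today)

record = DecisionRecord(
    question="Vendor A vs. vendor B for support",
    prior=0.60,
    test="One-week pilot: five real tickets, timed and scored against a checklist",
    outcome="B responded faster with fewer handoffs",
    posterior=0.30,
    lesson="Glossy demos hide support bottlenecks; include real tickets next time",
)
print(record)
```

The format matters less than the habit: even a plain spreadsheet with these columns works, as long as the prior is written down before the outcome is known.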

Reflection Questions

  • Where does more information start making me slower, not smarter?
  • What odds would I honestly put on my current favorite?
  • What small test could embarrass my belief—in a good way?
  • What lesson will I record to improve the next decision?

Personalization Tips

  • Work: Limit research on a tool to three hours, write a 60% A/40% B bet, and run a one‑week pilot.
  • Health: Cap nutrition reading to two articles, make a 70% bet on a simple plan, and track energy and sleep for 14 days.
  • Money: Pick two index funds within 48 hours and monitor a simple rule quarterly instead of doom‑scrolling daily.

From Tribe of Mentors: Short Life Advice from the Best in the World by Timothy Ferriss (2017) · Insight 6 of 10
