Use counterfactual thinking to detect empty claims and see reality
A powerful way to see through confident talk is to ask a simple question: would they say this even if it weren’t true? Imagine a spokesperson on a livestream insisting that a new feature is “revolutionary.” In a notebook, you write their sentence verbatim. Then you split the page: world A, the feature really is revolutionary; world B, it isn’t.
In world A, what would they say? Probably the same glowing phrase. In world B, what would they say? Strikingly, also the same phrase. If the statement doesn’t change across worlds, it’s not actually evidence. It’s theater. Your phone buzzes, you glance back at your notes, and the trick becomes obvious: look for divergence. If a claim would be uttered regardless of reality, treat it as noise until you find a test that could make a supporter change their mind.
Later that week, a colleague pitches an “AI‑powered” solution. You run the same test. In a true world, what’s different? Perhaps the model reduces false positives by 30% on last month’s data. In a false world, the vendor would still say “AI‑powered,” but the confusion matrix wouldn’t budge. So you ask for a blinded back‑test against your own dataset. Suddenly, the conversation turns from adjectives to numbers.
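To see what that blinded back‑test looks like in practice, here is a minimal sketch in Python. Everything in it is hypothetical: the labels stand in for last month's incidents, and the two prediction lists stand in for your current system and the vendor's model.

```python
# A minimal sketch of the blinded back-test: score the vendor's flags
# against your own labeled data and compare false positive rates.
# All labels and predictions below are invented for illustration.

def false_positive_rate(y_true, y_pred):
    """Fraction of actual negatives (label 0) that were flagged anyway."""
    flags_on_negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    if not flags_on_negatives:
        return 0.0
    return sum(flags_on_negatives) / len(flags_on_negatives)

# Ground truth from last month: 0 = benign, 1 = real incident.
y_true = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]

# Flags raised by the current system vs. the vendor's "AI-powered" model.
baseline_pred = [1, 0, 1, 1, 1, 0, 1, 0, 1, 0]
vendor_pred   = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0]

fpr_base = false_positive_rate(y_true, baseline_pred)
fpr_vendor = false_positive_rate(y_true, vendor_pred)
reduction = (fpr_base - fpr_vendor) / fpr_base

print(f"Baseline false positive rate: {fpr_base:.2f}")
print(f"Vendor false positive rate:   {fpr_vendor:.2f}")
print(f"Relative reduction:           {reduction:.0%}")
```

In the true world the numbers move; in the false world only the adjectives do.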
This is counterfactual reasoning, a staple of good inference. It prevents persuasion from hijacking your attention by forcing a claim to risk being wrong. You’re not being cynical; you’re separating signal from noise. Layer in incentives and you gain even more traction: people often say what serves their goals, not what updates your beliefs. Asking, “What evidence would change your mind?” invokes falsifiability, a hallmark of scientific thinking. When there is no such evidence, you’re not evaluating insight; you’re observing allegiance.
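The same test can be stated in the language of probability: a statement is evidence only insofar as it is more likely in the world where the claim is true than in the world where it is false. A minimal sketch of that update, with made‑up probabilities purely for illustration:

```python
# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio,
# where the ratio is P(words | claim true) / P(words | claim false).
# The probabilities below are invented for illustration.

def posterior_odds(prior_odds, p_if_true, p_if_false):
    """Update the odds on a claim after hearing a statement."""
    return prior_odds * (p_if_true / p_if_false)

prior = 1.0  # start at even odds that the feature is revolutionary

# "Revolutionary!" gets said in world A and world B alike: ratio = 1.
after_slogan = posterior_odds(prior, p_if_true=0.9, p_if_false=0.9)

# A passed blinded back-test is far more likely if the claim is true.
after_test = posterior_odds(prior, p_if_true=0.8, p_if_false=0.1)

print(f"Odds after the slogan: {after_slogan:.1f}  (no update: pure theater)")
print(f"Odds after the test:   {after_test:.1f}  (an 8x swing: real evidence)")
```

When the two likelihoods are equal, the odds don’t move at all; that is the precise sense in which “they’d say it anyway” means the words carry no information.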
When you hear a big claim, write the exact words, sketch two worlds—one where the claim is true and one where it’s false—and ask if the phrasing would be different. If not, treat it as noise and request a measurable test that could disconfirm it, like a blinded back‑test on your data. Keep the focus on what would change a supporter’s mind. Use this on the next bold promise that lands in your inbox.
What You'll Achieve
Internally, you’ll feel less swayed by confident language and more anchored in evidence. Externally, you’ll save time and money by demanding tests that separate signal from noise.
Run the ‘would they say this anyway?’ test
Identify the claim
Write the exact statement you’re evaluating, from a pitch, policy, or opinion.
Generate two worlds
Ask, “If this claim were true, what would they say?” and “If this claim were false, what would they say?” Be concrete.
Compare for divergence
If the words would be the same in both worlds, the statement contains no information. Treat it as noise.
Seek disconfirming evidence
Look for a test or metric that would make a supporter change their mind. If none exists, note the incentive, not the insight. (A short sketch turning these four steps into a reusable checklist follows below.)
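If you like to keep such checks in a notebook or a script, here is a minimal sketch that encodes the four steps as a simple record; the class, its field names, and the example claim are all hypothetical.

```python
# A minimal sketch of the four-step test as a reusable record.
# The structure and the example claim are hypothetical illustrations.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ClaimTest:
    claim: str                         # 1. the exact statement, verbatim
    words_if_true: str                 # 2. what they'd say if it were true
    words_if_false: str                # 2. what they'd say if it were false
    disconfirming_test: Optional[str]  # 4. what would change a supporter's mind

    def is_informative(self) -> bool:
        # 3. divergence check: identical words in both worlds carry no signal
        return self.words_if_true != self.words_if_false

pitch = ClaimTest(
    claim="This feature is revolutionary.",
    words_if_true="This feature is revolutionary.",
    words_if_false="This feature is revolutionary.",
    disconfirming_test=None,
)

if not pitch.is_informative():
    print("No divergence: treat as noise and request a measurable test.")
if pitch.disconfirming_test is None:
    print("No disconfirming test: note the incentive, not the insight.")
```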
Reflection Questions
- What would a believer have to see to admit this is wrong?
- If I couldn’t use adjectives, what metric would I require?
- What incentives might be shaping this statement?
- How can I design a low‑cost test that would change my mind?
Personalization Tips
- Work: A vendor promises “AI‑powered insights.” Ask what measurable output would be different if the AI didn’t exist.
- Health: An ad claims “clinically proven.” Ask for the study design and what a null result would have looked like.
- News: A pundit insists a policy is working. Ask what data would convince them it’s failing.
Tribe of Mentors: Short Life Advice from the Best in the World
Ready to Take Action?
Get the Mentorist app and turn insights like these into daily habits.