Guard Against Chasing Noise with Simple Models


A mid-sized tech firm struggled with wildly fluctuating sales forecasts. They had twenty metrics on their dashboard—web traffic, ad spend, click-through rates, social mentions, you name it. Every quarter they'd build a new model, and every quarter it missed the mark. The finance team was paralyzed by the complexity.

Enter a fresh-faced analyst armed with Occam’s razor and a pencil. She gathered the team and asked: what truly drives conversions? They landed on three clear factors—price, promotion depth, and time of year. She built a simple regression, tested it on the past two quarters, and found it reliably beat their twenty-variable models.
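
The anecdote doesn't include code, but a minimal sketch of that head-to-head test might look like the Python below. The `sales` DataFrame, its column names, and the synthetic data are all stand-ins for the firm's real metrics, not a reconstruction of the analyst's actual model:

```python
# A minimal sketch of the simple-vs-complex comparison; all names are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 40  # synthetic stand-in: forty periods of observations
sales = pd.DataFrame(rng.normal(size=(n, 20)),
                     columns=["price", "promo_depth", "seasonality"]
                             + [f"metric_{i}" for i in range(17)])
# Conversions driven mostly by the three core factors, plus noise.
sales["conversions"] = (-3 * sales["price"] + 2 * sales["promo_depth"]
                        + sales["seasonality"] + rng.normal(scale=0.5, size=n))

SIMPLE = ["price", "promo_depth", "seasonality"]
COMPLEX = [c for c in sales.columns if c != "conversions"]  # all twenty metrics

train, test = sales.iloc[:-8], sales.iloc[-8:]  # hold out the latest periods
for name, features in [("simple", SIMPLE), ("complex", COMPLEX)]:
    model = LinearRegression().fit(train[features], train["conversions"])
    mae = mean_absolute_error(test["conversions"], model.predict(test[features]))
    print(f"{name} model ({len(features)} factors): MAE = {mae:.2f}")
```

On data like this, the three-factor model usually generalizes better because the seventeen extra metrics give the complex model more room to fit noise in the training quarters.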

She then added one new factor at a time—new product launches, click-to-sale delay, and then marketing channel mix—each only if the simpler model fell short. They capped the entire process at seven variables.

The result? Forecasts were finally consistent, confidence in predictions soared, and the finance team stopped chasing false dawns in noisy signals. They’d regularized complexity and finally unlocked actionable insight.

Start by identifying the handful of factors that truly move the needle—no more than three. Build your first model around those drivers, then validate it against new results or an independent check. Only introduce a new factor when your simple version consistently fails, and impose a strict cap on total complexity. This disciplined approach will keep you focused on real signals and avoid the trap of noise masquerading as signal.
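
One way to mechanize this discipline is a greedy forward-selection loop: add a factor only when it actually lowers held-out error, and stop at the cap. The sketch below reuses the hypothetical `sales`, `train`, and `test` objects from the previous example; `MAX_FACTORS` and the `holdout_mae` helper are likewise illustrative assumptions:

```python
# Greedy forward selection with a hard complexity cap, reusing `sales`,
# `train`, and `test` from the sketch above. Illustrative, not prescriptive.
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error

def holdout_mae(features):
    """Held-out error of a linear model using only `features`."""
    model = LinearRegression().fit(train[features], train["conversions"])
    return mean_absolute_error(test["conversions"], model.predict(test[features]))

MAX_FACTORS = 7                                    # the hard complexity cap
chosen = ["price", "promo_depth", "seasonality"]   # the core three
candidates = [c for c in sales.columns if c not in chosen + ["conversions"]]
best = holdout_mae(chosen)

while len(chosen) < MAX_FACTORS and candidates:
    # Score each remaining factor; keep the single best one only if it helps.
    scores = {c: holdout_mae(chosen + [c]) for c in candidates}
    factor = min(scores, key=scores.get)
    if scores[factor] >= best:
        break  # the simple model isn't failing, so stop adding
    chosen.append(factor)
    candidates.remove(factor)
    best = scores[factor]

print(f"kept {len(chosen)} factors: {chosen} (MAE = {best:.2f})")
```

One caveat: repeatedly scoring candidates against the same held-out quarters can itself overfit the selection, so a final check on data untouched by the search is prudent.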

What You'll Achieve

You’ll reduce analysis paralysis and improve forecast reliability. Your models will also be more transparent, faster to update, and easier to act on.

Trim Unhelpful Complexity

1. List Your Core Factors

When facing a decision, jot down the three factors you believe matter most. Resist adding more until you’ve tested these first.

2. Measure Model Performance

Predict the outcome using only those three factors. Test it against any fresh data or a trusted secondary measure (e.g., peer feedback).

3. Add One Factor at a Time

Only if the simple model repeatedly misfires should you introduce a fourth factor. Repeat the test and compare performance.

4. Set a Complexity Cap

Decide on a maximum number of factors—five or fewer—and honor that cap to prevent overfitting to noisy data.
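
The story's closing line about having "regularized complexity" points at a related idea: instead of enforcing a cap by hand, a penalty term can prune factors automatically. The self-contained sketch below uses scikit-learn's Lasso, whose L1 penalty shrinks unhelpful coefficients to exactly zero; the synthetic data and the `alpha` value are assumptions for illustration, not tuned recommendations:

```python
# An alternative to a hand-enforced cap: an L1 (Lasso) penalty zeroes out
# coefficients that don't earn their keep. Data and alpha are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)
X = rng.normal(size=(60, 20))            # twenty candidate factors
y = -3 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + rng.normal(scale=0.5, size=60)

model = Lasso(alpha=0.1).fit(X, y)       # alpha controls penalty strength
kept = np.flatnonzero(np.abs(model.coef_) > 1e-6)
print(f"factors kept by the penalty: {list(kept)}")
```

With data like this, the penalty typically retains only the few factors that genuinely drive the outcome, which is the automated version of the cap in step 4.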

Reflection Questions

  • Which three factors can you start with for your next model?
  • How will you test your simple version before adding more complexity?
  • What cap will you impose to keep your models lean?

Personalization Tips

  • Before customizing your fitness plan, focus only on exercise frequency, session length, and sleep quality—see how well that predicts energy levels.
  • When selecting a new hire, start by evaluating experience, role fit, and culture match; only add more criteria if mis-hires spike.
  • To forecast sales, use just price, promotion spend, and seasonality—then introduce one more variable only if the model consistently fails.

Algorithms to Live By: The Computer Science of Human Decisions

Brian Christian and Tom Griffiths, 2016