Use algorithms and checklists to outperform expert intuition when stakes are high
A city agency believed seasoned staff could spot risk on sight. They reviewed cases face to face and prided themselves on reading people. A small team quietly piloted a four‑factor checklist built from outcomes: prior no‑shows, stability markers, severity of the current issue, and support access. The tool didn't decide cases; it flagged the top 10% for closer review and recommended release for the bottom 30%.
After a month, the results stung. The group the checklist flagged re‑offended 25% less when detained, and the group it rated low risk did fine when released. Staff had been over‑detaining charming people with poor records and under‑detaining quiet people with risk flags. A supervisor admitted, "We meant well. Our read just wasn't consistent." They kept the face time but sequenced it after the checklist.
The rollout wasn’t smooth. A veteran grumbled that a card couldn’t see what he saw. Then a tough case hit the desk. The coffee on the counter cooled as the team compared notes. The checklist said escalate. The veteran said release. Two weeks later, the veteran asked for the latest version of the card.
The lesson wasn’t that humans don’t matter. It was that structure beats vibe for ranking risk, and humans are better at explaining, coaching, and making exceptions. When your tool stays humble and observable, people adopt it. When it lights up early and clearly, teams do better work with less drama.
Pick one repeatable decision that matters, then write down three to five simple variables you can observe consistently. Give each a rough weight and set a threshold to either pause or escalate, not to replace your judgment. Run the tool in parallel for two weeks and note where it disagrees with your gut, then check outcomes and tweak weights where the tool did better. Keep the final say human, but let the card speak first. Try it with one decision this month and see what shifts.
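If you want to see the mechanics, here is a minimal sketch in Python of what such a card can look like. The factor names, weights, and cutoffs are placeholders for illustration, not a validated instrument; swap in the three to five variables you actually track.

```python
# A minimal checklist "card": observable factors, rough weights, and thresholds.
# All factor names, weights, and cutoffs below are illustrative placeholders.

FACTORS = {
    "prior_no_shows": 3,      # weight 1-3, based on your own history or published data
    "stability_markers": -2,  # protective factor: subtracts from the risk score
    "issue_severity": 3,
    "support_access": -1,
}

ESCALATE_AT = 5   # at or above this score, pause and escalate for closer review
RELEASE_AT = 0    # at or below this score, recommend release / fast-track

def score_case(case: dict) -> tuple[int, str]:
    """Return (score, recommendation). The card only flags; a human decides."""
    total = sum(weight * case.get(name, 0) for name, weight in FACTORS.items())
    if total >= ESCALATE_AT:
        return total, "escalate for closer review"
    if total <= RELEASE_AT:
        return total, "recommend release"
    return total, "proceed with usual judgment"

# Example: a quiet case with risk flags a team might otherwise under-weight.
print(score_case({"prior_no_shows": 2, "stability_markers": 0,
                  "issue_severity": 1, "support_access": 1}))
```

The point of the shape is that the card only scores and flags; the final call stays with a person.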
What You'll Achieve
Internally, reduce overconfidence and decision fatigue by sequencing structure before intuition. Externally, improve accuracy, consistency, and fairness in repeatable high‑stakes calls.
Build a tiny decision aid today
Choose one decision type
Select a repeatable decision with real consequences, such as pretrial risk screening, loan approvals, or classroom referrals.
List 3–5 predictive variables
Pick simple, available factors linked to outcomes, such as prior attendance, task complexity, or repayment history. Keep it humble and transparent.
Assign weights and thresholds
Use rough weights (1–3) based on history or published data. Set a threshold that triggers pause or escalation, not an automatic yes/no.
Pilot and compare
Run the aid for two weeks in parallel with your usual judgment. Track accuracy and disagreement cases. Adjust weights where the aid beats you.
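One way to run that two-week pilot is to log every case with the card's call, your gut call, and how it turned out, then review only the disagreements. A rough sketch, assuming a CSV log with the made-up columns card_call, gut_call, and went_badly:

```python
import csv
from collections import Counter

# During the pilot, log each case: the card's call, your gut call,
# and whether the case went badly. Column names here are assumptions.
def review_disagreements(log_path: str) -> Counter:
    """Count, among disagreement cases only, which side's call matched the outcome."""
    wins = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            card, gut = row["card_call"], row["gut_call"]
            if card == gut:
                continue  # agreements don't tell you whose read is sharper
            went_badly = row["went_badly"] == "1"
            # "escalate" was the right call if the case went badly; "release" if it didn't
            right_call = "escalate" if went_badly else "release"
            if card == right_call:
                wins["card"] += 1
            elif gut == right_call:
                wins["gut"] += 1
    return wins

# e.g. Counter({'card': 9, 'gut': 4}) says: adjust your weights toward the card.
```

A tally that leans toward the card on a particular kind of case is your cue to nudge the related weight.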
Reflection Questions
- Which decision drains the most energy yet repeats often?
- What three variables are easy to observe and linked to outcomes?
- How will I review disagreements without blame?
- What language helps people see the tool as a helper?
Personalization Tips
- School: Use a short checklist to decide tutoring vs. discipline for tardiness based on patterns, not mood.
- Healthcare: A triage card with 4 vital signs and red‑flag symptoms routes scarce attention fairly.
Talking to Strangers: What We Should Know About the People We Don’t Know