The Design Principle We Build Into Our Tools, But Ignore in Ourselves
We taught machines to express doubt to make better decisions. Maybe we should do the same.
Your weather app says there’s a 60% chance of rain. You leave the umbrella at home because “it’s not going to rain.” Your coworker asks if you want to grab coffee after work. You say “definitely” even though you have no idea if you’ll be tired, if something will come up, or if you’ll even want coffee in six hours.
We designed machines to admit doubt. To express their uncertainty. Some add disclaimers: “AI can make mistakes, check for accuracy.” Others qualify their answers: “Based on my knowledge…” or “I’m not entirely certain, but…” And some just give you the numbers straight up: 73% confident, 0.85 probability, 60% chance of rain. These are strategies for handling doubt. Design patterns built for transparency, calibration, trust.
But we often ignore those disclaimers and pretend we know better.
The irony is that we built entire frameworks to help machines admit doubt in order to make better decisions. To prevent overconfidence. To force calibration. And it works: AI systems that express uncertainty are more reliable, more trustworthy over time. Yet we humans keep expressing absolute confidence in our answers and opinions most of the time. We end up soaking wet in the rain without our umbrellas.
I’ve been thinking about this gap. Between how our tools think and how we think. Between the honesty we designed into AI and the certainty we can’t seem to shake out of ourselves. We gave machines confidence scores because we don’t trust them. Maybe it’s the other way around: the most useful thing AI has taught us is how (and why) to express doubt.
We build honesty into our tools (but not ourselves)
Confidence scores are everywhere. Weather apps, spam filters, image recognition. We’ve normalized probabilistic thinking in our tools. We made tools show their work and admit their limits because we’re betting that transparency builds trust. Trust that builds over time, through repeated use, through seeing when the tool is right and when it’s wrong. The honesty is what makes the tool reliable.
Turns out someone’s been teaching this framework for years, but not in tech. I’ve been reading Annie Duke’s Thinking in Bets, and her whole point is that life is more like poker than chess. Hidden information, uncertainty, luck everywhere. She’s made me want to learn poker. Not for the gambling. For the decision intelligence that poker players have. They’ve accepted something the rest of us haven’t. That you don’t control the outcomes. Only the process.
We’ve built this into machines. When a model assigns a confidence score, it’s saying: “Based on everything I know, there’s an 80% chance I’m right.” And when the AI is wrong at 90% confidence, engineers can improve it. There’s data. A feedback loop. The model gets better.
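You can picture that feedback loop in a few lines of code. Here’s a minimal Python sketch, on synthetic data with hypothetical variable names: bucket predictions by the confidence the model claimed, then check how often each bucket was actually right.

```python
import numpy as np

# Synthetic stand-ins: one confidence score per prediction, plus
# whether each prediction turned out to be correct. A model whose
# accuracy runs below its claimed confidence is overconfident.
rng = np.random.default_rng(0)
confidence = rng.uniform(0.5, 1.0, size=10_000)
correct = rng.random(10_000) < confidence * 0.9

# Bucket by claimed confidence and compare against reality.
bins = np.linspace(0.5, 1.0, 6)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (confidence >= lo) & (confidence < hi)
    if mask.any():
        print(f"claimed {lo:.0%}-{hi:.0%} sure -> right {correct[mask].mean():.0%} of the time")
```

When the 90% bucket is only right 80% of the time, there’s a measurable gap and something concrete to fix.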
When you’re wrong while feeling certain, though, you learn nothing. Or worse, you rationalize. And most of us still think in binary, whether we’ll admit it or not. Yes/no. Right/wrong. I’m sure/I have no idea.
Duke calls this “resulting,” judging decisions by outcomes instead of process. “What makes a decision great is not that it has a great outcome. A great decision is the result of a good process, and that process must include an attempt to accurately represent our own state of knowledge. That state of knowledge, in turn, is some variation of ‘I’m not sure.’”
I’m not sure. When’s the last time you said that about something important?
If you’re like me, you probably just thought: “All the time. I’m terrified of being wrong.” But we’re full of it (false humility, strategic vagueness…pick your poison!). Deep down, we hold certainty about things. We just hide behind “I’m not sure” so we have cover if it doesn’t work out. It’s not genuine doubt. It’s (most of the time) social protection.
That’s different from what Duke means. She’s talking about actually calibrating your confidence to reality, not performing uncertainty to avoid judgment. This is the difference between being unsure and hedging.
The Self-Driving Car Problem (And Why I Might Be Wrong)
The paradox we’ve all heard before is that self-driving cars are statistically safer than human drivers. But one accident destroys trust. Meanwhile, we trust our own driving completely. Even though we’re provably worse.
Is it because the AI shows its uncertainty that we don’t trust it? A car that says “I’m 85% sure this is safe” feels terrifying. I would not get in that car. But I hop into Lyfts to the airport on the regular without batting an eye.
Sometimes, overconfidence helps us take action instead of getting paralyzed. Athletes need conviction. Entrepreneurs need belief. If you’re constantly hedging, you might never jump. Of course there’s something to that. Doubt can be paralyzing. And some decisions require commitment, not calibration.
What I’m coming back to, though, is that we’re uncomfortable with visible doubt even when it makes things safer. We want confidence even when confidence is a lie. A human driver who barrels through an intersection feels more trustworthy than an algorithm that admits a 5% chance of error? Even when the human is more likely to crash?
The thing that makes AI honest, admitting uncertainty, is the thing we resist in ourselves.
Your Future Self Is a Stranger (Maybe)
Research on temporal discounting tells us that people value immediate gains over future gains because we perceive our future selves differently from our present selves. Brain imaging studies show that when you think about yourself in ten years, your neural patterns look more like you’re thinking about a stranger than about you right now.
If your brain can’t tell the difference between Future You and some random person, why would you save money for them? Why would you make the hard choice today that only pays off later?
You’re making bets on behalf of someone you don’t even recognize as yourself.
“In most of our decisions, we are not betting against another person,” Duke writes. “Rather, we are betting against all the future versions of ourselves that we are not choosing.”
Every choice is a choice against other futures. And you’re making these choices while overconfident about now and miscalibrated about later. Full disclosure here: I’m not sure the research means what I think it means. Sometimes treating your future self differently might be good. It can motivate change. And focusing too much on Future You can make you miserable in the present, something I explored in On Becoming a List of Further Possibilities.
There’s another angle here. In his TED Talk, Dan Gilbert calls it “the end of history illusion.” We tend to think the bulk of our personal change is behind us. That right now, in this moment, we’re basically done becoming who we are. This is why 18-year-olds rush to get tattoos they’ll spend their 30s removing. “Why do we make decisions that our future selves so often regret?” Gilbert asks.
Because we see ourselves as finished products. We underestimate how much we’ll change and overinvest in choices based solely on how they make us feel now. We’re overly confident our current preferences are permanent, even though they provably aren’t.
We’re transient, fleeting, temporary, even though in this moment we feel like our truest selves. Treating your future self differently is a problem. Being too certain about who you are right now is a problem. Both are calibration problems! One underestimates continuity, the other overestimates it. But both lead to bad decisions about time. Either way, we’re bad at caring about later.
We make bad decisions because of our misconceptions about time
What Would Change? (And Would It Even Help?)
Machine learning engineers obsess over calibration. Does a 70% confidence score actually mean 70%? They test, adjust, test again.
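The “adjust” step often looks something like temperature scaling, one common recalibration trick: divide the model’s raw scores by a constant T so its confidence softens without changing its answer. A sketch in Python (T is normally fit on held-out data; the T = 2 and the scores below are just made up to show the effect):

```python
import numpy as np

def softmax(z):
    # Convert raw scores into probabilities that sum to 1.
    e = np.exp(z - z.max())
    return e / e.sum()

logits = np.array([4.0, 1.0, 0.5])   # hypothetical raw scores for three options
print(softmax(logits).max())          # ~0.93: the model's original confidence
print(softmax(logits / 2.0).max())    # ~0.72: same top answer, humbler claim
```

The answer doesn’t change. Only the honesty of the claim about it does.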
Most of us don’t calibrate ourselves. We don’t take the time to ask: “When I feel certain, how often am I actually right?” But some of the most successful poker players do.
Duke’s whole framework is about this. “Thinking in bets starts with recognizing that there are exactly two things that determine how our lives turn out: the quality of our decisions and luck. Learning to recognize the difference between the two is what thinking in bets is all about.”
So what if we tried? When you feel certain about something (career move, relationship decision, whether to take the leap) assign it a number. 70% sure this will work out for me. 80% sure this will work out for me.
Based on what?
Okay, so there’s an obvious problem with that. You can’t calculate those numbers for life decisions. You’re not running repeated trials for most of those things. You don’t marry the same person 100 times to see how often it works out. The numerical precision is false precision. We’re making up numbers here.
And for some people, the issue isn’t actually overconfidence, as I mentioned before. It’s anxiety. Overthinking. Under-confidence. This whole framework might actually make already indecisive people more indecisive.
But the number doesn’t matter as much as the act of admitting you’re guessing. Of recognizing that you don’t actually know, even if it feels like you do. We demand confidence for trivial things. Is this a cat? Is this email spam? But we make life-altering decisions with false certainty. The higher the stakes, the more we should think in probabilities. But we do the opposite.
Our goal should be to try
“The secret is to make peace with walking around in a world where we recognize that we are not sure and that’s okay,” Duke writes. “As we learn more about how our brains operate, we recognize that we don’t perceive the world objectively. But our goal should be to try.”
Uncertainty feels like weakness. Like you’re not in control. Saying “I’m 90% sure this is the right answer” sounds a bit absurd when you’re used to saying “I know this is right.”
But which is more honest?
The whole thing sounds exhausting, right? Constantly calibrating yourself. Assigning probabilities to everything. Some decisions are better made with a gut feeling. And poker isn’t life. Most of what matters can’t be reduced to odds and expected value. But we can learn to think better from how we’ve designed our tools to think. We’ve already solved the overconfidence problem for our machines; we just forgot to apply the fix to ourselves.
The machines learned it. The poker players learned it. Now it’s our turn.
What would you do differently if you admitted you were only 70% sure? What would you change if you knew Future You was still you, and that you’re not quite done becoming whoever that is yet?
This essay took 4 drafts. Thank you to my editor David.



