The Psychology of People

The Overconfidence Problem: Why Knowing More Can Make You Less Accurate

12:52 by The Observer
overconfidence bias, cognitive bias, decision making, expert judgment, psychology research, accuracy vs confidence, professional bias, debiasing, calibration, judgment error

Show Notes

Research reveals a troubling pattern among experts: more information increases confidence but not accuracy. In one striking study, giving subjects more information about a case raised their confidence from 33% to 53%, while their accuracy stayed below 30%. This episode explores overconfidence bias—the most common cognitive bias affecting professionals across medicine, law, finance, and management—and asks why more knowledge can paradoxically lead us further from the truth.

Why Your Most Confident Judgments Are Often Your Least Accurate

Research shows that more information makes experts feel more certain—without making them more right. Here's what that means for how you evaluate judgment.

You're sitting across from someone who spent twenty years mastering their field. Their credentials fill a wall. Their voice carries the unmistakable weight of someone who has seen it all. And something in that certainty feels reassuring—like you're in capable hands.

But what if the very quality that makes an expert sound authoritative has almost nothing to do with whether they're actually correct?

The Confidence Gap Nobody Talks About

Here's a finding that deserves more attention than it gets. Researchers gave subjects information about a complex case and asked them to make a judgment. Their confidence sat at thirty-three percent—appropriately humble.

Then the researchers handed them more data. More background. More details to work with. That confidence jumped to fifty-three percent. The subjects felt dramatically more certain about their conclusions.

But their accuracy never budged. It stayed below thirty percent.

Let that sink in. More information made people feel more right while leaving them just as wrong. And this isn't a quirk of one study—it's one of the most robust patterns in the entire psychology of judgment, replicated hundreds of times since Fischhoff, Slovic, and Lichtenstein established the overconfidence effect in 1977.

What makes this particularly unsettling: the effect is strongest precisely when confidence is highest. When you feel most certain, that's often when the gap between your conviction and reality is widest.

The Bias That Cuts Across Every Profession

A major systematic review examined cognitive biases across four professional domains: management, finance, medicine, and law. The researchers wanted to know which thinking errors appeared most consistently across these very different fields.

The answer surprised them. Not confirmation bias. Not anchoring. Not availability. Overconfidence emerged as the most recurrent bias affecting professional decisions across all four domains.

Among financial advisors, the pattern shows up in excessive trading and overestimated market-timing abilities. Among CEOs, it shapes strategic decisions—acquisitions, expansions, market entries—that affect thousands of employees and shareholders. Among doctors, it influences diagnoses that determine treatment paths.

The reason overconfidence persists is that it masquerades as competence. We're trained from childhood to project certainty. Uncertainty reads as weakness. Confidence reads as expertise. A doctor who says "I'm fairly confident this is what's happening" sounds less capable than one who says "Here's exactly what's wrong"—even when their actual accuracy is identical.

This creates a troubling incentive structure. Professionals who appropriately express uncertainty may be seen as less capable than those who project unwarranted confidence. The reward goes to the wrong people.

Why More Information Makes Things Worse

The study where confidence rose while accuracy flatlined reveals something important about how information affects our minds. More data gives us more material to build narratives. We find patterns. We construct explanations. Each new piece of information feels like it adds to our understanding.

And in a sense, it does add to our understanding. But additional information doesn't automatically improve prediction accuracy. Sometimes it just gives us more material to construct a wrong answer with greater conviction.

This becomes especially dangerous in fields where feedback is delayed or ambiguous. A surgeon gets immediate feedback—the patient recovers or doesn't. But a psychiatrist making a diagnosis might wait years to know if they were right. Financial advisors may never know if their advice was truly optimal—just whether the market happened to move in a favorable direction.

Without clear feedback, overconfidence grows unchecked. The confident feeling persists. The correction never arrives.

Why We Can't Just Think Our Way Out

Here's where the research gets humbling. Merely teaching people that biases exist appears insufficient to reduce them. You can't lecture your way out of overconfidence. Awareness alone doesn't protect you.

What does work requires specific conditions: abundant practice, clear outcome measures, task-specific feedback, and—perhaps most importantly—genuine admission that learning is needed. A 2025 study in Nature demonstrated that even a single debiasing training session could reduce confirmation bias in professional risk analysts. The effect was measurable.

But there's a catch. People who believe themselves less biased than others—a phenomenon called the bias blind spot—are actually more resistant to debiasing training. The people most convinced of their own objectivity may be the hardest to help. Their confidence in their unbiased judgment becomes itself a barrier to improvement.

Some researchers argue that structural interventions may be more effective than individual training. Rather than trying to fix biased minds, design systems that compensate for predictable human errors. Require probability estimates rather than certainty statements. Mandate second opinions. Create formal dissent procedures. These changes can reduce overconfidence's impact even when individual bias remains.

What This Means for Evaluating Judgment

The first insight is counterintuitive: be especially skeptical of your own judgment when you feel most certain. That feeling of absolute certainty is precisely when the gap between confidence and accuracy tends to be widest. High confidence is a warning sign, not a green light.

When evaluating professionals, look for calibration rather than confidence. Does this person appropriately express uncertainty? Or do they present every statement with equal conviction? A well-calibrated expert will distinguish between what they know confidently, what they're uncertain about, and what lies beyond their expertise. Beware the professional who seems equally certain about everything.

And for your own decisions: keep a decision journal. Write down your predictions and your confidence level. Then go back and check. What did you say would happen? What actually happened? Most people who do this discover uncomfortable gaps between how sure they felt and how often they were right.
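If you want to make that check concrete, here is a minimal sketch in Python. The journal entries are invented for illustration, and the two measures shown, a bucket-by-bucket comparison and the Brier score, are standard calibration tools rather than anything prescribed by the studies discussed above.

    from collections import defaultdict

    # Hypothetical decision journal: (stated confidence, whether the
    # prediction came true). These entries are illustrative only.
    journal = [
        (0.9, True), (0.9, False), (0.9, False),
        (0.7, True), (0.7, False), (0.7, True),
        (0.5, True), (0.5, False),
    ]

    # Group entries by stated confidence and compare each level
    # with the hit rate actually observed at that level.
    buckets = defaultdict(list)
    for confidence, outcome in journal:
        buckets[confidence].append(outcome)

    for confidence in sorted(buckets):
        outcomes = buckets[confidence]
        hit_rate = sum(outcomes) / len(outcomes)
        print(f"said {confidence:.0%} sure -> right {hit_rate:.0%} "
              f"of the time ({len(outcomes)} predictions)")

    # Brier score: mean squared gap between confidence and reality.
    # 0.0 is perfect; always answering 50% scores 0.25.
    brier = sum((c - o) ** 2 for c, o in journal) / len(journal)
    print(f"Brier score: {brier:.3f}")

If the entries logged at 90% confidence turn out right only a third of the time, the gap between those two numbers is your personal version of the confidence-accuracy gap described earlier.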

There's a question researchers have found particularly effective at puncturing overconfidence: "What would have to be true for me to be wrong?" This forces you to generate counterarguments, breaking the natural tendency to consider only evidence that supports your existing conclusion.

The goal probably isn't eliminating overconfidence entirely—that might not even be desirable. The goal is developing meta-awareness: knowing when to distrust your own certainty. You can still decide. You can still act. You can still hold beliefs. But you can hold those beliefs more loosely, building in space for the possibility that you're wrong.

The most accurate experts aren't the most confident ones. They're the ones who've learned to notice when their confidence exceeds their evidence—and adjust accordingly.
