You've seen the videos. A celebrity saying something career-ending. A CEO announcing a merger that sends stocks tumbling. Your grandmother on a video call—except something's off about her smile.
For more than a century, video evidence was the gold standard. Courts staked verdicts on it. Journalists built careers around it. And ordinary people trusted their own eyes above all else. That era is collapsing. And what's replacing it might be worse than the fakes themselves.
The 1,500% Explosion Nobody's Ready For
In 2023, researchers counted roughly 500,000 deepfakes circulating online. A concerning number, sure. But it was just the opening act.
By 2025, that figure had detonated to 8 million. That's not gradual growth; that's a 1,500% jump in two years, with the count roughly doubling every six months. The tools that once required Hollywood budgets and PhD expertise are now free apps. Consumer products. One-click downloads.
OpenAI's Sora video generator was burning through $15 million daily in computing costs before they pulled the plug in March 2026. But shutting down one tool doesn't cork the bottle. A dozen more are being uncorked as you read this.
And here's what keeps security researchers staring at their ceilings at 3 AM: the real threat isn't the fakes. It's what happens when everyone knows fakes exist.
The Liar's Dividend: Weaponized Doubt
There's a concept called the "liar's dividend." Sounds like academic jargon, but the idea is devastatingly simple.
Once people know convincing fakes exist, they start doubting everything—including real evidence. Especially when that evidence is inconvenient.
Politicians have figured this out. So have criminals. And increasingly, so has anyone caught on camera doing something they shouldn't.
The Brennan Center for Justice reports that public figures can now dismiss authentic recordings as AI-generated—and a growing percentage of the public will believe them. Poynter puts it bluntly: politicians and even police have learned they can deny, invoke AI, and move on. Whether the claim holds up almost doesn't matter.
Think about the mechanics here. A politician gets caught on video taking a bribe. They call it a deepfake. By the time forensic analysis proves otherwise—if it ever does—the news cycle has moved on. The denial is what sticks.
Video evidence used to settle arguments. Now it starts them.
Real Chaos, Real Money, Real Victims
This isn't theoretical doom-scrolling. In India, the Bombay Stock Exchange had to issue an emergency warning after a deepfake video of its CEO surfaced, hawking fake stock tips. Real money moved. Real investors made decisions based on words that person never spoke.
But here's the twist: after that incident, how does the real CEO prove anything? How does any CEO?
Fortune magazine declared 2026 "the year you get fooled by a deepfake." Not might. Will. Their researchers specifically flagged voice cloning as the tipping point—it's crossed what they call the "indistinguishable threshold." Your mom's voice on the phone? It could be synthesized from a three-second sample.
Arizona's Attorney General issued a warning in February 2026 about AI deepfake video calls being used in romance scams. Not just audio—full video. People are falling in love over video calls, seeing faces, hearing laughter, building relationships. None of it real. Real victims. Real financial losses. Real heartbreak.
The Arms Race We're Losing
Some researchers argue we need "provenance-based trust"—documenting where every video came from, every step of the way. Chain of custody. Cryptographic verification. Expert testimony on metadata.
Sounds exhausting because it is. Most of the internet isn't built for it. Neither are our institutions.
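To see how small the core mechanism actually is, here's a minimal sketch in Python, assuming the open-source `cryptography` package (pip install cryptography). Real provenance standards such as C2PA embed signed manifests inside the media file and chain them through every edit; this toy version only signs and checks a hash, and every name in it is illustrative.

```python
# A minimal sketch of provenance signing, assuming the third-party
# `cryptography` package. Idea: hash the footage at capture, sign the
# digest with a key the camera (or newsroom) controls, verify later.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

def sign_footage(data: bytes, key: ed25519.Ed25519PrivateKey) -> bytes:
    """Sign a SHA-256 digest of the raw bytes at the moment of capture."""
    return key.sign(hashlib.sha256(data).digest())

def verify_footage(data: bytes, signature: bytes,
                   pub: ed25519.Ed25519PublicKey) -> bool:
    """True only if the bytes are bit-for-bit what was originally signed."""
    try:
        pub.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

# Demo with placeholder bytes; in practice you'd read the video file,
# e.g. data = open("clip.mp4", "rb").read().
key = ed25519.Ed25519PrivateKey.generate()
data = b"placeholder standing in for raw video bytes"
sig = sign_footage(data, key)
print(verify_footage(data, sig, key.public_key()))                # True
print(verify_footage(data + b"tampered", sig, key.public_key()))  # False
```

Notice what this doesn't solve: a valid signature proves the bytes haven't changed since signing, not that the scene in front of the camera was real. Provenance narrows the question; it doesn't answer it.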
Poynter reports that even with significant investment, deepfake detection systems are stuck in a constant arms race—and currently losing. Every time detection improves, generation improves faster. The fakers only need to win once. Detectors need to win every time.
And here's the darker punchline: even perfect detection doesn't solve the liar's dividend. The damage happens at the moment of denial, not the moment of verification.
What Actually Helps (For Now)
So what do we do? A few approaches that experts suggest:
Adopt "trust but verify" as your default. Video from a source you trust? Great. But seek corroborating evidence before sharing or acting on it. One source isn't enough anymore.
Learn the tells that still exist. Unnatural blinking patterns. Audio that doesn't sync with lip movements. Hair that moves too smoothly. Lighting inconsistencies between faces and backgrounds. These gaps are closing, but they're not gone.
Demand specifics when someone cries fake. "It's AI" isn't an explanation—it's an assertion. Who did the forensic analysis? When? What methodology? Real debunking leaves evidence. Convenient denials usually don't.
Trace the original source. Where did the video first appear? Who captured it? Has it been independently verified?
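If you want to go one layer deeper than a reverse image search, container metadata is a cheap first check. Below is a rough sketch in Python, assuming ffprobe (shipped with FFmpeg) is installed and on your PATH; the file name is a hypothetical stand-in. Re-encoded uploads routinely strip this data, so treat a missing creation time or an odd encoder tag as a reason to dig further, never as proof in either direction.

```python
# A rough first-pass check of container metadata, assuming ffprobe
# from the FFmpeg project is installed. Metadata can be stripped or
# forged, so this raises questions; it never settles them.
import json
import subprocess

def probe_metadata(path: str) -> dict:
    """Dump format and stream metadata (encoder, creation time, codecs)."""
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_format", "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return json.loads(out)

info = probe_metadata("suspect_clip.mp4")  # hypothetical file name
print(info["format"].get("tags", {}))      # e.g. creation_time, encoder
for stream in info["streams"]:
    print(stream["codec_type"], stream.get("codec_name"))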
This is exhausting work. It shouldn't fall on ordinary people to authenticate every piece of media they encounter. But here we are.
The Question We Can't Avoid
The Brennan Center's research suggests that shrinking the liar's dividend requires public education campaigns that match the scale of the problem—billions in investment, not millions.
Because technology won't save us from this. Detection will keep improving. So will generation. What might actually save us is something older and less glamorous: critical thinking at scale. A population that knows how to evaluate evidence. That demands proof, not assertions.
For most of human history, eyewitness accounts were the best evidence we had. Then photography. Then video. Each advance seemed to bring us closer to objective truth.
Now we're watching that progress reverse. The camera that was supposed to never lie can now lie better than any human ever could.
So the next time someone shows you a video and asks "can you believe this?"—consider that the question might be more literal than they intended. The age of "seeing is believing" is ending. What we build in its place is the question that should keep all of us up at night.