You're watching a video of a politician saying something outrageous. A small label appears in the corner: "This video may be AI-generated." You read it. You understand it. And then you believe the video anyway.
That's not speculation. That's what three preregistered experiments published in Communications Psychology in 2025 actually found. Participants who were explicitly warned—before watching—that a video was fabricated still let that video shift their moral judgments about the person depicted. They knew it was fake. They understood it was fake. And their perceptions changed as if knowing made no difference at all.
Your Eyes Are Older Than Your Words
To understand why this happens, consider how differently your brain processes visual information versus text. When you read a warning label, your brain treats it as a claim—something to evaluate, accept, or reject through reasoning. But when you see a face move, lips form words, eyes convey emotion? That bypasses evaluation entirely.
Vision is ancient. Your visual system is the product of hundreds of millions of years of evolution, tuned to faces, movement, and threats. Reading is maybe five thousand years old. So when a warning label says "fake" but your eyes say "real," the eyes usually win. Not because you're gullible, but because of how your nervous system is wired.
This is why courtroom lawyers fight so hard to show juries video evidence. Why advertisers pay millions for thirty seconds of moving images. Why propaganda has always privileged pictures over pamphlets. We believe our eyes in ways we simply don't believe words.
And now AI can generate what our eyes see.
The Experiment That Should Worry You
In the 2025 study, researchers ran three preregistered experiments testing whether transparency warnings could protect people from deepfake influence. Half the participants were warned beforehand that the videos were AI-generated. Half weren't. Then everyone made moral judgments about the person depicted—how trustworthy they seemed, how blameworthy, how much they deserved consequences.
Both groups—warned and unwarned—shifted their judgments based on what they saw. The warning made almost no difference. This held across two completely different deepfakes: one political, one non-political; one showing a serious crime, another showing a trivial moral slip-up.
The researchers' conclusion was stark: "Transparency is not enough to entirely negate the influence of AI-generated content."
Here's something that complicates matters further: a separate study in the Journal of Politics found that deepfake videos had a deception rate of 42 percent. But audio-only versions of the same fake information? 44 percent believed it. Text-only? Also 42 percent. Statistically, no difference in raw believability. But deepfakes may be more emotionally impactful, more memorable, harder to shake once you've seen them.
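To see why a two-point gap like that washes out, here is a minimal sketch of a two-proportion z-test in Python. The sample size of 500 participants per condition is a hypothetical assumption chosen for illustration, not a figure reported in the study.

```python
# Back-of-the-envelope two-proportion z-test: is a 44% vs. 42%
# deception rate a real difference, or noise?
# ASSUMPTION: n = 500 per condition is hypothetical, chosen for
# illustration; it is not a figure reported in the study.
from math import sqrt

def two_proportion_z(p1: float, p2: float, n1: int, n2: int) -> float:
    """Z statistic for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)              # pooled rate under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se

z = two_proportion_z(0.44, 0.42, 500, 500)
print(f"z = {z:.2f}")  # ~0.64, far below the ~1.96 needed for p < .05
```

Even doubling the hypothetical groups to 1,000 participants each only pushes z to about 0.9, still nowhere near significance. Whatever advantage video has, it isn't raw believability.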
The Timing Problem
Psychologists call this the continued influence effect. Once information gets encoded, especially vivid, emotional, visual information, it doesn't simply disappear when you learn it's false. It leaves a residue.
Think about a movie scene that scared you. Even knowing it was actors, special effects, a script—the fear was real. Your body reacted. That's because visual processing and emotional response happen faster than rational evaluation. By the time your conscious mind catches up, the reaction has already occurred.
Deepfakes exploit this timing gap. The image hits your emotional centers before your analytical mind can apply the warning. And once you've felt something about a person, that feeling becomes part of how you remember them—even if you later reason your way to a different conclusion.
There's another layer: motivated reasoning. Research shows that individuals are more likely to doubt a scandal's credibility if it reflects poorly on their own political party. You're most vulnerable to deepfakes that confirm what you already believe about the other side. And the uncomfortable part is that motivated reasoning doesn't feel like bias. It feels like critical thinking.
What Actually Works
If transparency labels fail, what succeeds? A 2025 study published in a SAGE journal found that exposure to journalistic fact-checks substantially reduced, and sometimes eliminated, the harmful effects of political deepfakes. Not labels. Not warnings. Active fact-checking.
The difference matters. A label is passive. You see it and move on. A fact-check engages your analytical mind—the part that can override visual intuition.
Another approach showing promise: inoculation. Researchers found that both text-based information and interactive games improved people's ability to spot AI-generated political media. The idea is to expose people to manipulation techniques before they encounter the real thing. It's like showing someone how a magic trick works: once you see the mechanism, the illusion loses its grip, even when the performance is flawless.
The key insight is that passive warnings assume viewers will do the cognitive work themselves. But they usually don't. The work has to be done before the moment of exposure.
The Harder Path
So what can you actually do? Don't rely on labels alone—they're insufficient. When you encounter emotionally charged video content, especially content that confirms your existing beliefs, that's exactly when to slow down. That's the dangerous zone.
Seek out journalistic fact-checks rather than trusting platform labels. Learn about manipulation techniques proactively—there are interactive games and courses designed specifically for this purpose.
And perhaps most importantly: recognize that you're most vulnerable to fakes that confirm what you already believe. That outrageous video of the politician you despise? That's exactly the one to approach with the most skepticism—not because it's definitely fake, but because you want it to be real.
In a world where anyone can make you see anything, the only defense that actually works is taking control of what you choose to believe. Your eyes will believe what they see. Your emotions will respond. That's not a flaw you can will away. The question is what you do next—whether you pause, verify, and ask yourself the uncomfortable question: Do I want this to be true?