Internet Mythbusters

The Six-Finger Check Is Dead: AI Has Learned to Count

10:11 by The Investigator
AI detection, deepfake, six fingers, AI-generated images, misinformation, fact-checking, digital literacy, Midjourney, DALL-E, image verification

Show Notes

For years, counting fingers was the internet's go-to method for spotting AI-generated images—extra digits were a telltale sign of artificial creation. But as AI image generators have rapidly improved, this once-reliable detection method has become obsolete, leaving viewers without their most trusted tool for separating real from fake.

Your go-to deepfake detection trick stopped working in 2026—and most people haven't caught on yet.

You've done it. I've done it. We've all done it. Spotted a suspicious photo, zoomed in on the hands, and started counting. One, two, three, four, five—okay, maybe real. Six? Busted. Definitely AI.

For years, this was the internet's great equalizer. No fancy software required. No forensic training needed. Just your eyes, basic arithmetic, and the satisfying confidence of catching a fake in the wild. Gen Z turned it into gospel, TikTok spread it like wildfire, and suddenly everyone from your tech-savvy cousin to your skeptical uncle knew the secret: AI can't count to five.

Except now it can. And millions of people haven't gotten the memo.

Why AI Couldn't Count (And Why It Matters That It Learned)

Here's what was actually happening under the hood. When AI models train on millions of images, hands present a uniquely brutal challenge. They're usually tiny—just a fraction of any given photo. They appear at every conceivable angle. They're constantly obscured by objects, blurred by motion, or folded in ways that make finger-counting impossible even for humans.

The technical term is "relational reasoning." The AI has to understand not just what a finger looks like in isolation, but how fingers connect to palms, how they bend at joints, how shadows wrap around knuckles. For early models, this was simply too much. So they improvised—and sometimes invented extra digits.

The errors became so predictable they turned into a meme. Six-fingered pianists. Seven-fingered models waving at the camera. The finger count became internet common knowledge, spreading across every platform where people shared and questioned photos.

But here's the thing about publicly weaponizing a specific AI flaw: you've just handed developers a very clear target. Midjourney, DALL-E, Stable Diffusion—every major player started throwing engineering resources at hand generation. By 2023, the Washington Post was already reporting that the gap was closing fast. By 2026? It's closed.

When the Detection Method Backfires

This is where the story gets darker. The six-finger check didn't just become unreliable—it started causing a new kind of damage.

In January 2026, a video of Israeli Prime Minister Benjamin Netanyahu went viral. Thousands of people were absolutely convinced it was AI-generated. Why? They thought they spotted six fingers. "Look at the hands!" the posts declared. "Count the fingers!" And people did—smugly certain they'd caught a deepfake in action.

Snopes investigated. The video was real. Netanyahu's hands were perfectly normal. The "six fingers" were nothing more than motion blur and compression artifacts creating an optical illusion.

The detection method had backfired spectacularly. People were so primed to spot AI that they saw it where it didn't exist. Real content was being dismissed as fake—and that's a form of misinformation too. It undermines trust in legitimate evidence. It hands bad actors a ready-made playbook for dismissing inconvenient truths.

The Deepfakes That Fooled Everyone

Meanwhile, actual AI-generated content was quietly getting better. In March 2026, Senate Republicans released a deepfake video of Democratic candidate James Talarico making statements he never made.

And here's the disturbing part: there were no obvious hand anomalies. No six fingers. No melted digits. Just a convincing fake that fooled viewers specifically because they were still relying on outdated detection methods. Count the fingers—looks fine. Must be real.

It wasn't.

Carnegie Mellon researchers have been sounding the alarm: six-fingered hands and asymmetrical eyes are no longer reliable red flags as of 2026. The detection landscape has fundamentally shifted. What worked three years ago is obsolete now. And the gap between AI capabilities and public awareness keeps widening.

What Actually Works Now

So where does that leave us? What can you actually do to verify images when the easy tricks no longer apply?

First—and this is critical—stop relying on any single visual check. The finger count was never a complete solution. It was always just one signal among many.

Look at multiple elements together. Check lighting consistency: does the shadow direction match across the whole image? Are reflections in eyes consistent with the claimed light sources? Examine background coherence—AI often struggles with scene continuity, producing furniture that doesn't quite connect to floors, patterns that shift illogically, architecture that defies physics.

But honestly? Visual inspection alone isn't enough anymore. Use reverse image search through TinEye, Google Images, or Bing. If an image is supposedly newsworthy, legitimate outlets should have sourced versions you can trace back to original photographers.

Check fact-checking sites before sharing. Snopes, PolitiFact, AFP Fact Check—these organizations have professional verification pipelines that go far beyond what your eyes can catch.

And here's something counterintuitive: be skeptical of your own skepticism. The Netanyahu incident proves that calling real content fake is just as damaging as believing fake content is real.

The Arms Race Never Ends

This is how misinformation evolves—not just in the creation of fake content, but in the obsolescence of our defenses. Every detection method eventually has a counter. The six-finger check got popular. Developers noticed. Now it's a solved problem for most major AI generators.

So what's the next detection method? What flaw will people discover and publicize, only to watch it get fixed in the next model update? That's the trap. Any publicly known detection method becomes a target. The moment you teach millions of people to look for a specific flaw, you've drawn developers a roadmap.

The uncomfortable truth: there may not be a simple replacement for the six-finger check. Detection is becoming a job for specialized tools and trained analysts, not casual observation. For most of us, that means shifting our approach from "can I detect this myself?" to "can I verify this through trusted sources?"

It's less satisfying. But it's more reliable.

The finger count was a beautiful thing while it lasted—simple, democratic, available to everyone. A five-second check that could reveal a forgery. But that era is over. AI learned to count, and now we have to learn something harder: how to verify truth in a world where seeing is no longer believing.

The six-finger check is dead. But your skepticism doesn't have to be. It just needs better tools.

Download MP3