When Did Eye Scans Become the Solution to Everything?


Sam Altman's Tools for Humanity wants to scan your eyeballs to prove you're human enough to buy concert tickets. The company's new Concert Kit feature uses World ID — its iris-scanning verification system — to combat ticket scalping, and it's expanding integrations to platforms including Tinder. Meanwhile, Anthropic has quietly begun requiring identity verification for Claude users "for a few use cases" it hasn't bothered to specify.

Something curious is happening in tech: biometric verification is being positioned as the obvious solution to an increasingly wide array of problems. Ticket scalping? Eye scans. Bot accounts? Eye scans. AI-generated content concerns? Apparently, eye scans too.

The framing is always similar: we're told these systems protect us from fraud, bots, or bad actors. The companies deploying them emphasize convenience and security. What's rarely discussed is how quickly we're normalizing extraordinarily invasive surveillance infrastructure for relatively mundane purposes.

Consider the progression. Tools for Humanity started with the premise that we need to verify humans in an AI-saturated world — a reasonable concern. But the jump from "proving you're human" to "proving you're human to buy Taylor Swift tickets" reveals how quickly mission creep sets in. Today it's concert access. Tomorrow it might be job applications, housing rentals, or any transaction where someone decides verification would be "safer."

The pattern is particularly troubling because it's happening across multiple companies simultaneously. Anthropic's vague identity verification requirements, arriving the same week as Tools for Humanity's concert announcement, suggest this isn't coordinated — it's convergent evolution. Different companies are independently deciding that biometric verification solves their particular problem, without much public debate about whether we want to live in a world where our biological data becomes the universal key.

The technical argument for these systems is straightforward: biometrics are harder to fake than passwords or even two-factor authentication. The social argument is more complex. We're being asked to trade permanent biological identifiers for temporary conveniences. Your iris scan doesn't change; your need to verify your humanity for a concert ticket is fleeting. But once that scan enters a database, it persists.

What's striking is how little resistance these rollouts are encountering. Perhaps it's because we've already surrendered so much: our faces unlock our phones, our fingerprints authorize payments, our voices command our homes. Each concession makes the next feel incremental rather than alarming.

But there's a meaningful difference between biometric authentication (using your fingerprint to unlock your own device) and biometric identification (scanning your iris to prove your identity to a third party database). The former is a personal security choice. The latter is participating in a surveillance system.

The companies building these tools aren't necessarily malicious. They're responding to real problems — bots do manipulate ticket sales, AI does make identity verification harder, fraud does exist. But they're also building infrastructure that, once normalized, becomes extremely difficult to roll back or refuse.

We should be having much louder conversations about whether scanning human eyeballs is really the best we can do. About what happens to that data. About who controls it. About whether the convenience of skipping a CAPTCHA is worth the trade-off. And about what it means when opting out becomes practically impossible.

Because once eye scans become the default answer to every verification challenge, we won't be deciding whether to use them. We'll be deciding whether to participate in society at all.