Why Is Everyone Suddenly Terrified of Their Own Technology?
Something strange is happening in the tech industry. This week alone, we've seen 70 civil rights organizations warn Meta about facial recognition in smart glasses, countries scrambling to ban social media for minors despite evidence such bans are easily bypassed, and a man allegedly throwing a Molotov cocktail at Sam Altman's house. The common thread? A growing sense that technology companies have built things they can't—or won't—control.
The Meta smart glasses situation is particularly telling. The technology exists. The hardware works. But dozens of organizations felt compelled to write a letter warning that it could "empower predators." This isn't hypothetical scaremongering—it's a recognition that once you put always-on facial recognition into consumer glasses, the genie doesn't go back in the bottle. Meta will likely proceed anyway, because the competitive pressure to ship outweighs the societal pressure to pause.
Meanwhile, the report on Australian children bypassing social media bans reveals the futility of trying to regulate technology after it's already woven into the fabric of daily life. Countries are passing laws to keep kids off platforms that are already deeply embedded in youth culture. It's legislative theater performed for anxious parents while the platforms continue operating exactly as designed.
Then there's the almost satirical detail that Meta is reportedly building an AI clone of Mark Zuckerberg to interact with employees. If your CEO needs a digital proxy to field questions about company strategy, perhaps the problem isn't employee access—it's organizational scale. But instead of reconsidering structure, we're automating leadership itself.
What ties these stories together is a profound lack of trust. Civil society doesn't trust Meta to deploy facial recognition responsibly. Governments don't trust social media platforms around minors. Someone, allegedly, distrusted Sam Altman enough to attack his house. And Meta employees, apparently, can't get enough time with their actual CEO.
The tech industry has spent two decades moving fast and breaking things. Now we're in the aftermath, where the things that got broken include public trust, regulatory frameworks, and any shared understanding of acceptable risk. Companies continue shipping features that provoke immediate backlash, not because they're unaware of the concerns, but because the incentive structure rewards deployment over deliberation.
The robotics and AI sector should be paying attention. As foundation models get embedded in physical systems—from robotic guide dogs to autonomous vehicles—the same pattern is emerging. Deploy first, address concerns later, hope regulation arrives slowly. Tesla's Full Self-Driving approval in the Netherlands came with emphatic reminders that drivers remain responsible, a legal fig leaf over the reality that we're testing semi-autonomous systems on public roads.
The question isn't whether these technologies will continue advancing. They will. The question is whether the gap between what we can build and what we can responsibly deploy will keep widening until something breaks that can't be patched with an update.
Right now, that gap looks more like a chasm.