What Happens When the Computers Say No?

Somewhere right now, perfectly good food is sitting in a warehouse, going bad. Not because there aren't trucks to move it. Not because there aren't stores that need it. But because a computer said no.
The recent report on food waste in automated supply chains should be a wake-up call for anyone championing the efficiency gains of AI-driven logistics. When digital systems fail or get compromised by cyberattacks, food that could feed people simply cannot move through the distribution network. The humans know it needs to go. The trucks are ready. But the algorithm won't approve it, and increasingly, there's no manual override that matters.
This isn't an isolated problem. It's a pattern we're seeing across multiple industries as automation deepens its grip on critical infrastructure. When FedEx announces it's partnering with Berkshire Grey for warehouse robots rather than building proprietary systems, it's adding another layer of computational dependency. When DoorDash teams up with Rivian's spinoff Also for autonomous delivery vehicles, it's betting the future of food delivery on software that must work perfectly, every time.
The efficiency gains are real. Robots don't call in sick. AI doesn't take lunch breaks. Automated systems can optimize routes and inventory in ways human planners never could. But we're trading resilience for optimization, and we're doing it faster than we're building backup systems.
Consider the difference between old and new failure modes. When a human warehouse worker made a mistake twenty years ago, another human could spot it and fix it. When a truck driver encountered an unexpected road closure, they could reroute on the fly. These systems had slack, redundancy, and, most importantly, human judgment distributed throughout.
Today's automated supply chains are breathtakingly efficient right up until something goes wrong. Then they're catastrophically brittle. A cyberattack doesn't just slow down food distribution—it stops it entirely. There's no warehouse manager who can say "override the system, ship it anyway." The computer is the manager now, and when it says no, thousands of pounds of food rot while people go hungry.
The Carnegie Mellon study on automation and blind users offers an important insight here. Researchers found that people don't want maximum automation—they want the ability to shift fluidly between autonomous assistance and manual control based on real-world conditions. That's exactly what we're losing in supply chain automation.
We need to start designing automated systems with graceful degradation in mind. Not just redundant servers and backup power, but actual pathways for human intervention when the algorithms fail. Manual overrides that aren't just theoretical but practical. Supply chains that can operate, even inefficiently, when the computers say no.
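What would that look like in code? Here's a minimal sketch in Python, under stated assumptions: every name in it (`Shipment`, `automated_check`, `manual_review`, the approval policy) is hypothetical, invented for illustration, and the logic is deliberately crude. The point is the shape of the control flow, which also captures the fluid handoff the Carnegie Mellon research describes: when the automated path fails, the default outcome is escalation to a human, not a silent "no."

```python
from dataclasses import dataclass
from enum import Enum, auto


class Decision(Enum):
    APPROVED = auto()
    REJECTED = auto()
    NEEDS_HUMAN = auto()


@dataclass
class Shipment:
    shipment_id: str
    perishable: bool
    destination: str


def automated_check(shipment: Shipment) -> Decision:
    """Hypothetical stand-in for the optimizer's approval logic.

    In a degraded network, this call may raise (timeout, connection
    error, corrupted state) instead of returning a decision.
    """
    raise ConnectionError("inventory service unreachable")


def approve_shipment(shipment: Shipment) -> Decision:
    """Fail open to human judgment, not closed to 'no'."""
    try:
        return automated_check(shipment)
    except Exception:
        # Graceful degradation: the system's failure becomes a
        # routed human decision, not an implicit rejection.
        return Decision.NEEDS_HUMAN


def manual_review(shipment: Shipment) -> Decision:
    """Hypothetical stand-in for a dispatcher's judgment call.

    A real override path would also record who approved what, and
    why, so degraded-mode decisions stay auditable.
    """
    # Perishables default to moving: rotting food is the worse failure.
    return Decision.APPROVED if shipment.perishable else Decision.NEEDS_HUMAN


if __name__ == "__main__":
    pallet = Shipment("PAL-1042", perishable=True, destination="regional food bank")
    decision = approve_shipment(pallet)
    if decision is Decision.NEEDS_HUMAN:
        decision = manual_review(pallet)
    print(pallet.shipment_id, decision.name)
```

The load-bearing choice is in `approve_shipment`: an exception in the optimizer becomes `NEEDS_HUMAN` rather than `REJECTED`. That one default is the difference between a supply chain that degrades and one that halts.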
Because here's the thing: the computers will say no. Systems will fail. Networks will get attacked. And when they do, we need more than efficiency. We need food that can still reach people, packages that can still move, and critical infrastructure that doesn't grind to a complete halt because an algorithm encountered an edge case it wasn't trained on.
The question isn't whether to automate. That ship has sailed. The question is whether we're willing to sacrifice some of that optimization to build systems that can actually survive their own failures.