Banks and Workplaces Are Handing Over Decisions You Didn't Know You Were Making

Creative Robotics

There's a peculiar dissonance in how we discuss automation. We scrutinize every Tesla robotaxi intervention, debate the ethics of delivery robots in bike lanes, and worry about factory jobs disappearing to industrial arms. Meanwhile, AI agents are quietly assuming control over some of our most consequential daily interactions—our money, our work patterns, our professional communications—and hardly anyone seems to notice.

Gradient Labs just deployed AI account managers to handle banking support workflows. Slack's new Slackbot can now analyze how you work, automate tasks, and integrate with CRM systems. These aren't speculative futures or pilot programs. They're live, they're processing real transactions and real workplace data, and they're making decisions that directly affect people's financial lives and professional evaluations.

The contrast with physical automation is striking. When Tesla admitted its robotaxis sometimes require human remote operators, it made headlines and sparked debate about transparency in autonomous systems. But when your bank assigns you an AI account manager powered by multiple GPT variants optimized for "low-latency, reliable automation," there's no equivalent scrutiny. No one's asking whether you consented to having an AI analyze your spending patterns and make support decisions on your behalf.

This isn't an argument against these tools. The efficiency gains are real, and many users will genuinely benefit. The problem is the asymmetry of awareness. We've built robust public discourse around physical robots—who's liable when a delivery bot blocks a sidewalk, whether remote operators in autonomous vehicles constitute false advertising, how factory automation affects employment. But cognitive automation in white-collar contexts happens in the background, embedded in services we already use, often without meaningful disclosure or opt-out mechanisms.

Consider what Slack's new capabilities actually mean. An AI is watching how you work, identifying patterns in your behavior, and creating automations based on that analysis. In a different context, we'd call this surveillance. But because it's framed as a productivity enhancement and buried in a software update, it bypasses the alarm bells that would ring if your employer installed cameras to study your physical movements.

The Carnegie Mellon study on robotic guide systems for blind users offers an instructive counterpoint. It found that users don't want pure autonomy—they want to shift fluidly between autonomous assistance and manual control depending on context. That finding should inform how we design AI agents in banking, workplace software, and other cognitive domains. Instead, we're racing toward "proactive" modes in which AI acts unprompted, making decisions before users even realize decisions need to be made.

The leaked reports suggesting Anthropic is developing such a proactive mode for Claude Code are telling. The direction of travel is clear: less user involvement, more automated decision-making, and fewer moments where humans are even aware that agency is being exercised on their behalf.

We need the same transparency standards for cognitive automation that we're developing for physical robots. If an AI is managing your bank account, you should know exactly what decisions it's making and have meaningful control over its authority. If workplace software is analyzing your behavior patterns, you should be able to see what it's learning and opt out without professional penalty. The fact that these systems operate in digital space rather than physical space doesn't make them any less consequential—it just makes them easier to overlook.