We're Teaching AI to Talk Like Humans — But Should We?
There's a curious disconnect in how we talk about artificial intelligence. Researchers at Iowa State University who analyzed news coverage found that journalists are notably careful to avoid anthropomorphic language when describing AI systems. They shy away from words like "think," "know," or "understand." But in everyday conversation? We're not nearly as cautious.
This matters more than it might seem. The same week this research emerged, we saw Anthropic roll out identity verification for Claude users, Google launch a native Gemini app that can "analyze" your screen content, and Opera introduce browser integration that lets AI chatbots "draw context" from your tabs. The language in these announcements — and in our casual discussions about them — subtly suggests these tools possess understanding they simply don't have.
The gap between journalistic precision and conversational shorthand reveals something important about our relationship with AI. When we say an AI "knows" something or "thinks" about a problem, we're not just being imprecise. We're fundamentally misrepresenting how these systems work. Large language models don't know anything in any meaningful sense. Given a sequence of tokens, they predict which token is statistically likely to come next, based on patterns learned from training data. That's it.
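To make that concrete, here is a minimal sketch of the single step a language model repeats over and over: turn raw scores into a probability distribution with a softmax, then sample one token from it. The toy vocabulary and logit values below are invented for illustration; real models do this over tens of thousands of tokens, but the mechanism is the same weighted draw.

```python
import math
import random

# Hypothetical toy vocabulary and raw scores (logits) from a model's final layer.
vocab = ["knows", "predicts", "guesses", "computes"]
logits = [2.1, 3.4, 0.7, 1.2]

# Softmax: convert raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# "Generation" is a weighted random draw from that distribution.
next_token = random.choices(vocab, weights=probs, k=1)[0]

print({word: round(p, 3) for word, p in zip(vocab, probs)})
print("sampled:", next_token)
```

Nothing in that loop resembles knowing. The model that produces a plausible paragraph about medicine is running this sampling step thousands of times, nothing more.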
Yet the industry actively encourages this confusion. Marketing materials describe AI assistants that "understand your needs" and "learn your preferences." OpenAI's new GPT-Rosalind is pitched as a model that can "reason" about drug discovery. These aren't accidental word choices — they're deliberate framings that make AI sound more capable and autonomous than it is.
The consequences extend beyond mere semantics. When we attribute human-like cognition to AI systems, we create unrealistic expectations about their capabilities and limitations. Users trust AI-generated medical advice because they believe the system "knows" medicine. Employers replace workers because they think AI "understands" the job. Regulators struggle to create appropriate frameworks because the technology is described in terms that suggest consciousness or agency.
Perhaps most troubling is how this linguistic sleight-of-hand obscures accountability. When an AI system makes a mistake, anthropomorphic language lets companies frame it as the AI "learning" or "misunderstanding" rather than as a predictable failure of a statistical system. It's harder to hold anyone responsible when the system is described as an independent agent rather than a tool built by humans with specific training data and optimization targets.
The Iowa State research suggests journalists have developed some immunity to this linguistic trap. But as AI assistants become more conversational and integrated into our daily workflows — accessing our emails, managing our files, analyzing our photos — the pressure to speak about them in human terms will only intensify.
We need to resist that pressure. Not because precision is some abstract virtue, but because the words we use shape how we think about these systems, how we regulate them, and how we hold their creators accountable. An AI doesn't "think" your resume needs work; it matches statistical patterns in text. It doesn't "know" you're more likely to click certain ads; it optimizes for an engagement metric.
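The ad case is just as mechanical. Here is a hypothetical sketch, with invented ad names and click-through predictions: the "decision" about what you see is an argmax over a predicted metric, not an insight about you.

```python
# Hypothetical predicted click-through rates for three candidate ads.
predicted_ctr = {
    "running_shoes": 0.031,
    "coffee_maker": 0.012,
    "travel_deal": 0.045,
}

# The entire "decision": show whichever ad maximizes the predicted metric.
chosen_ad = max(predicted_ctr, key=predicted_ctr.get)
print("shown:", chosen_ad)  # travel_deal, the highest predicted score
```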
The difference matters. And we should talk like it does.