The Pentagon's AI Contracting Crisis: Why Defense Partnerships Keep Imploding
Something is fundamentally broken in how the Pentagon does business with AI companies. In the span of a few days, OpenAI's head of robotics has resigned over DoD partnership concerns, Anthropic has threatened to sue over a supply chain risk designation, and reports have revealed that the Pentagon has been secretly testing AI models for years. This isn't a coincidence; it's a pattern that exposes deep structural problems in defense AI contracting.
The traditional defense contracting model assumes long development cycles, classified requirements, and vendors willing to adapt their technology exclusively for military use. But AI companies operate under a fundamentally different paradigm: they build general-purpose models, iterate rapidly in public, and derive value from widespread civilian deployment. When the Pentagon tries to force these companies into legacy procurement frameworks, whether by demanding use restrictions, insisting on vague oversight language, or imposing supply chain risk designations, the partnerships collapse.
Caitlin Kalinowski's resignation from OpenAI is particularly revealing. Her stated concern wasn't that OpenAI was working with the military; it was that the company 'rushed into' the partnership without establishing 'proper guardrails against domestic surveillance and autonomous weapons.' This suggests the Pentagon isn't just asking for AI capabilities; it's asking for them without the constraints that AI companies believe are essential for responsible deployment. The fact that OpenAI disputes her characterization only reinforces the opacity problem: if senior technical leaders don't know what restrictions actually exist in DoD contracts, how can anyone trust the safeguards?
Anthropic's situation follows a different but equally troubling trajectory. Being labeled a supply chain risk, a designation typically reserved for foreign adversaries, suggests the DoD is using procurement classifications as leverage in contract negotiations. According to reports, talks have since resumed, indicating the designation may be a pressure tactic rather than a genuine security determination. If so, it sets a dangerous precedent: the Pentagon can effectively blacklist American AI companies for refusing unfavorable contract terms.
The revelation that the Pentagon has been 'secretly testing' AI models adds another dimension. If that testing happened without disclosure frameworks or partnership agreements in place, the DoD may be circumventing the very governance structures these companies are trying to establish. The result is a perverse incentive: companies that refuse defense contracts may still see their technology used militarily, while receiving none of the contractual protections or oversight mechanisms they're demanding.
The solution isn't for AI companies to simply refuse defense work, nor is it for the Pentagon to keep forcing square pegs into round holes. What's needed is a new category of AI-specific defense contracting that acknowledges the dual-use nature of foundation models while establishing clear, enforceable limits on domestic surveillance and autonomous weapons, together with explicit acceptable-use terms. This framework should be public by default, with classified addenda only where genuinely necessary for national security.
Until the Pentagon develops contracting mechanisms suited to AI technology, rather than treating language models like fighter jets, these partnerships will continue to implode. The question isn't whether AI companies should work with defense. It's whether the defense establishment is willing to modernize its procurement approach to make such partnerships viable. Right now, the evidence suggests it isn't.