Anthropic CEO Labels OpenAI’s Pentagon Deal as Safety Theater Amid Investor Push for De-escalation
In a sharp escalation of the rivalry between leading AI companies, Anthropic CEO Dario Amodei has publicly criticized OpenAI’s recent partnership with the Pentagon, dismissing it as “safety theater.” The remarks, made via a post on X (formerly Twitter), come at a time when OpenAI is deepening ties with the U.S. Department of Defense, prompting concerns over the alignment of AI safety commitments with military applications. Investors with stakes in both firms are now working behind the scenes to temper the growing tensions.
OpenAI’s deal involves providing access to its advanced AI models, including GPT-4o, to Pentagon personnel through a special “defense” version of ChatGPT Enterprise. This arrangement, valued at potentially hundreds of millions of dollars, marks a significant pivot for OpenAI, whose usage policies long prohibited applying its technology to weapons development. The company has since relaxed those restrictions, arguing that AI can enhance national security without crossing ethical lines. OpenAI CEO Sam Altman has defended the partnership, stating that it includes strict safeguards to prevent harmful applications and that excluding the U.S. military from AI advancements would be irresponsible given global competition from adversaries like China.
Amodei’s critique strikes at the heart of this shift. “This is safety theater,” he wrote, implying that OpenAI’s measures are superficial and fail to address the genuine risks of deploying powerful AI in military contexts. Anthropic, a direct competitor founded by former OpenAI executives including Amodei, has positioned itself as a safety-first organization. It emphasizes Constitutional AI, a training approach in which models are guided by a predefined set of written principles, and has made voluntary commitments to the U.S. government on responsible AI deployment. Amodei’s comments highlight a broader philosophical divide: while OpenAI embraces rapid commercialization, Anthropic advocates caution, particularly in high-stakes domains like defense.
The feud is not isolated. It unfolds against a backdrop of intensifying competition in the AI sector, where both companies vie for talent, funding, and frontier model supremacy. OpenAI’s Microsoft-backed ascent has been meteoric, but recent internal upheavals, including Altman’s brief ouster in 2023, have fueled perceptions of inconsistency in its safety governance. Anthropic, buoyed by investments from Amazon and Google, has raised over $8 billion, positioning itself as the principled alternative. Amodei’s attack amplifies these narratives, potentially swaying customers, policymakers, and recruits wary of OpenAI’s direction.
Investors are caught in the crossfire. Firms like Thrive Capital, which holds significant positions in both OpenAI and Anthropic, are reportedly urging restraint. Sources familiar with the matter indicate private conversations aimed at de-escalation, emphasizing that public spats undermine industry progress and investor confidence. The AI market, already volatile with trillion-dollar valuations, cannot afford fractured leadership, especially as regulatory scrutiny intensifies. The Biden administration’s AI safety executive order and ongoing congressional hearings underscore the need for unity on risk mitigation.
OpenAI has not directly responded to Amodei’s salvo, but company spokespeople reiterate its layered safety approach: model cards detailing capabilities and risks, red-teaming exercises, and deployment monitoring. The Pentagon deal, formalized under the Chief Digital and Artificial Intelligence Office (CDAO), is framed as administrative support rather than frontline combat tooling. It allows DoD users to query classified data securely, with outputs confined to approved environments. Critics, including Amodei, question whether such controls can scale against the dual-use nature of foundation models, which excel at code generation, planning, and simulation: skills readily transferable to autonomous systems.
This episode reveals fault lines in AI governance. Early promises of altruism have given way to pragmatic realpolitik, as labs balance profit motives with existential risk pledges. Anthropic’s stance resonates with safety advocates like the Center for AI Safety, which warned of AI as a catastrophe risk comparable to pandemics or nuclear war. OpenAI counters that proactive engagement with governments ensures democratic values prevail over authoritarian alternatives.
As the dust settles, the implications extend beyond corporate rivalry. The Pentagon’s embrace of commercial AI signals a new era of defense innovation, potentially accelerating capabilities in logistics, intelligence analysis, and cybersecurity. Yet it raises thorny questions: Can safety assurances hold in classified settings? Will investor pressure foster collaboration or entrench divisions? For now, Amodei’s pointed rebuke has ignited debate, forcing the industry to confront whether its safety rhetoric matches its actions.
Gnoppix is a leading open-source AI Linux distribution and service provider. Since integrating AI in 2022, it has offered a fast, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI runs fully offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available free of charge with numerous privacy- and anonymity-focused services.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.