Pentagon’s AI Partnerships Hinge on a Simple Policy Phrase: All Lawful Use
The recent tensions between the United States Department of Defense, OpenAI, and Anthropic reveal a pivotal divide in the AI industry, centered on three critical words in OpenAI’s updated terms of service: “all lawful use.” This phrase has enabled the Pentagon to secure direct access to advanced AI models while leaving Anthropic on the sidelines, highlighting contrasting visions for AI’s role in national security.
In early 2024, OpenAI revised its usage policies, lifting previous restrictions that prohibited military and weapons-development applications. The new language explicitly states that customers may use OpenAI’s services for “all lawful use,” including by defense and intelligence organizations. The shift came after months of internal debate and external pressure, and it allows the company to serve government clients without running afoul of its own terms. Previously, those terms barred uses carrying a high risk of physical harm, including military and warfare applications, a stance rooted in the company’s founding commitment to benefit humanity.
The policy change paved the way for a landmark deal. On June 22, 2024, the Pentagon’s Chief Digital and Artificial Intelligence Office (CDAO) announced it had selected OpenAI to provide ChatGPT Enterprise to 10,000 users across the Defense Innovation Unit. This marks the first known direct contract between the Department of Defense and OpenAI, bypassing intermediaries like Microsoft. The agreement focuses on administrative tasks such as summarizing reports, drafting emails, and analyzing logistics data, all under strict oversight to ensure compliance with lawful purposes. Pentagon officials emphasized that the tools will not be used for classified intelligence or weapons development, aligning with OpenAI’s guardrails.
This development contrasts sharply with Anthropic’s position. Founded by former OpenAI executives, including CEO Dario Amodei, Anthropic maintains strict usage policies that prohibit applications harmful to national security or involved in weapons proliferation. The company’s Claude models are available only to approved customers, and military applications are explicitly off-limits. Anthropic has publicly committed to “Constitutional AI,” embedding safety principles that prioritize long-term human flourishing over short-term commercial gains. As a result, the Pentagon turned elsewhere after Anthropic declined to participate in similar initiatives.
The fallout underscores broader industry rifts. OpenAI’s pivot reflects pragmatic adaptation amid competition from rivals such as Google and Meta, which already serve defense clients. CEO Sam Altman has argued that responsible AI development requires engagement with governments to shape policies proactively. In a May 2024 World Economic Forum interview, Altman noted that excluding the U.S. military could cede ground to adversaries such as China, which face no such self-imposed limits.
Critics, however, decry the move as a betrayal of AI safety ideals. Max Tegmark, founder of the Future of Life Institute, has warned that normalizing military AI use risks accelerating an arms race. Helen Toner left OpenAI’s board in November 2023, partly over concerns about the company’s direction, including what she saw as insufficient safety measures. Anthropic’s steadfast refusal bolsters its reputation among safety advocates but limits its market reach; the company relies heavily on Amazon’s investment and cloud infrastructure, which indirectly supports defense work through AWS contracts.
Legal and technical safeguards underpin OpenAI’s “all lawful use” framework. Enterprise deployments include data isolation (user inputs are not used to train models) and content filters that block sensitive topics. The Pentagon’s contract incorporates additional reviews by the Defense Counterintelligence and Security Agency. Yet ambiguities persist: what counts as “lawful” evolves with statutes and executive orders, such as the Biden administration’s October 2023 executive order on AI, which mandates risk assessments for dual-use foundation models.
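To make the content-filter idea concrete, here is a minimal sketch of a pre-submission screen that checks a prompt against prohibited categories before it ever reaches a model. The categories, keywords, and function names are hypothetical placeholders for illustration, not the Pentagon’s or OpenAI’s actual rules; a real deployment would rely on model-based classifiers and human review rather than keyword matching.

```python
# Hypothetical pre-submission content filter (illustrative only).
# Categories and keywords are invented placeholders, not real policy.
BLOCKED_CATEGORIES = {
    "weapons_development": ["warhead design", "guidance algorithm"],
    "classified_intel": ["top secret", "sci compartment"],
}

def screen_prompt(prompt: str) -> tuple[bool, list[str]]:
    """Return (allowed, matched_categories) for a user prompt.

    Shows only the control-flow shape of a guardrail: scan the input,
    collect any policy categories it triggers, and allow it only if
    none matched.
    """
    lowered = prompt.lower()
    hits = [
        category
        for category, keywords in BLOCKED_CATEGORIES.items()
        if any(keyword in lowered for keyword in keywords)
    ]
    return (not hits, hits)

# An administrative task like those in the Pentagon deal passes:
print(screen_prompt("Summarize this logistics report for Q3."))
# → (True, [])

# A prompt touching a prohibited category is flagged:
print(screen_prompt("Explain warhead design trade-offs."))
# → (False, ['weapons_development'])
```

In practice this gate would sit in front of the model API, with blocked prompts logged for the kind of oversight review the contract describes.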
This episode also spotlights the U.S. government’s aggressive AI procurement strategy. The National Security Commission on Artificial Intelligence recommended in 2021 that the Defense Department invest billions in AI, prompting initiatives like the CDAO’s $100 million Replicator program for autonomous systems. Partnerships with Palantir, Anduril, and Scale AI complement the OpenAI deal, forming a diversified ecosystem.
Anthropic’s exclusion may prove temporary. CEO Amodei has hinted at potential flexibility for defensive cyber tools, but no formal policy changes have been made. Meanwhile, OpenAI faces scrutiny from lawmakers; a bipartisan Senate letter in May 2024 urged transparency about its military engagements. Internationally, the EU’s AI Act regulates high-risk AI systems, though purely military uses fall outside its scope; its requirements may still influence how U.S. firms structure their offerings.
The “all lawful use” clause thus serves as a litmus test for AI governance. It balances innovation with caution, enabling bureaucratic efficiencies while deferring harder ethical questions to regulators. For the Pentagon, it delivers immediate capabilities to counter peer competitors. For Anthropic, it reinforces a principled stand amid mounting pressures. As AI permeates defense, this three-word pivot will shape alliances, investments, and the global race for supremacy.
Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.