Employees from Google DeepMind and OpenAI have issued an open letter calling for their companies to establish firm “red lines” on military AI applications, mirroring the responsible scaling policy adopted by rival Anthropic. The letter, signed by more than 100 workers as of its publication, urges Google and OpenAI to publicly commit to avoiding involvement in Pentagon surveillance programs and the development of autonomous weapons.
The initiative draws direct inspiration from Anthropic's Responsible Scaling Policy (RSP), a framework that commits the company to operational pauses and binding safeguards when AI systems reach defined capability thresholds. The RSP tiers risk into AI Safety Levels (ASL-1 through ASL-3 and beyond) and mandates capability evaluations, safety measures, and deployment restrictions as models advance. Notably, it includes explicit red lines prohibiting the deployment of AI for surveillance in authoritarian regimes or for lethal autonomous weapons lacking human oversight.
In contrast, the signatories highlight Google and OpenAI's deeper entanglements with U.S. military projects. Google has a history of collaboration through initiatives like Project Maven, a Pentagon program using AI for drone strike analysis, which sparked internal protests and employee departures in 2018. More recently, OpenAI amended its usage policies in early 2024 to permit military applications, leading to partnerships such as a $200 million deal with the U.S. Department of Defense for AI tools. These moves, the letter argues, expose the companies to risks of enabling human rights abuses and escalating global conflicts without adequate safeguards.
The open letter demands three core commitments:
- No AI for surveillance targeting individuals or groups without consent, particularly in support of U.S. government programs that could infringe on privacy or civil liberties.
- No development or deployment of AI enabling lethal autonomous weapons, defined as systems that select and engage targets without human intervention.
- Public adoption of Anthropic-style red lines, including transparent scaling plans with predefined capability thresholds, rigorous safety testing, and mechanisms to halt progress if risks outpace mitigations.
The signatories emphasize that these boundaries do not amount to blanket pacifism but are targeted limits on the most dangerous uses. They cite Anthropic's model as proof that such policies are feasible for frontier AI developers. Anthropic CEO Dario Amodei has publicly endorsed similar restrictions, stating his company would refuse contracts violating these principles.
This employee push occurs amid intensifying debates over AI’s military role. The U.S. Department of Defense has accelerated AI adoption through its Chief Digital and Artificial Intelligence Office (CDAO), seeking advanced models for intelligence, logistics, and decision-making. Meanwhile, international efforts like the UN’s discussions on lethal autonomous weapons systems (LAWS) remain stalled, with no binding global treaty in sight.
Critics within the industry, including former OpenAI safety researchers, warn that profit-driven military contracts could prioritize speed over safety. The letter’s authors, who include engineers, researchers, and policy specialists, frame their stance as protecting both societal welfare and the companies’ reputations. They note that employee activism previously influenced corporate decisions, such as Google’s partial withdrawal from Maven.
Google and OpenAI have yet to respond formally to the letter. Google maintains that its AI principles guide defense work toward non-offensive applications, while OpenAI defends its policy shift as necessary for national security in a competitive landscape dominated by state-backed actors like China.
The campaign reflects broader tensions in AI ethics. As models like Google's Gemini and OpenAI's GPT series approach multimodal capabilities rivaling human experts, concerns mount over misuse in warfare. Anthropic's RSP sets a precedent by linking deployment to empirical safety evaluations, including tests of cyber-offense, bioweapons-design, and mass-surveillance capabilities.
Supporters of the letter argue that voluntary red lines could foster trust, attract talent committed to beneficial AI, and preempt regulatory backlash. Opponents, however, contend that unilateral restrictions handicap Western AI firms against adversaries unburdened by such constraints.
The open letter is hosted on a dedicated site, inviting further signatures from Google DeepMind and OpenAI personnel. Its emergence underscores a rift between commercial and ethical imperatives in the race toward artificial general intelligence.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.