OpenAI Labels AI Pioneer Stuart Russell a Doomer in Legal Filing Despite CEO’s Prior Endorsement of His Extinction Risk Warnings
In a striking development within the ongoing copyright infringement lawsuit filed by The New York Times against OpenAI and Microsoft, OpenAI has characterized renowned AI researcher Stuart Russell as a "doomer." The label appears in a recent court filing in which the company challenges Russell’s expert testimony on behalf of the plaintiff. The irony is palpable: OpenAI CEO Sam Altman co-signed a high-profile statement from the Center for AI Safety (CAIS) in May 2023 that echoed Russell’s long-standing warnings about AI posing extinction-level risks to humanity.
The lawsuit, initiated in December 2023, accuses OpenAI of systematically scraping millions of Times articles to train its ChatGPT models without authorization or compensation. The Times alleges that these models now reproduce copyrighted content verbatim in responses to user queries, undermining journalistic integrity and revenue streams. OpenAI counters that such usage falls under fair use principles, essential for transformative AI technologies.
Russell, a professor at the University of California, Berkeley, and co-author of the seminal textbook Artificial Intelligence: A Modern Approach, submitted a declaration supporting the Times’ claims. He argued that OpenAI’s training processes directly incorporate Times content into the models’ latent knowledge structures. According to Russell, this enables ChatGPT to regurgitate exact passages, photographs, and even interactive features from Times articles when given carefully crafted prompts. He likened the infringement to photocopying books en masse for redistribution, asserting that it exceeds the boundaries of fair use.
OpenAI’s rebuttal filing, submitted on October 25, 2024, to the U.S. District Court for the Southern District of New York, dismisses Russell’s analysis as alarmist and methodologically flawed. The company portrays him as an outlier in the AI community, fixated on speculative doomsday scenarios rather than practical engineering realities. OpenAI lawyers contend that Russell’s experiments demonstrating regurgitation are contrived, relying on overly specific prompts unlikely in real-world use. They further claim his testimony ignores industry-standard mitigations like system prompts that instruct models to avoid direct copying.
A key point of contention is Russell’s expertise. OpenAI argues that he lacks direct experience training large language models (LLMs) at scale, positioning him as a theorist disconnected from deployment challenges. The filing highlights that Russell advocates for enforceable international treaties to regulate AI development, a stance OpenAI depicts as extreme and infeasible. Notably, it quotes Russell’s past statements, such as his 2015 call to abandon goal-directed AI systems outright and his comparisons of advanced AI to a hypothetical alien invasion.
This portrayal clashes sharply with Altman’s own positions. In the CAIS statement, over 350 AI leaders, including Altman, declared mitigating AI extinction risks a global priority on par with pandemics or nuclear war. Altman reinforced this by tweeting his agreement and later testifying before Congress on AI safety needs. Russell has been a vocal proponent of these risks for decades, arguing in his book Human Compatible that misaligned superintelligent AI could pursue objectives catastrophically at odds with human survival.
OpenAI’s filing does not address this overlap directly. Instead, it emphasizes a supposed shift in consensus toward viewing extinction risks as low-probability distractions from nearer-term harms like bias or misuse. The company cites surveys where most AI researchers assign modest probabilities to catastrophic outcomes this century.
Legal observers see the "doomer" label as a rhetorical tactic to undermine Russell’s credibility. It fits a broader pattern in AI discourse in which safety advocates are sometimes marginalized as fearmongers, even as companies like OpenAI invest heavily in safety teams. Russell’s declaration, spanning 22 pages with appendices of ChatGPT outputs, meticulously documents regurgitation of text, images, and code from Times properties.
The dispute underscores deeper tensions in AI litigation. Courts must weigh fair use factors: purpose of use (commercial vs. transformative), nature of the work (creative news content), amount copied (substantial portions), and market effect (potential displacement of subscriptions). Russell’s evidence bolsters the Times’ case on the third and fourth factors, while OpenAI stresses innovation benefits.
As the case progresses toward potential summary judgment motions, OpenAI’s attack on Russell highlights the stakes. Dismissing a foundational figure like Russell risks alienating safety-conscious stakeholders, especially given Altman’s past alignment with his views. The contradiction may invite scrutiny during depositions or at trial, where Altman’s signature on the CAIS letter could resurface.
This episode reflects the high-wire act of AI firms balancing rapid commercialization with safety rhetoric. For the Times, Russell’s involvement amplifies arguments that unchecked training practices erode creator rights, potentially reshaping how AI interacts with public knowledge corpora.