Center for Humane Technology Criticizes Trump’s AI Executive Order for Undermining Accountability
In a pointed critique, the Center for Humane Technology (CHT) has condemned President Donald Trump’s recent executive order on artificial intelligence, arguing that it creates a dangerous “accountability vacuum” in the regulation of AI technologies. Issued on January 23, 2025, the order revokes key provisions of former President Joe Biden’s 2023 Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. CHT, a nonprofit dedicated to promoting humane technology design, warns that the move prioritizes unchecked innovation over public safety, potentially exacerbating the risks posed by powerful AI systems.
Trump’s executive order, titled “Removing Barriers to American Leadership in Artificial Intelligence,” explicitly rescinds Biden’s comprehensive framework, which mandated rigorous safety testing, cybersecurity measures, and the creation of an AI Safety Institute within the National Institute of Standards and Technology (NIST). Biden’s order required leading AI developers—those producing models with exceptional capabilities—to report serious incidents, adhere to red-teaming protocols for identifying vulnerabilities, and implement safeguards against biological threats and cyberattacks. It also directed federal agencies to develop standards for AI transparency, bias mitigation, and privacy protection.
By contrast, Trump’s directive frames prior regulations as “burdensome” obstacles to American competitiveness. It instructs federal agencies to review and rescind policies that could impede AI development, emphasizing deregulation to maintain U.S. dominance against global rivals like China. The order calls for the elimination of any requirements hindering “responsible AI innovation,” while promoting private-sector leadership without enforceable obligations. Agencies are directed to prioritize national security applications of AI and repeal measures seen as anti-competitive.
CHT’s response, articulated by Executive Director Randima Fernando, highlights the order’s failure to replace the revoked safeguards with meaningful alternatives. “This executive order creates an accountability vacuum at precisely the moment when AI systems are scaling to unprecedented power,” Fernando stated. The organization argues that without mandatory safety reporting or testing, companies face no incentive to mitigate harms such as misinformation, autonomous weapons proliferation, or algorithmic discrimination. CHT points out that Biden’s order, while imperfect, established foundational guardrails, including the U.S. AI Safety Institute’s advisory board—a body comprising experts from industry, academia, and civil society tasked with independent oversight.
The critique underscores broader implications for AI governance. Biden’s order fostered collaboration between government and developers, culminating in voluntary commitments from companies such as OpenAI, Google, and Anthropic to prioritize safety. Trump’s approach, CHT contends, signals a retreat from such partnerships, potentially encouraging a race-to-the-bottom dynamic in which profit-driven deployment outpaces risk assessment. Fernando emphasized that true leadership requires balancing innovation with responsibility, noting that the EU and the UK have advanced binding AI regulations, leaving the U.S. at risk of falling behind on trustworthy AI standards.
CHT’s statement also notes the timing of Trump’s order, which comes amid escalating concerns over AI’s societal impacts. Recent incidents, such as AI-generated deepfakes influencing elections and chatbots producing harmful content, underscore the urgency of accountability. Without federal mandates, reliance on self-regulation becomes untenable, especially as AI models grow more capable. The organization calls on Congress to enact legislation to fill the void, advocating requirements such as impact assessments, whistleblower protections, and public audits of high-risk systems.
Defenders of the executive order, including tech industry advocates, applaud its focus on deregulation. They argue that Biden’s rules stifled innovation by imposing unnecessary bureaucracy, citing examples where safety reporting deterred investment. The White House has positioned the policy as pro-growth, aligning with Trump’s broader agenda to reduce federal overreach. However, CHT counters that innovation thrives under clear rules, drawing parallels to aviation and pharmaceuticals, where safety standards have spurred reliable progress.
As the dust settles on this policy shift, the debate intensifies over AI’s trajectory. CHT urges stakeholders—policymakers, developers, and users—to recognize the order’s risks and push for robust governance. The absence of accountability mechanisms, they warn, could amplify AI’s existential threats, from job displacement to catastrophic misuse, without commensurate benefits for society.
What are your thoughts on this? I’d love to hear about your own experiences in the comments below.