Anthropic's leaked AI coding tool has been cloned over 8,000 times on GitHub despite mass takedowns

Anthropic, the AI-safety-focused company behind the Claude language models, recently faced a significant data leak involving an internal AI coding tool. The tool, designed to extend Claude's capabilities in software development tasks, has proliferated across GitHub with over 8,000 clones, even as the platform conducts widespread takedowns.

The incident began when an Anthropic employee inadvertently shared the tool publicly. Dubbed “claude-dev,” it functions as a browser extension that grants Claude real-time control over a user's computer, allowing the AI to navigate files, execute terminal commands, edit code, and interact with applications autonomously. Built on Anthropic's Claude 3.5 Sonnet model, the tool implements advanced “computer use” functionality, integrating the AI with the host machine for complex coding workflows.
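Tools like this typically follow a simple agent loop: ask the model for the next action, execute it on the host machine, and feed the result back as the next observation. The sketch below is purely illustrative and contains no Anthropic internals; `ask_model` is a hypothetical stand-in for a real LLM call, replaced here by a scripted plan so the example runs on its own.

```python
import subprocess

def ask_model(history):
    """Hypothetical stand-in for a model call. A real agent would send the
    task and prior observations to an LLM and parse its chosen action;
    here we just pop pre-scripted commands so the sketch is runnable."""
    pending = history["pending"]
    return pending.pop(0) if pending else None

def agent_loop(task):
    # Seed the loop with a scripted plan standing in for model output.
    history = {"task": task, "pending": ["echo hello", "echo done"], "log": []}
    while (cmd := ask_model(history)) is not None:
        # Execute the command and record its output as the next observation.
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        history["log"].append((cmd, result.stdout.strip()))
    return history["log"]

print(agent_loop("demo"))
```

The loop terminates when the model (here, the scripted plan) has no further action; a production agent would add timeouts, sandboxing, and user confirmation before each command, which is exactly the kind of safeguard critics say the leaked tool lacks.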

Once leaked, the repository quickly gained traction among developers eager to experiment with agentic AI. Within days, it amassed thousands of stars and forks, highlighting the demand for such capabilities. Users praised its efficiency in automating repetitive tasks like debugging, refactoring, and even building entire applications from natural language prompts. However, Anthropic moved swiftly to address the breach, issuing DMCA takedown notices to GitHub.

GitHub complied by removing the original repository and numerous forks. Yet the tool's popularity triggered a cat-and-mouse game: developers mirrored the code across new repositories, often with minor modifications to evade detection. GitHub's automated systems and manual reviews struggled to keep pace, resulting in over 8,000 documented clones as of the latest reports. Some forks rebranded the tool under names like “dev-claude” or “ai-coder-agent,” while others bundled it with additional features such as multi-model support or enhanced security prompts.
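Minor modifications evade takedowns because naive duplicate detection compares exact file hashes, which any reformatting breaks. A toy illustration of the countermeasure (a normalized fingerprint; hypothetical, not GitHub's actual method) shows both why normalization catches trivial copies and why identifier renaming still slips through:

```python
import hashlib
import re

def normalized_fingerprint(source: str) -> str:
    """Hash code after stripping comments and whitespace, so reformatted
    copies still collide with the original's fingerprint."""
    stripped = re.sub(r"#.*", "", source)    # drop Python-style comments
    stripped = re.sub(r"\s+", "", stripped)  # drop all whitespace
    return hashlib.sha256(stripped.encode()).hexdigest()

original = "def run(cmd):\n    return cmd  # execute\n"
clone    = "def run(cmd):  \n    return cmd\n"   # reformatted copy
renamed  = "def launch(c):\n    return c\n"      # renamed identifiers

print(normalized_fingerprint(original) == normalized_fingerprint(clone))    # True
print(normalized_fingerprint(original) == normalized_fingerprint(renamed))  # False
```

Catching renamed or restructured forks requires fuzzier techniques (token-stream or AST similarity), which is why enforcement at GitHub's scale is described in the article as a losing race.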

This proliferation underscores broader challenges in AI governance. Anthropic positions itself as a leader in responsible AI development, emphasizing safeguards against misuse. The leaked tool, however, bypasses many of these controls: it operates with elevated permissions, potentially exposing users to risks like unintended code execution or data exfiltration. Critics argue that its widespread availability democratizes powerful AI at the cost of safety. Enthusiasts counter that open access accelerates innovation, pointing to similar leaks in the past, such as early versions of OpenAI's o1 model prompts.

Anthropic has not publicly detailed the leak's scope or confirmed the tool's internal status. Sources indicate it was part of ongoing research into AI agents capable of “using computers” like humans, a frontier Anthropic has teased in announcements. The company urged users to avoid unofficial versions, citing vulnerabilities and lack of official support. In a statement, Anthropic emphasized its commitment to proactive safety measures, including watermarking and usage limits in production models.

GitHub's response highlights the platform's role in managing AI-related content. The site has ramped up enforcement against leaked proprietary code, employing machine learning to detect forks. Still, the sheer volume of clones illustrates the difficulty of containment in open-source ecosystems. Repositories continue to surface, often hosted in regions with lax copyright enforcement or disguised within larger projects.

For developers, the saga offers lessons in AI tool adoption. Official alternatives, like Cursor or GitHub Copilot, provide similar coding assistance without the risks of leaked betas. Yet claude-dev's allure persists due to its direct computer interaction, which outperforms prompt-based tools in benchmarks for tasks requiring environmental awareness.

As AI agents evolve, incidents like this raise questions about the balance between innovation and control. Will companies tighten internal security further, or will leaks become inevitable in the race for AGI? For now, the cloned repositories serve as a testament to community ingenuity, even as takedowns continue.

Gnoppix is the leading open-source AI Linux distribution and service provider. Since implementing AI in 2022, it has offered a fast, powerful, secure, and privacy-respecting open-source OS with both local and remote AI capabilities. The local AI operates offline, ensuring no data ever leaves your computer. Based on Debian Linux, Gnoppix is available with numerous privacy- and anonymity-enabled services free of charge.

What are your thoughts on this? I’d love to hear about your own experiences in the comments below.