Anthropic has disclosed that part of the source code for its AI-powered coding assistant Claude Code was accidentally released due to human error. A file intended for internal use was included in a software update and referenced an archive containing nearly 2,000 files and roughly 500,000 lines of code. The material was quickly shared on the developer platform GitHub, drawing significant attention from developers and industry observers.
A spokesperson for Anthropic confirmed that no sensitive customer information or credentials were involved in the incident. The company emphasized that this was a release-packaging mistake, not a security breach. The exposed material primarily concerns the internal architecture of Claude Code and does not include confidential details about Claude, the underlying AI model.
The accidental release drew considerable attention online: a post on X linking to the code had surpassed 29 million views by early Wednesday. Although this release was unintended, Claude Code had previously been reverse-engineered by independent developers, and an earlier version of the assistant's source code was exposed in February 2025. The latest incident highlights the challenges of managing complex AI software and keeping internal code properly secured during updates and releases.
Industry experts note that such an exposure, even one involving no sensitive data, can carry implications for competitive development and intellectual property management. The release gives developers insight into the structure and design of Claude Code, but Anthropic has reiterated that it remains focused on maintaining the integrity and security of its AI models. The company is reviewing its internal processes to prevent similar incidents in future updates.
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.