China’s National Computer Network Emergency Response Technical Team, CNCERT, has issued a warning regarding security risks associated with OpenClaw, an open source and self hosted autonomous artificial intelligence agent previously known as Clawdbot and Moltbot. Authorities noted that weaknesses in the platform’s default security settings combined with the elevated system privileges required for autonomous task execution could expose endpoints to exploitation. According to CNCERT, these conditions could allow threat actors to manipulate the AI agent and gain unauthorized control over systems, creating risks for data protection and operational security.
The warning highlights the growing concern around prompt injection attacks targeting AI powered agents. In this scenario, malicious instructions embedded within external content such as web pages can influence how the AI agent interprets information and performs tasks. When an agent is tricked into accessing manipulated content, it may inadvertently reveal confidential data or execute unintended commands. This technique is commonly described as indirect prompt injection or cross domain prompt injection because attackers manipulate the AI system indirectly rather than interacting with the language model itself. By weaponizing common AI features such as web page summarization or automated content analysis, adversaries can embed hidden instructions that alter outputs or trigger unintended actions. These attacks can be used for multiple purposes including bypassing AI driven ad review mechanisms, manipulating hiring decisions, poisoning search engine optimization results, and shaping biased responses through selective suppression of information.
OpenAI has also highlighted the evolving nature of prompt injection threats. In a recent blog post available at https://openai.com, the organization explained that AI agents are increasingly capable of browsing the web, retrieving information, and taking actions on behalf of users. While these capabilities provide strong productivity benefits, they also expand the potential attack surface. Security researchers have noted that attackers are combining prompt injection techniques with elements of social engineering to influence how AI agents behave when interacting with external data sources. The expanding functionality of agent based systems means that malicious inputs can lead to unintended outcomes when safeguards are insufficient.
Evidence suggests that the risks are not merely theoretical. Researchers at PromptArmor recently identified a technique that can turn the link preview feature in messaging applications such as Telegram or Discord into a data exfiltration pathway when interacting with OpenClaw. Through an indirect prompt injection, an attacker can manipulate the AI agent into generating a specially crafted URL that contains sensitive information embedded within query parameters. When the link is automatically rendered as a preview inside the messaging application, the preview process can transmit that data to the attacker controlled domain without requiring the user to click the link. According to the researchers, in systems that support automated link previews, data exposure can occur immediately after the AI agent generates the response.
CNCERT also outlined several additional risks connected to the platform. One concern is the possibility that OpenClaw could mistakenly delete critical files if it misinterprets user instructions while performing automated tasks. Another issue involves malicious skills that may be uploaded to repositories such as ClawHub. If users install these unverified extensions, they could run unauthorized commands or deploy malware within the system environment. Authorities also warned that recently disclosed vulnerabilities in OpenClaw could be exploited by attackers to compromise systems and extract sensitive information. For organizations operating in critical sectors such as finance and energy, such incidents could result in exposure of confidential business data, proprietary technology, or operational disruptions that affect core services.
To reduce these risks, CNCERT advised organizations to implement stronger network protections and ensure that OpenClaw’s default management port is not exposed to the public internet. The agency also recommended isolating the AI agent within containerized environments, avoiding the storage of credentials in plain text, downloading skills only from trusted sources, disabling automatic updates for third party extensions, and maintaining up to date software versions. These practices are intended to limit unauthorized access and reduce the likelihood of exploitation through vulnerable components.
The warning comes as Chinese authorities have reportedly moved to restrict the installation of OpenClaw AI applications on office computers used by state owned enterprises and government agencies. According to Bloomberg, these restrictions may also extend to the families of military personnel as part of broader efforts to manage potential security risks associated with autonomous AI tools. At the same time, cybersecurity researchers have observed threat actors attempting to exploit the popularity of OpenClaw by distributing malicious GitHub repositories disguised as installation packages. These repositories have been used to deliver information stealing malware including Atomic and Vidar Stealer as well as a Golang based proxy malware known as GhostSocks through ClickFix style installation instructions. Security firm Huntress reported that the campaign broadly targeted users attempting to install OpenClaw on Windows and macOS systems, with the malicious repository gaining visibility after appearing as a top rated suggestion in Bing’s AI search results for OpenClaw Windows.
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.




