Security researchers have disclosed multiple vulnerabilities in the widely used artificial intelligence frameworks LangChain and LangGraph, raising concerns about the exposure of sensitive enterprise data. The flaws, identified by Cyera researcher Vladimir Tokarev, could allow attackers to access filesystem data, environment secrets, and conversation histories if successfully exploited. These frameworks are commonly used to build applications powered by large language models, with LangGraph extending LangChain's capabilities to support more advanced, non-linear workflows. Their widespread adoption is evident from recent statistics: LangChain, LangChain Core, and LangGraph collectively record tens of millions of downloads in a single week on the Python Package Index.
According to the findings, the vulnerabilities create multiple entry points that attackers could use to extract sensitive information from enterprise deployments. One of the identified issues, tracked as CVE-2026-34070, is a path traversal flaw in LangChain that enables unauthorized access to arbitrary files through its prompt loading mechanism: by crafting a malicious prompt template, an attacker could bypass validation controls and retrieve data from the underlying filesystem. Another vulnerability, CVE-2025-68664, is an insecure deserialization flaw that allows malicious inputs to be interpreted as serialized objects. It can leak API keys and other environment secrets, posing a serious risk to application security. A third issue, CVE-2025-67644, affects LangGraph and introduces a SQL injection vulnerability in its SQLite checkpoint implementation, enabling attackers to manipulate database queries and potentially access stored information.
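To make the path traversal class concrete, the sketch below shows the kind of check a file-loading routine needs before serving a user-supplied template name. This is a generic illustration, not LangChain's actual code or the patched logic; the function and directory names are invented for the example.

```python
import os

def load_prompt(base_dir: str, template_name: str) -> str:
    """Illustrative loader: resolve the requested path and confirm it
    stays inside the allowed prompt directory before reading it."""
    base = os.path.realpath(base_dir)
    target = os.path.realpath(os.path.join(base, template_name))
    # Reject anything that escapes base_dir, e.g. "../../etc/passwd".
    # Without a check like this, a crafted template name walks the
    # whole filesystem — the essence of a path traversal flaw.
    if os.path.commonpath([base, target]) != base:
        raise ValueError(f"path traversal attempt blocked: {template_name!r}")
    with open(target, "r", encoding="utf-8") as f:
        return f.read()
```

Resolving with `realpath` before comparing is what defeats `..` sequences and symlink tricks; validating the raw string alone is the mistake that typically makes such checks bypassable.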
Exploitation of these flaws could have significant consequences for organizations relying on these frameworks. Attackers may be able to access sensitive files such as Docker configurations, extract confidential data through prompt injection techniques, and retrieve conversation histories tied to critical workflows. Notably, the insecure deserialization vulnerability had previously been disclosed under the name LangGrinch, highlighting ongoing concerns about the handling of untrusted data in AI-driven environments. The researchers emphasize that these issues demonstrate how traditional security weaknesses continue to surface in modern AI infrastructure, exposing systems to risks that extend beyond typical application boundaries.
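The SQL injection class behind the checkpoint issue has a well-known remedy: bind user-controlled values as parameters rather than splicing them into the query string. The snippet below is a minimal sketch of that pattern against a hypothetical checkpoint table; it is not LangGraph's schema or code.

```python
import sqlite3

def get_checkpoint(conn: sqlite3.Connection, thread_id: str):
    # Parameterized query: thread_id is bound as data, never
    # interpolated into the SQL text, so a value like
    # "x' OR '1'='1" cannot change the query's structure.
    cur = conn.execute(
        "SELECT checkpoint FROM checkpoints WHERE thread_id = ?",
        (thread_id,),
    )
    return cur.fetchone()
```

The vulnerable variant would build the statement with string formatting (`f"... WHERE thread_id = '{thread_id}'"`), which is exactly what lets an attacker rewrite the query and read other threads' stored conversation state.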
Patches have been released to address the vulnerabilities, with fixes available in updated versions of LangChain Core and LangGraph-related components. Users are advised to upgrade to secure versions: langchain-core version 1.2.22 for the path traversal issue, langchain-core versions 0.3.81 and 1.2.5 for the deserialization flaw, and langgraph-checkpoint-sqlite version 3.0.1 for the SQL injection vulnerability. The disclosure follows closely behind another critical issue affecting Langflow, which reportedly came under active exploitation within hours of being made public. Security experts note that such rapid exploitation underscores the need for immediate patching and proactive defense measures. They also highlight that LangChain operates within a broad ecosystem of interconnected libraries and dependencies, meaning that vulnerabilities in its core components can propagate across multiple applications and integrations, amplifying the overall impact on the AI software landscape.
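For pip-based deployments, the version constraints above translate roughly to the commands below; adapt them to your own dependency manager and lock files, and pick the langchain-core line (0.x or 1.x) matching the major version you ship.

```shell
# Pin to patched releases named in the advisory (adjust to your setup).
pip install --upgrade "langchain-core>=1.2.22"
pip install --upgrade "langgraph-checkpoint-sqlite>=3.0.1"
```
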
Follow the SPIN IDG WhatsApp Channel for updates across the Smart Pakistan Insights Network covering all of Pakistan’s technology ecosystem.




