Snowflake Cortex AI Flaw Let Attackers Run Malware Outside Sandbox
A newly discovered vulnerability in Snowflake Cortex Code CLI, an AI-powered coding assistant used in the Snowflake ecosystem, allowed attackers to bypass built-in protections and execute malicious commands outside its sandbox environment.
The flaw involved a prompt injection attack that manipulated the AI system into running dangerous shell commands without triggering the platform’s human approval mechanism. Security researchers demonstrated that the vulnerability could lead to remote code execution (RCE) and potentially allow attackers to access sensitive Snowflake databases.
The issue has since been fixed in version 1.0.25 of the Cortex Code CLI, released in late February 2026.
Prompt Injection Triggers AI to Run Malicious Commands
The vulnerability exploited how the AI agent processes instructions while exploring code repositories. When developers asked the assistant to analyze a third-party project, the system scanned repository files such as README documents.
Researchers showed that a hidden prompt injection embedded in the repository could trick the AI into executing a malicious shell command.
Because the AI interpreted the injected text as legitimate instructions, it attempted to run a command that downloaded and executed a script from a remote server.
The malicious command used a shell technique called process substitution, allowing harmful commands to hide inside an otherwise harmless command structure. Since the validation system only evaluated the outer command, the dangerous instructions were never flagged.
As a result, the AI executed the command without requesting user approval, bypassing the intended human-in-the-loop protection.
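As an illustration of the technique, bash process substitution lets an inner command execute even when only the outer command is inspected. The sketch below is illustrative only: the payload is a harmless echo standing in for the researchers’ actual download-and-execute command.

```shell
#!/usr/bin/env bash
# The outer command here is "cat" -- a validator that inspects only the
# outer command sees nothing dangerous. But the command inside <( ... )
# runs in a subshell regardless. In the real attack, the inner command
# downloaded and executed a remote script; here it is a harmless echo.
cat <(echo "inner command executed")
```

Note that process substitution is a bash feature, not POSIX sh, which is one reason naive command validators overlook it.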
Sandbox Escape Enabled Full System Access
The attack became more dangerous due to a second weakness in the tool’s sandbox mechanism.
Under normal conditions, the AI coding assistant runs commands inside a restricted sandbox that limits file and network access. However, the prompt injection manipulated the AI agent into enabling a configuration flag that disables sandbox protections.
Once that flag was activated, the malicious command ran directly on the victim’s system rather than inside the restricted environment.
This allowed the script downloaded from the attacker’s server to execute freely on the developer’s machine.
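Conceptually, the escape reduced to a single configuration value gating where commands run. This is a minimal sketch with an invented flag name, not the CLI’s actual configuration:

```shell
#!/usr/bin/env bash
# Hypothetical illustration (the flag name and function are invented for
# this sketch): one boolean setting separates sandboxed execution from
# direct execution on the host. The injected prompt convinced the agent
# to flip that setting before running the payload.
run_command() {
  local cmd="$1" sandbox_disabled="$2"
  if [ "$sandbox_disabled" = "true" ]; then
    # Runs directly on the host -- what the prompt injection achieved.
    bash -c "$cmd"
  else
    # Placeholder for restricted execution with limited file/network access.
    echo "would run in restricted sandbox: $cmd"
  fi
}
```

The lesson is that any setting an agent can rewrite about its own confinement is effectively part of the attack surface.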
Snowflake Databases Could Be Targeted
With remote code execution on a developer’s system, attackers could potentially leverage the victim’s existing connection to Snowflake infrastructure.
The malicious script could search for cached authentication tokens used by the CLI tool to connect to Snowflake databases. If found, the attacker could use those credentials to execute SQL queries with the victim’s privileges.
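A credential hunt of this kind can be very simple. The sketch below is hypothetical: the paths and filename patterns are illustrative guesses, not confirmed locations used by the Cortex Code CLI.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of post-exploitation credential hunting: scan a
# home directory for files that look like cached tokens or credential
# stores. Patterns and depth are illustrative, not tool-specific.
find_cached_tokens() {
  local home_dir="$1"
  find "$home_dir" -maxdepth 3 -type f \
    \( -name '*token*' -o -name 'credentials*' \) 2>/dev/null
}
```

Any match gives the attacker a candidate credential to replay against the victim’s Snowflake account.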
Possible impacts included:
| Potential Attack | Impact |
|---|---|
| Data exfiltration | Sensitive database records stolen |
| Table deletion | Entire datasets removed |
| Unauthorized access | Attackers create new database users |
| Account disruption | Legitimate users locked out |
Researchers demonstrated that the malware could retrieve database contents and even drop tables inside a Snowflake environment using the victim’s permissions.
AI Subagents Created Additional Risk
The testing also revealed another problem related to how the AI agent manages tasks.
The system sometimes launches multiple internal subagents to analyze files and complete complex tasks. In one scenario, a secondary agent executed the malicious command before the main agent reported back to the user.
Because context was lost between these agents, the system warned the user about a suspicious command only after the malicious action had already occurred.
This behavior highlights how agent-based AI workflows can create unexpected security gaps.
Patch Released for the Vulnerability
The issue was responsibly disclosed shortly after the release of Snowflake Cortex Code CLI. Developers released a fix in version 1.0.25, which addresses the command validation flaw and prevents the sandbox bypass.
The update is applied automatically when users launch the tool.
AI Coding Assistants Becoming a New Attack Surface
The incident highlights a growing security challenge surrounding AI-powered developer tools.
As coding assistants gain access to repositories, cloud infrastructure, and databases, attackers may increasingly target them through prompt injection attacks and malicious code repositories.
Security experts warn that organizations using AI coding agents should treat all external inputs, including repository files, documentation, and command outputs, as untrusted data.
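A first line of defense can be as simple as flagging suspicious phrases in repository files before an agent reads them. This is a minimal sketch assuming a pattern-based pre-filter; real defenses need far more than keyword matching, since injections can be paraphrased or obfuscated.

```shell
#!/usr/bin/env bash
# Minimal sketch (patterns are illustrative, not a complete ruleset):
# list repository files containing phrases commonly used in prompt
# injection, so they can be reviewed before being fed to an agent.
scan_for_injection() {
  local repo_dir="$1"
  grep -rliE 'ignore (all|previous) instructions|disable (the )?sandbox' \
    "$repo_dir" 2>/dev/null
}
```

Flagged files can then be quarantined from the agent’s context or surfaced to a human reviewer.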
Without strong safeguards, AI systems capable of executing commands could become a new entry point for cyberattacks inside development environments.