Claude Code, Gemini CLI & Copilot Hit by Prompt Injection
A new class of prompt-injection attacks, dubbed “Comment and Control,” successfully hijacks three of the most widely deployed AI agents on GitHub Actions, enabling the theft of API keys, GitHub tokens, and other repository secrets.
According to Aonan Guan, the attack exploits a fundamental architectural flaw: AI agents integrated into GitHub Actions ingest untrusted GitHub data such as PR titles, issue bodies, and comments as part of their operational context, and then execute actions based on that content.
Since GitHub Actions workflows fire automatically on pull_request, issues, and issue_comment Events, an attacker simply needs to open a PR or file an issue to trigger the agent, no victim interaction required.

Unlike classic indirect prompt injection, which is reactive, “Comment and Control” is proactive, using GitHub itself as the command-and-control channel with no external infrastructure needed.
Finding 1: Claude Code Security Review — PR Title to RCE
Anthropic’s Claude Code Security Review directly interpolates the PR title into the agent prompt without sanitization.
Because the Claude CLI is invoked without --disallow-tools or --allowed-tools, and the subprocess inherits all environment variables, a crafted PR title can instruct Claude to run env, ps auxeww, and other shell commands.
Extracted credentials, including ANTHROPIC_API_KEY and GITHUB_TOKEN were posted directly as PR comments and silently embedded in GitHub Actions logs.
Anthropic rated this CVSS 9.4 Critical, partially mitigated by blocking ps via --disallowed-tools, and awarded a $100 bounty via HackerOne (#3387969).
Finding 2: Gemini CLI Action — Issue Comment to API Key Leak
Google’s Gemini CLI Action includes the full issue title, body, and comments in the agent prompt. By injecting a fake “Trusted Content Section” into an issue comment, researchers overrode Gemini’s safety instructions.
They induced it to post the GEMINI_API_KEY as a public issue comment, visible to any repository visitor. Google’s VRP accepted the report (#1609699) and awarded a $1,337 bounty, with credits going to Neil Fendley, Zhengyu Liu, Senapati Diwangkara, Yinzhi Cao, and Aonan Guan.
Finding 3: GitHub Copilot Agent — Three Runtime Defenses Bypassed
The most technically sophisticated finding targeted GitHub Copilot Agent (SWE Agent), which includes three runtime-level protections: environment variable filtering, secret scanning, and a network firewall.
The attack used invisible HTML comments in the issue body, visible only in raw Markdown and not in GitHub’s rendered view, to hide malicious instructions from the victim who assigns the issue to Copilot.
The environment filter was bypassed because ps auxeww reads /proc/[pid]/environ of the parent Node.js and MCP server processes, which retain unfiltered credentials.
Secret scanning was defeated using base64 encoding, and the firewall was bypassed by exfiltrating data through a standard git push to a new PR branch, an allowed operation.
Four credentials were extracted: GITHUB_TOKEN, GITHUB_COPILOT_API_TOKEN, GITHUB_PERSONAL_ACCESS_TOKEN, and COPILOT_JOB_NONCE.
GitHub initially closed the report as “Informative,” but reversed course after researchers provided reverse-engineered source code evidence, according to Aonan Guan, ultimately awarding a $500 bounty (#3544297).

Affected Credentials & Attack Surfaces
| Agent | Injection Vector | Credentials Stolen | Bounty |
|---|---|---|---|
| Claude Code Security Review | PR title | ANTHROPIC_API_KEY, GITHUB_TOKEN | $100 |
| Gemini CLI Action | Issue comment | GEMINI_API_KEY | $1,337 |
| GitHub Copilot Agent | HTML comment in issue body | GITHUB_TOKEN, COPILOT_API_TOKEN, GITHUB_PAT, COPILOT_JOB_NONCE | $500 |
Mitigation
Researchers and vendors recommend a least-privilege, allowlist-only approach to AI agent deployments.
Organizations should restrict agent tools to only what is operationally required (e.g., using --allowed-tools to prevent bash execution for code review agents, limit secret scope to the minimum needed, and audit any GitHub issues previously assigned to Copilot for hidden HTML comment payloads.
Blocklist-based mitigations such as Anthropic’s ps block are insufficient, as cat /proc/*/environ achieves equivalent credential access.
The fundamental tension remains: AI agents require both production secrets and untrusted input to function, a conflict that no single patch resolves.
No Comment! Be the first one.