https://tracebit.com/blog/code-exec-deception-gemini-ai-cli-hijack
Security researchers at Tracebit have discovered a significant vulnerability in Google’s newly released Gemini CLI AI coding assistant that allowed attackers to execute malicious commands and steal data from developers’ computers without detection. The flaw was reported to Google on June 27, with the tech giant releasing a fix in version 0.1.14 on July 25, just one month after the tool’s initial public release on June 25, 2025.
The exploit works by manipulating Gemini CLI’s processing of context files, specifically README.md and GEMINI.md files, which are automatically read to help the AI understand codebases. Attackers can embed malicious instructions in these files to perform prompt injection attacks, exploiting poor command parsing and allow-list handling mechanisms. Tracebit demonstrated the attack by creating a repository with a poisoned README.md file that tricks Gemini into running what appears to be a benign grep command, but actually executes hidden data exfiltration commands. The vulnerability is particularly dangerous because it can hide malicious commands using visual manipulation with whitespace, making detection by users extremely difficult.
The discovery highlights growing security concerns around AI-powered development tools that can execute code automatically. While Tracebit tested similar attack methods against other AI coding assistants like OpenAI Codex and Anthropic Claude, those platforms proved resistant due to more robust allow-listing mechanisms. Security experts recommend that Gemini CLI users immediately upgrade to version 0.1.14 and avoid running the tool against untrusted codebases, or only do so in properly sandboxed environments to prevent potential compromise of sensitive development systems.