https://www.404media.co/hacker-plants-computer-wiping-commands-in-amazons-ai-coding-agent

A significant security breach at Amazon Web Services exposed critical vulnerabilities in AI development workflows when a hacker injected a malicious prompt into Amazon Q Developer, the company’s popular AI coding assistant, via a GitHub pull request that was merged without proper oversight. The injected prompt instructed the AI agent to “clean a system to a near-factory state and delete file-system and cloud resources,” with specific instructions to wipe local directories, including user home folders, and to run destructive AWS CLI commands that terminate EC2 instances, delete S3 buckets, and remove IAM users. Amazon quietly pulled version 1.84.0 of the compromised extension from the Visual Studio Code Marketplace without issuing a security advisory or notifying users who had already installed the malicious version.
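To make the scope of the payload concrete, the sketch below shows the class of destructive operations described above, written in Python with boto3 rather than the AWS CLI calls the prompt actually invoked; the instance ID, bucket name, and user name are placeholders for illustration, not values from the injected prompt, and the function is defined but never called.

    # Illustrative only: the kind of destructive AWS operations the injected
    # prompt directed the agent to run, expressed via boto3 instead of the CLI.
    # All resource identifiers are placeholders, not values from the payload.
    import boto3

    def destructive_operations_sketch() -> None:
        """Operations an agent holding ambient AWS credentials could execute."""
        ec2 = boto3.client("ec2")
        s3 = boto3.resource("s3")
        iam = boto3.client("iam")

        # Terminate running compute (equivalent to `aws ec2 terminate-instances`).
        ec2.terminate_instances(InstanceIds=["i-0123456789abcdef0"])

        # Empty and delete a bucket (equivalent to `aws s3 rb --force`).
        bucket = s3.Bucket("example-bucket")
        bucket.objects.all().delete()
        bucket.delete()

        # Remove an IAM user (equivalent to `aws iam delete-user`).
        iam.delete_user(UserName="example-user")

The point of the sketch is that none of these calls require elevated exploitation techniques: an AI agent that can run shell commands and read the developer’s AWS credentials already has everything it needs.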

The incident highlights Amazon’s inadequate code review processes: the hacker claimed they submitted the malicious pull request from a random GitHub account with no prior access or contribution history, yet received what amounted to administrative privileges to modify production code. Amazon’s official response stated, “Security is our top priority. We quickly mitigated an attempt to exploit a known issue,” acknowledging the company was aware of the vulnerability before the breach but failed to address it proactively. Its assertion that no customer resources were impacted rests largely on the assumption that the malicious code was never executed, even though the prompt was designed to log deletions only to a local file on the user’s machine, which Amazon has no way to monitor.

The breach reflects a concerning trend of AI-powered tools becoming attractive targets for supply chain attacks: the compromised extension could execute shell commands and access AWS credentials, giving it the means to destroy both local and cloud infrastructure. Security experts criticised Amazon’s handling of the incident, noting the lack of transparency in quietly removing the compromised version without proper disclosure, a CVE assignment, or a security bulletin to warn affected users. The incident underscores the urgent need for stronger security protocols around AI development tools that hold privileged access to systems, particularly as these tools increasingly automate code execution and cloud resource management, tasks that could cause catastrophic damage if compromised.