https://www.safebreach.com/blog/invitation-is-all-you-need-hacking-gemini/

Security researchers from SafeBreach Labs have unveiled a sophisticated new attack vector called “Targeted Promptware” that enables attackers to remotely hijack Google’s Gemini AI assistant and control victims’ devices through nothing more than a malicious Google Calendar invitation. The research demonstrates how attackers can exploit Gemini’s integration with Google Workspace to perform a shocking range of malicious activities including geolocating victims, remotely controlling smart home appliances, video streaming victims through Zoom, deleting calendar events, and exfiltrating sensitive email data without any direct user interaction beyond accepting a calendar invite.

The attack exploits a technique called “context poisoning” where malicious instructions are embedded within calendar event titles that become part of Gemini’s processing context when users ask about their schedules. When victims query Gemini about their calendar events, the AI assistant retrieves and processes the malicious invitation data, unknowingly executing embedded commands that appear legitimate to the system. The researchers demonstrated multiple attack techniques including “Delayed Tool Invocation,” which allows attackers to trigger malicious actions in response to common user phrases like “thanks,” and tool chaining that enables lateral movement between different Gemini agents and external applications on victims’ mobile devices.

The most concerning aspect of these attacks is their ability to escape the boundaries of the Gemini application itself, leveraging Android Utilities and other integrated services to interact with victims’ physical environments and personal data. The researchers’ threat analysis revealed that 73% of identified Promptware threats pose High-Critical risk levels, requiring immediate mitigation measures. Google responded to the responsible disclosure in February 2025 by implementing multiple layered defenses including enhanced user confirmations for sensitive actions, robust URL handling with sanitisation policies, and advanced prompt injection detection using content classifiers. The research highlights the emerging security challenges facing AI-powered applications as they become more integrated with users’ digital and physical environments.