Zscaler ThreatLabz researchers have uncovered a campaign in which threat actors weaponised the OpenClaw open-source AI agent framework to distribute both the Remcos remote access trojan and GhostLoader, a cross-platform information stealer. The attackers published a deceptive “DeepSeek-Claw” skill within the OpenClaw ecosystem, embedding malicious installation instructions designed to trick autonomous AI agents or unsuspecting developers into executing hidden payloads under the guise of a legitimate DeepSeek integration. The campaign represents a significant escalation in the abuse of agentic AI workflows as an attack vector, exploiting the elevated system privileges and autonomous execution capabilities that AI agents are routinely granted within developer environments.
On Windows systems, the attack chain is initiated when a malicious PowerShell command, embedded in the skill’s SKILL.md instruction file — downloads and executes a remote Windows Installer package that deploys Remcos RAT. The package cleverly abuses a legitimate, digitally signed GoToMeeting executable from LogMeIn to sideload a malicious DLL through DLL search order hijacking, allowing execution to blend in with trusted processes. The in-memory shellcode loader then employs an extensive array of evasion techniques, including patching Event Tracing for Windows and the Antimalware Scan Interface to blind endpoint detection tools, anti-debugging checks via the Process Environment Block, sandbox detection through sleep-call timing analysis, and active scanning for analysis tools such as IDA Pro, Wireshark, and x64dbg before decrypting and executing the final Remcos RAT payload using the Tiny Encryption Algorithm in CBC mode.
For macOS and Linux environments, an alternate execution path delivers GhostLoader via a heavily obfuscated Node.js payload designed to harvest sensitive data from developer environments, demonstrating the campaign’s deliberate cross-platform ambition. Once active, Remcos RAT establishes an encrypted command-and-control channel over TCP, enabling persistent remote access and data exfiltration from compromised systems.
Zscaler’s findings are a reminder that AI agent frameworks are rapidly becoming high-value targets for threat actors, and organisations adopting agentic AI workflows must implement rigorous vetting of third-party skills and plugins, enforce strict permission boundaries for AI agents, and deploy behavioural monitoring capable of detecting autonomous execution of malicious instructions.