https://opensourcemalware.com/blog/clawdbot-skills-ganked-your-crypto

Security researchers have uncovered a large-scale malicious campaign targeting users of OpenClaw, an open-source personal AI assistant that has undergone multiple name changes in recent weeks from ClawdBot to Moltbot and now OpenClaw. Over 230 malicious packages disguised as legitimate utility tools were published to the platform’s official registry, ClawHub, and GitHub between January 27th and February 1st, according to reports from OpenSourceMalware and Koi Security. The malicious skills, which function as plug-ins that extend the AI assistant’s capabilities, masquerade as cryptocurrency trading automation, financial utilities, and social media services while secretly delivering information-stealing malware payloads to unsuspecting users.

The attack methodology resembles ClickFix-style social engineering, with each malicious skill containing detailed documentation that instructs users to install a separate tool called AuthTool, which is falsely presented as a critical requirement for functionality. In reality, AuthTool serves as a malware delivery mechanism that deploys NovaStealer variants on macOS systems and executes password-protected archives on Windows machines. The stealer targets a wide range of sensitive information including cryptocurrency wallet keys and seed phrases, browser passwords, SSH credentials, API keys for exchanges, macOS Keychain data, cloud credentials, and environment configuration files. Security researchers have previously warned that hundreds of misconfigured OpenClaw admin interfaces are exposed on the public web, further amplifying the security risks associated with the platform.

OpenClaw has acknowledged on social media that the platform cannot adequately review the massive influx of skill submissions, placing the responsibility for security verification on users themselves. It is recommended to implement multiple protective layers when using OpenClaw, including running the AI assistant within isolated virtual machines, restricting system permissions, and securing remote access through measures such as port restrictions and traffic blocking to mitigate the inherent risks posed by the assistant’s deep system access capabilities.