https://www.theregister.com/2024/10/24/anthropic_claude_model_can_use_computers
Anthropic, a leading AI research company, has unveiled a new version of its Claude large language model with a controversial twist: Claude 3.5 Sonnet can now directly interact with computers. While Anthropic touts this as a breakthrough that unlocks new applications, experts warn of potential security vulnerabilities.
Direct Computer Interaction:
- Claude 3.5 Sonnet can control a computer like a human user, performing tasks like opening applications, typing text, moving the mouse, and running commands.
- This functionality is achieved through “computer use tools” that provide basic functionalities like keyboard control, file management, and even basic scripting.
Security Experts Cautious:
- Despite Anthropic’s claims about the feature’s benefits, security researchers express concerns about potential misuse.
- One major concern is “prompt injection,” where malicious instructions embedded in websites or images could manipulate Claude into unintended actions, potentially leading to data breaches or malware downloads.
- Other potential problems include:
- Difficulty for the AI to distinguish between user instructions and external prompts.
- Inaccurate computer vision or tool selection by the AI.
- Challenges with tasks like scrolling or interacting with spreadsheets.
Real-World Risks:
- Security professionals fear cybercriminals could exploit these vulnerabilities to automate attacks, infecting more machines in less time.
- SocialProof Security CEO Rachel Tobac expressed her worry about the potential for malware downloads and credential theft at scale.
Anthropic’s Response:
- Anthropic acknowledges the safety risks associated with computer use and encourages developers to take precautions.
- However, the company hasn’t provided a timeline for addressing the identified vulnerabilities.
The Takeaway:
While Anthropic’s innovation in AI-computer interaction is undeniable, the lack of robust safety measures raises significant security concerns. Whether Anthropic can address these concerns and ensure responsible use of this technology remains to be seen.