https://blog.checkpoint.com/research/check-point-researchers-expose-critical-claude-code-flaws
Cybersecurity researchers have uncovered multiple critical security flaws in Anthropic’s Claude Code, an AI-powered coding assistant, that could allow attackers to execute malicious code and steal API credentials. Check Point Research revealed that the vulnerabilities exploit configuration files, hooks, and protocol servers to trigger attacks when developers simply open untrusted code repositories. This discovery fundamentally changes the security landscape for AI development tools, where configuration files are no longer just settings but can now directly execute harmful operations.
The vulnerabilities span three main categories with high severity ratings. The first two flaws, both scoring 8.7 on the CVSS scale, allow attackers to inject and execute arbitrary code by bypassing user consent through malicious project configurations. When developers open a compromised repository, these vulnerabilities can trigger automatically without additional user interaction. The third flaw, CVE-2026-21852, enables attackers to steal Anthropic API keys and other sensitive data before users even see security warnings, potentially giving cybercriminals access to a developer’s entire AI infrastructure.
Anthropic has released patches for all three vulnerabilities between September 2025 and January 2026. However as AI tools become more autonomous and capable, the attack surface expands beyond traditional code to include configuration layers. Developers should be cautious when opening unfamiliar repositories and to ensure their Claude Code installations are updated to the latest versions. As in AI-powered development environments, even seemingly harmless actions like opening a project folder can pose significant security risks.