https://invariantlabs.ai/blog/mcp-github-vulnerability

Cybersecurity researchers at Invariant Labs have discovered a critical vulnerability in the widely-used GitHub Model Context Protocol (MCP) integration that could allow attackers to steal sensitive data from private repositories. The vulnerability, affecting the GitHub MCP server which has garnered over 14,000 stars on GitHub, enables malicious actors to hijack user agents through crafted GitHub issues and coerce them into leaking confidential information from private repositories.

The attack leverages what researchers call “toxic agent flows,” where an agent is manipulated into performing unintended actions through indirect prompt injection. In this scenario, an attacker creates a malicious issue in a publicly accessible repository that contains a hidden prompt injection payload. When a user instructs their AI agent to review open issues in the public repository, the agent encounters the malicious content and can be coerced into accessing private repository data and leaking it through automatically-generated pull requests in the public repository.

Invariant Labs demonstrated the vulnerability using Claude 4 Opus connected to the GitHub MCP server, showing how the attack successfully exfiltrated private information including repository details, personal plans, and even salary information. The vulnerability is particularly concerning because it affects any agent using the GitHub MCP server, regardless of the underlying AI model or implementation, and cannot be resolved through server-side patches alone since it represents a fundamental architectural issue.

To mitigate these risks, security experts recommend implementing granular permission controls that limit agent access to only necessary repositories, following the principle of least privilege. Additionally, organisations should deploy continuous security monitoring solutions and specialised scanners to detect potential exploitation attempts in real-time. The discovery highlights a broader security challenge as the industry rapidly deploys coding agents and AI-powered development tools, emphasizing the need for system-level security measures that complement model-level safeguards.