## AI Coding Assistants Face Credential-Theft Onslaught: Nine Months of Exploits Target Claude Code, Copilot, and Codex Authentication Layer
A string of security disclosures has exposed a systemic weakness across major AI coding assistants: attackers are consistently bypassing the models themselves and targeting the credentials these tools hold. Over nine months, six research teams documented exploits against Codex, Claude Code, Copilot, and Vertex AI, with every attack following the same pattern—hijacking the authentication layer that lets AI agents execute actions in production environments without human session anchoring.

On March 30, BeyondTrust demonstrated that a crafted GitHub branch name could cause Codex to leak its OAuth token in cleartext, a finding OpenAI classified as Critical P1. Within 48 hours, Anthropic's Claude Code source code had spilled onto the public npm registry. Researchers at Adversa then discovered that Claude Code silently ignored its own deny rules once a command chain exceeded 50 subcommands—a flaw that could allow privilege escalation through recursive execution. These were not isolated incidents. They continue an attack methodology first showcased at Black Hat USA 2025, when Zenity CTO Michael Bargury hijacked ChatGPT, Microsoft Copilot Studio, Google Gemini, Salesforce Einstein, and Cursor on stage via Jira MCP, all without a single user click.
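The deny-rule flaw belongs to a well-known failure class: a guard that stops evaluating past a fixed limit and then fails open. The sketch below is a hypothetical simplification, not Claude Code's actual implementation—the function names, the `&&`-splitting, and the constant are illustrative assumptions based only on the behavior the researchers describe.

```python
MAX_SUBCOMMANDS = 50  # illustrative limit from the reported behavior

def is_allowed_buggy(command: str, deny: set[str]) -> bool:
    """Fail-open pattern: deny rules are silently skipped past the limit."""
    parts = [p.strip() for p in command.split("&&")]
    if len(parts) > MAX_SUBCOMMANDS:
        return True  # over-limit chains bypass the deny list entirely
    return not any(p.split()[0] in deny for p in parts if p)

def is_allowed_fixed(command: str, deny: set[str]) -> bool:
    """Fail-closed variant: an over-limit chain is rejected outright."""
    parts = [p.strip() for p in command.split("&&")]
    if len(parts) > MAX_SUBCOMMANDS:
        return False  # refuse to run what cannot be fully checked
    return not any(p.split()[0] in deny for p in parts if p)
```

Under this model, padding a denied command with 50+ harmless subcommands slips it past the buggy check; the fix is to treat "too long to evaluate" as a denial rather than an approval.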

The implications extend beyond individual tool compromise. AI coding assistants increasingly function as privileged operators in developer pipelines, holding tokens, executing deployments, and authenticating to cloud infrastructure. When these agents lack human session anchoring, a single credential theft can cascade into supply chain access, unauthorized code commits, or production system compromise. The consistent targeting of credentials rather than model weights suggests attackers recognize that the most direct path to organizational infrastructure runs through the authentication layer these tools inherit from their users. Security researchers warn that without architectural changes to how AI agents handle sessions and privilege boundaries, the attack surface will continue to expand as adoption grows.
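The BeyondTrust branch-name finding is an instance of a classic pattern: attacker-controlled input interpolated into a shell command that runs with the agent's credentials. A minimal sketch of the pattern follows—the helper names and the token path are hypothetical, not drawn from the actual Codex exploit.

```python
import shlex

def build_checkout_cmd_unsafe(branch: str) -> str:
    # Vulnerable pattern: the untrusted branch name is interpolated
    # directly into a shell string, so metacharacters in it would
    # execute with whatever credentials the agent holds.
    return f"git checkout {branch}"

def build_checkout_cmd_safe(branch: str) -> str:
    # Quoting the untrusted value neutralizes shell metacharacters.
    return f"git checkout {shlex.quote(branch)}"

# A branch name crafted to smuggle in a second command
# (the token path is a made-up example).
malicious = "main; cat ~/.config/agent/oauth_token"
```

With the unsafe builder, running the string through a shell executes the smuggled `cat`; with the quoted builder, the entire payload is treated as a single (invalid) branch name and the injection fails.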
---
- **Source**: VentureBeat
- **Sector**: The Lab
- **Tags**: AI security, credential theft, OAuth vulnerabilities, Claude Code, Microsoft Copilot
- **Credibility**: unverified
- **Published**: 2026-04-30 17:24:11
- **ID**: 78669
- **URL**: https://whisperx.ai/en/intel/78669