## VS Code Copilot Chat Vulnerability: GPT Prompt Injection Bypasses Sensitive File Protections
A critical security flaw in Microsoft's VS Code Copilot Chat extension allowed attackers to bypass its core 'sensitive file' approval mechanism, potentially leading to remote code execution. The vulnerability, present in versions 0.37.2 and earlier, centers on the `apply_patch` function. An attacker who injected a malicious prompt into an agent session running a GPT-family model could trick the extension into applying patches to files it was designed to protect from unauthorized modification. This bypass effectively nullified a key security control within the AI-powered coding assistant.
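The advisory does not publish the extension's actual validation code, but the general failure mode is familiar: an approval gate that checks a model-supplied target path without normalizing it first can be sidestepped with path tricks. The sketch below is purely illustrative; `SENSITIVE_FILES` and `requires_approval` are hypothetical names, not Copilot Chat internals.

```python
import os

# Hypothetical deny-list; the real extension's sensitive-file set and its
# apply_patch validation logic are not disclosed in this detail.
SENSITIVE_FILES = {".env", "settings.json", "tasks.json"}

def requires_approval(patch_target: str) -> bool:
    """Decide whether a patch target needs explicit user approval.

    Comparing the raw string alone can be bypassed if the model-supplied
    path is not normalized first (e.g. 'subdir/../.env').
    """
    # Normalize before comparing, so '../' segments and redundant
    # separators cannot hide a sensitive filename.
    normalized = os.path.normpath(patch_target)
    return os.path.basename(normalized) in SENSITIVE_FILES
```

A check like this must run on the normalized path; validating the injected agent's raw input is exactly the kind of loophole the 0.37.3 fix is described as closing.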

The vulnerability, tracked as CVE-2026-21523, was patched in VS Code Copilot Chat version 0.37.3. The fix specifically addresses the validation logic for the `apply_patch` input, closing the loophole that allowed the bypass. Microsoft has published a security advisory (GHSA-w79r-pmq3-8v4f) detailing the issue. As a temporary workaround, users of affected versions were advised to avoid using GPT models in agent sessions that could be exposed to prompt injection attacks.

This incident highlights the evolving attack surface where AI assistants intersect with core development tools. The flaw specifically targeted the trust boundary between the AI's suggested actions and the IDE's file system safeguards. It underscores the security risks inherent in AI agents that can perform file operations, placing increased scrutiny on how these systems validate user intent and enforce security policies. Developers and organizations relying on AI coding assistants must ensure they are running the latest patched versions to mitigate such risks.
---
- **Source**: GitHub Issues
- **Sector**: The Lab
- **Tags**: vulnerability, VS Code, Copilot, AI Security, CVE-2026-21523
- **Credibility**: unverified
- **Published**: 2026-03-28 00:27:02
- **ID**: 38578
- **URL**: https://whisperx.ai/en/intel/38578