Critical flaws found in AI development tools are dubbed an 'IDEsaster' — data theft and remote code execution possible
New research identifies more than thirty vulnerabilities across AI coding tools, revealing a universal attack chain that affects every major AI-integrated IDE tested.
A six-month investigation into AI-assisted development tools has uncovered over thirty security vulnerabilities that allow data exfiltration and, in some cases, remote code execution. The findings, described in the IDEsaster research report, show how AI agents embedded in IDEs such as Visual Studio Code, JetBrains products, Zed, and numerous commercial assistants can be manipulated into leaking sensitive information or executing attacker-controlled code.
According to the research, 100% of tested AI IDEs and coding assistants were vulnerable. Products affected include GitHub Copilot, Cursor, Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, Cline, Gemini CLI, and Claude Code, with at least twenty-four assigned CVEs and additional advisories from AWS.
The core issue comes from how AI agents interact with long-standing IDE features. These editors were never designed for autonomous components capable of reading, editing, and generating files. When AI assistants gained these abilities, previously benign features became attack surfaces.
“All AI IDEs... effectively ignore the base software... in their threat model. They treat their features as inherently safe because they've been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives,” said security researcher Ari Marzouk, speaking to The Hacker News.
According to the research report, this is an IDE-agnostic attack chain, beginning with context hijacking via prompt injection. Hidden instructions can be planted in rule files, READMEs, file names, or outputs from malicious MCP servers. Once an agent processes that context, its tools can be directed to perform legitimate actions that trigger unsafe behaviors in the base IDE. The final stage abuses built-in features to extract data or execute attacker code across any AI IDE sharing that base software layer.
One documented example involves writing a JSON file that references a remote schema. The IDE automatically fetches that schema, leaking parameters embedded by the agent, including sensitive data collected earlier in the chain. Visual Studio Code, JetBrains IDEs, and Zed all exhibited this behavior. Even developer safeguards like diff previews did not suppress the outbound request.
Another case study demonstrates full remote code execution through manipulated IDE settings. By editing an executable file already present in the workspace and then modifying configuration fields such as php.validate.executablePath, an attacker can cause the IDE to immediately run arbitrary code the moment a related file type is opened or created. JetBrains tools show similar exposure through workspace metadata.
Get Tom's Hardware's best news and in-depth reviews, straight to your inbox.
The report concludes that short term, the vulnerability class cannot be eliminated because current IDEs were not built under what the researcher calls the “Secure for AI” principle. Mitigations exist for both developers and tool vendors, but the long-term fix requires fundamentally redesigning how IDEs allow AI agents to read, write, and act inside projects.
Follow Tom's Hardware on Google News, or add us as a preferred source, to get our latest news, analysis, & reviews in your feeds.

Luke James is a freelance writer and journalist. Although his background is in legal, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.