• Home
  • Latest
  • Fortune 500
  • Finance
  • Tech
  • Leadership
  • Lifestyle
  • Rankings
  • Multimedia
AISecurity

AI coding tools exploded in 2025. The first security exploits show what could go wrong

Sage Lazzaro
By
Sage Lazzaro
Sage Lazzaro
Contributing writer
Down Arrow Button Icon
December 15, 2025, 10:00 AM ET
While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses.
While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses.Illustration by Simon Landrein

AI coding tools proliferated widely across technical teams in 2025, shifting how developers work and how companies across industries develop and launch products and services. According to Stack Overflow’s 2025 survey of 49,000 developers, 84% said they’re using the tools, with 51% doing so daily. 

AI coding tools have also caught the interest of another group: malicious actors. While a breach of the tools hasn’t so far caused a wide-scale attack, there have been a few exploits and near-misses, and cyberthreat researchers have discovered critical vulnerabilities in several popular tools that make clear what could go horribly wrong. 

Any emerging technology creates a new opening for cyberattacks, and in a way, AI coding tools are just another door. At the same time, the agentic nature of many AI-assisted coding capabilities makes it crucial for developers to check every aspect of the AI’s work, making it easy for small oversights to warp into critical security issues. Security experts also say the nature of how AI coding tools function makes them susceptible to prompt injection and supply-chain attacks, the latter of which are especially damaging as they affect companies downstream that use the tool.

“Supply chain has always been a weak point in security for software developers in particular,” said Randall Degges, head of developer and security relations at cybersecurity firm Snyk. “It‘s always been a problem, but it’s even more prevalent now with AI tools.” 

The first wave of AI coding tool vulnerabilities and exploits

Perhaps the most eye-opening security incident involving AI coding tools that unfolded this year was the breach of Amazon’s popular Q coding assistant. A hacker compromised the official extension for using the tool inside the ubiquitous VS Code development environment, planting a prompt to direct Q to wipe users’ local files and disrupt their AWS cloud infrastructure, potentially even disabling it. This compromised version of the tool passed Amazon’s verification and was publicly available to users for two days. The malicious actor behind the breach reportedly did it to expose Amazon’s “security theater” rather than actually execute an attack, and in that way, they were successful—the demonstration of how a prompt injection attack on an AI coding tool could unfold sent a shock wave of concern throughout the security and developer worlds. 

“Security is our top priority. We mitigated an attempt to exploit a known issue in two open-source repositories to alter code in the Amazon Q Developer extension for VS Code. No customer resources were impacted,” an Amazon spokesperson told Fortune, pointing to the company’s July security bulletin on the incident.   

In the case of AI coding tools, a prompt injection attack refers to a threat actor slipping instructions to an AI coding tool to direct it to behave in an unintended way, such as leaking data or executing malicious code. Aside from Q, critical vulnerabilities leaving the door open to this style of attack were also discovered throughout 2025 in AI coding tools offered by Cursor, GitHub, and Google’s Gemini. Cybersecurity firm CrowdStrike also reported that it observed multiple threat actors exploiting an unauthenticated code injection vulnerability in Langflow AI, a widely used tool for building AI agents and workflows, to gain credentials and deploy malware.

The issue was not so much a security flaw within any of the tools in particular, but rather a vulnerability at the system level of how these agents function—connecting to an essentially unlimited number of data sources through MCP, an open standard for connecting AI models to external tools and data sources. 


“Agentic coding tools work within the privilege level of the developer executing them,” said John Cranney, VP of engineering at Secure Code Warrior, a coding platform designed to help developers work more securely. “The ecosystem around these tools is rapidly evolving. Agentic tool providers are adding features at a rapid pace, while at the same time, there is an explosion of MCP servers designed to add functionality to these tools. However, no model provider has yet solved the problem of prompt injection, which means that every new input that is provided to an agentic coding tool adds a new potential injection vector.” 

In a statement, a Google spokesperson echoed that the state of guardrails in today’s AI landscape depends heavily on the model’s hosting environment. 

“Gemini is designed and tested for safety, and is trained to avoid certain outputs that would create risks of harm. Google continuously improves our AI models to make them less susceptible to misuse. We employ a hybrid agent security approach using adversarial training to resist prompt injection attacks and policy enforcement to review, allow, block, or prompt for clarification on the agent’s planned actions,” the company said.

The prevalence of AI coding tools is also giving a boost to another attack route, often referred to as “typosquatting.” This refers to malicious actors impersonating a legitimate software package or extension to trick an unwitting coder—or now, an AI—into downloading a malicious one instead, usually by slightly tweaking the name and legitimizing it with fake reviews. In one case this year, Zak Cole, a core developer for the cryptocurrency Ethereum, said his crypto wallet was drained after he mistakenly downloaded a malicious extension for the popular AI coding tool Cursor. This could have happened with any malicious software and isn’t necessarily specific to the coding assistant, but AI coding tools can amplify the risk because, increasingly, they’re doing this work on their own and possibly unsupervised. Cursor and DataStax, the owner of Langflow AI, did not respond to a request for comment.

“If you’re using a tool like Cursor to help you write code, it’s also doing a lot of other things like installing third party dependencies, packages, and tools,” said Degges of Snyk. “We’ve noticed that because it’s going to go ahead and do a lot of these things in an agentic fashion, you as the user are typically much more at risk of malicious packages that AI installs.”

The AI coding guardrails every organization needs

As AI coding tools simultaneously introduce new risks and make it possible for developers to create more code faster than ever before, CrowdStrike field CTO Cristian Rodriguez believes the challenge for organizations is if they can secure applications at the same velocity that they’re building them.

He said having the right guardrails in place can help, and he advises companies to mature their SecOps programs and bolster governance around AI coding tools. This includes cracking down on “shadow AI,” making sure no tools are being used internally without being approved and managed as part of the company’s overall security infrastructure. For whatever AI coding tools are approved, the company also needs to continuously manage everything it touches.

“Understand what the services are that are being referenced from the application, the libraries that are being used, the services that surround the application, and to make sure they are configured properly,” he said. “Also, ensure the services have the right identity and access management components to ensure that not anyone can simply access the service that surrounds the app.”

In a statement, a GitHub spokesperson said the company designed its Copilot coding agent to proactively and automatically perform security and quality analysis of the code it creates to ensure vulnerabilities in code and dependencies are detected and remediated.

“We believe that building secure and scalable MCP servers requires attention to authentication, authorization, and deployment architecture, and we follow a strict threat model when developing agentic features, including MCP,” the spokesperson said. “To prevent risks like data exfiltration, impersonation, and prompt injection, we’ve created a set of rules that includes ensuring all context is visible, scanning responses for secrets, preventing irreversible state changes, and only gathering context from authorized users.”

Rodriguez’s colleague at CrowdStrike, Adam Meyers, the firm’s head of intelligence, noted that AI coding tools often run in an unmanaged or “headless” capacity, doing a bunch of things in the background. This makes developers the last line of defense.

“It spits out hundreds of lines of code in minutes,” he said. “And then it comes down to, do they do a security assessment of that code? Do they look at all the libraries the code can pull down, or do they just say, YOLO, and deploy it? And I think that that’s the true risk here.”

Read more about The Year in AI—and What's Ahead in the latest Fortune AIQ special report, reflecting on the AI trends that took over the business world and captivated consumers in 2025. Plus, tips on preparing for new developments in 2026.

About the Author
Sage Lazzaro
By Sage LazzaroContributing writer

Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.

See full bioRight Arrow Button Icon

Latest in AI

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025

Most Popular

Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Finance
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam
By Fortune Editors
October 20, 2025
Rankings
  • 100 Best Companies
  • Fortune 500
  • Global 500
  • Fortune 500 Europe
  • Most Powerful Women
  • Future 50
  • World’s Most Admired Companies
  • See All Rankings
Sections
  • Finance
  • Leadership
  • Success
  • Tech
  • Asia
  • Europe
  • Environment
  • Fortune Crypto
  • Health
  • Retail
  • Lifestyle
  • Politics
  • Newsletters
  • Magazine
  • Features
  • Commentary
  • Mpw
  • CEO Initiative
  • Conferences
  • Personal Finance
  • Education
Customer Support
  • Frequently Asked Questions
  • Customer Service Portal
  • Privacy Policy
  • Terms Of Use
  • Single Issues For Purchase
  • International Print
Commercial Services
  • Advertising
  • Fortune Brand Studio
  • Fortune Analytics
  • Fortune Conferences
  • Business Development
About Us
  • About Us
  • Editorial Calendar
  • Press Center
  • Work At Fortune
  • Diversity And Inclusion
  • Terms And Conditions
  • Site Map

© 2025 Fortune Media IP Limited. All Rights Reserved. Use of this site constitutes acceptance of our Terms of Use and Privacy Policy | CA Notice at Collection and Privacy Notice | Do Not Sell/Share My Personal Information
FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries. FORTUNE may receive compensation for some links to products and services on this website. Offers may be subject to change without notice.


Most Popular

placeholder alt text
Success
'I had to take 60 meetings': Jeff Bezos says 'the hardest thing I've ever done' was raising the first million dollars of seed capital for Amazon
By Dave SmithDecember 15, 2025
24 hours ago
placeholder alt text
Success
Meetings are not work, says Southwest Airlines CEO—and he’s taking action, by blocking his calendar every afternoon from Wednesday to Friday 
By Preston ForeDecember 15, 2025
1 day ago
placeholder alt text
Success
Bad luck, six-figure earners: Elon Musk warns that money will 'disappear' in the future as AI makes work (and salaries) irrelevant
By Orianna Rosa RoyleDecember 15, 2025
1 day ago
placeholder alt text
Personal Finance
Current price of silver as of Monday, December 15, 2025
By Joseph HostetlerDecember 15, 2025
1 day ago
placeholder alt text
AI
Deloitte's CTO on a stunning AI transformation stat: Companies are spending 93% on tech and only 7% on people
By Nick LichtenbergDecember 15, 2025
1 day ago
placeholder alt text
North America
Ford writes down $19.5 billion as it pivots electric Lighting line of vehicles
By Sasha RogelbergDecember 15, 2025
20 hours ago

Latest in AI

Matt Garman speaks on stage in front of a screen showing colorful concentric circles on a black background.
Future of WorkAmazon
AWS CEO says replacing young employees with AI is ‘one of the dumbest ideas’—and bad for business: ‘At some point the whole thing explodes on itself’
By Sasha RogelbergDecember 16, 2025
1 minute ago
Trump speaks in the Oval Office
Big TechDonald Trump
You can make up to $200K working in Trump’s new ‘Tech Force’—and you don’t need a degree or work experience
By Dave SmithDecember 16, 2025
2 hours ago
InnovationTesla
An MIT roboticist who cofounded bankrupt Roomba maker iRobot says Elon Musk’s vision of humanoid robot assistants is ‘pure fantasy thinking’
By Marco Quiroz-GutierrezDecember 16, 2025
3 hours ago
Justina
Future of Workskills
Can’t get a job? Blame AI? Train in ‘power skills,’ IBM exec says: ‘You can’t hire a college student now to just come in and create a spreadsheet’
By Nick LichtenbergDecember 16, 2025
4 hours ago
Detroit, Michigan, Residents picket DTE Energy, opposing the electric utility's plan to provide power for a proposed $7 billion data center in rural Michigan.
EnvironmentData centers
A grassroots NIMBY revolt is turning voters in Republican strongholds against the AI data-center boom
By Eva RoytburgDecember 16, 2025
9 hours ago
slop
CybersecurityCulture
The word of the year is ‘slop,’ Merriam-Webster says
By Anna Furman and The Associated PressDecember 15, 2025
18 hours ago