Anthropic has introduced Claude Code Security, the company’s first product aimed at using AI models to help security teams keep up with the flood of software bugs they’re responsible for fixing. For large companies, unpatched software bugs are a leading cause of data breaches, outages, and regulatory headaches—while security teams are often overwhelmed by how much code they have to protect.
Now, instead of just scanning code for known problem patterns, Claude Code Security can review entire codebases more like a human expert would, looking at how different pieces of software interact and how data moves through a system. The AI double-checks its own findings, rates how severe each issue is, and suggests fixes. But while the system can investigate code on its own, it does not apply fixes automatically, since unsupervised changes could be dangerous in their own right; developers must review and approve every change.
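Anthropic has not published the tool's interface, but the human-in-the-loop design described above can be sketched in a few lines. The sketch below is purely illustrative: the `Finding` fields and the approval prompt are assumptions, not the product's actual API.

```python
# Purely illustrative: Anthropic has not published Claude Code Security's API.
# This sketch shows the human-in-the-loop pattern described above: the AI
# proposes severity-rated findings and fixes, and nothing is applied until a
# developer explicitly approves it. All names here are hypothetical.
from dataclasses import dataclass

SEVERITY_RANK = {"critical": 0, "high": 1, "medium": 2, "low": 3}

@dataclass
class Finding:
    file: str            # where the suspected flaw lives
    severity: str        # "critical" | "high" | "medium" | "low"
    description: str     # what the flaw would let an attacker do
    suggested_fix: str   # a patch proposed by the model

def review_findings(findings: list[Finding]) -> list[Finding]:
    """Triage AI-generated findings by severity; apply nothing automatically."""
    approved = []
    for f in sorted(findings, key=lambda f: SEVERITY_RANK[f.severity]):
        print(f"[{f.severity.upper()}] {f.file}: {f.description}")
        print(f"Proposed fix:\n{f.suggested_fix}")
        if input("Apply this fix? [y/N] ").strip().lower() == "y":
            approved.append(f)  # only explicitly approved fixes move forward
    return approved
```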
Claude Code Security builds on more than a year of research by Anthropic's Frontier Red Team, an internal group of about 15 researchers tasked with stress-testing the company's most advanced AI systems and probing how they might be misused in areas such as cybersecurity.
The Frontier Red Team's most recent research found that Anthropic's new Opus 4.6 model is significantly better at finding new, high-severity vulnerabilities (software flaws that let attackers break into systems without permission, steal sensitive data, or disrupt critical services) across vast amounts of code. In testing open-source software used across enterprise systems and critical infrastructure, Opus 4.6 surfaced vulnerabilities that had gone undetected for decades, and it did so without task-specific tooling, custom scaffolding, or specialized prompting.
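For a sense of what such a flaw looks like, consider a textbook example, not drawn from Anthropic's results: a shell-injection bug, the kind of mistake that can hide in plain sight until someone traces how untrusted input flows into a command.

```python
# Purely illustrative, not one of Opus 4.6's actual findings: a classic
# command-injection bug of the kind that can sit unnoticed in open-source
# code for years, alongside the standard fix.
import subprocess

def ping_host_unsafe(host: str) -> bytes:
    # BUG: `host` is interpolated into a shell command, so input like
    # "example.com; rm -rf ~" runs an attacker-controlled command.
    return subprocess.check_output(f"ping -c 1 {host}", shell=True)

def ping_host_safe(host: str) -> bytes:
    # Fix: pass arguments as a list so no shell ever parses the input.
    return subprocess.check_output(["ping", "-c", "1", host])
```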
Frontier Red Team leader Logan Graham told Fortune that Claude Code Security is meant to put this power in the hands of security teams that need to boost their defensive capabilities. The tool is being released cautiously, as a limited research preview for Anthropic's Enterprise and Team customers. Anthropic is also giving free expedited access to maintainers of open-source repositories, the often under-resourced developers responsible for keeping widely used public software running safely.
“This is the next step as a company committed to powering the defense of cybersecurity,” he said. “We are now using [Opus 4.6] meaningfully ourselves, we have been doing lots of experimentation—the models are meaningfully better.” That is particularly true in terms of autonomy, he added, pointing out that Opus 4.6’s agentic capabilities mean it can investigate security flaws and use various tools to test code. In practice, that means the AI can explore a codebase step by step, test how different components behave, and follow leads much like a junior security researcher would—only much faster.
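Anthropic has not detailed the agent's internals, but the pattern Graham describes, pick a lead, use a tool, assess the result, follow up, can be sketched generically. Everything below is an assumption-laden skeleton, not Anthropic's code: the callables stand in for whatever tools the model actually wields.

```python
# A minimal sketch of the stepwise, tool-using investigation loop described
# above. The callables are stand-ins (assumptions, not Anthropic's tooling):
# `act` picks and runs a tool for a lead, and `assess` judges the result and
# proposes follow-up leads, much as a junior researcher chasing a hunch would.
from typing import Callable

def investigate(
    initial_leads: list[str],
    act: Callable[[str], str],                    # e.g. read a file, run a test
    assess: Callable[[str, str], tuple[bool, list[str]]],
    max_steps: int = 50,
) -> list[str]:
    leads, findings = list(initial_leads), []
    for _ in range(max_steps):
        if not leads:
            break
        lead = leads.pop(0)
        observation = act(lead)                   # use a tool, observe the result
        is_vuln, new_leads = assess(lead, observation)
        if is_vuln:
            findings.append(lead)                 # queued for rating and review
        leads.extend(new_leads)                   # follow the trail
    return findings
```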
“That makes a really big difference for security engineers and researchers,” Graham said. “It’s going to be a force multiplier for security teams. It’s going to allow them to do more.”
Of course, it's not just defenders who look for security flaws. Attackers are also using AI to find exploitable weaknesses faster than ever, Graham said, which makes it important to ensure the improvements favor the good guys. So in addition to limiting the release to a research preview, he said, Anthropic is investing in safeguards to detect when attackers might be misusing the system.
“It’s really important to make sure that what is a dual-use capability gives defenders a leg up,” he said.