The United States is entering a new phase of strategic competition—one where artificial intelligence is no longer an emerging capability, but a decisive element of military power. In this unfolding AI arms race, speed matters. Capability matters. But above all, control matters. That’s why the recent standoff between Anthropic and the Pentagon should concern anyone focused on America’s national security.
At the center of the dispute is a simple but profound disagreement: who gets to decide how advanced AI systems are used in a military context. Anthropic, the developer of Claude and its super-powered model Mythos, sought to impose limits on how its technology could be deployed, drawing red lines around certain applications. The Pentagon, for its part, insisted that it must retain the ability to use AI tools for all lawful purposes in defense of the nation. When those positions proved irreconcilable, the relationship collapsed.
Anthropic was ultimately designated a supply chain risk, and the Department of War was forced to look elsewhere for AI capabilities. Since then, details have emerged about Mythos, the model dubbed "too dangerous" for public release, and they raise new, alarming concerns. Mythos is reportedly capable of autonomously identifying and weaponizing undiscovered cybersecurity vulnerabilities; without appropriate guardrails, that capability would mean open season for cybercriminals. The tool is potentially so powerful that Anthropic itself has limited access to it.
This episode should serve as a wake-up call because it demonstrates how the current structure of America’s AI ecosystem—a black box, driven by closed systems that lack transparency—is fundamentally misaligned with the requirements of national defense.
Today, the Pentagon purchases access to AI capabilities, but it does not control them. The training, testing, and ongoing development of these models remain firmly in the hands of private companies that have their own governance frameworks, risk tolerances, and commercial incentives. That reality creates a dangerous dynamic: it gives a small number of unaccountable private firms effective veto power over how the United States can employ one of the most consequential technologies of our time. That is not a sustainable model for a constitutional republic. Nor is it a viable foundation for military dominance.
A system constrained by external approval processes, shifting corporate policies, or the risk of sudden disruption is a system that cannot move at the pace modern warfare demands. And in a strategic competition defined by iteration cycles measured in weeks—not years—those constraints do more than slow the United States down. They create openings.
China and its aligned partners, for example, are moving aggressively to deploy AI capabilities at scale, leveraging open-source models that can be adapted for a wide range of military and intelligence applications. Systems like DeepSeek are not constrained by the same corporate governance structures that shape American firms.
They are designed to be modified, extended, and integrated across a broad ecosystem that includes not only China’s military, but also a growing network of partner nations at odds with America.
That creates an asymmetric threat. While the United States debates the permissible uses of AI through contracts with private vendors, its competitors are building flexible, state-aligned systems that can be rapidly customized for operational needs. If that gap persists, America risks finding itself at a significant military disadvantage.
The solution is not to abandon the private sector, which remains a source of extraordinary innovation and technical leadership. Nor is it to discard ethical considerations, which must remain central to how the United States approaches the use of force. But it does mean recognizing that the current model—where the government rents access to closed, proprietary systems it cannot fully control—is inadequate for the demands of strategic competition.
Washington must begin investing in a different approach: the development of high-performing, secure, and adaptable open-source AI models that the United States government and its closest allies can control, audit, and deploy without external constraint. None of this eliminates the need for careful guardrails. There are important and legitimate debates to be had about the role of AI in warfare, from autonomy and targeting to surveillance and escalation. But those debates should be led by elected officials and military leaders accountable to the American people, not dictated by the acceptable-use policies of private companies.
This strategic realignment could take several forms. It may involve government-led model development, partnerships with trusted research institutions, or the creation of open-weight models designed specifically for defense applications. It could include allied frameworks that ensure interoperability while preserving national control, as well as new procurement strategies that prioritize transparency and modifiability over convenience.
Regardless of the path chosen, however, success will depend on getting the mechanism right.
The United States has long understood that it cannot outsource the foundations of its security. We build our own ships. We design our own weapons. We maintain command of the systems that underpin our military advantage. Artificial intelligence should be no different.
Building effective public-private partnerships that serve the national defense will require more than technical capability—it will require trust, integrity, and sound process. That means establishing clear guardrails, aligning incentives, and ensuring that both government and industry share responsibility for the risks and outcomes of deploying these systems. Done right, such a framework can harness private-sector innovation while preserving the government’s authority over how these capabilities are ultimately used.
The Anthropic episode risks being not an anomaly, but a preview. Unless we act now to ensure that America and its allies have access to AI systems they can truly control, it may prove to be a warning we failed to heed.
The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.