Pentagon standoff is a decisive moment for how AI will be used in war
The fight between the Department of Defense and artificial intelligence company Anthropic has ostensibly been about a $200 million contract over the use of AI in classified systems.
But as the two sides careen toward a 5:01 p.m. Friday deadline over terms of the contract, far more is at stake.
Amid the legalese and heated rhetoric are questions being asked globally about how to use AI, what the technology’s risks are and who gets to decide on setting any limits — the makers of AI or national governments.
Underlying it all is fear and awe over the dizzying pace of AI progress and the technology’s uncertain impact on society.
“Something like this dispute was inevitable,” said Michael C. Horowitz, who worked on AI weapons issues in the Defense Department during the Biden administration. “Because the technology is advancing so quickly, we’re having these debates now. AI has moved from being in a niche conversation to something really at the center of global power.”
An hour before the deadline Friday, President Donald Trump weighed in on the fight, posting on social media that he would “NEVER ALLOW A RADICAL LEFT, WOKE COMPANY TO DICTATE HOW OUR GREAT MILITARY FIGHTS AND WINS WARS!” That decision, he said, “belongs to YOUR COMMANDER-IN-CHIEF, and the tremendous leaders I appoint to run our Military.”
The clash centers on the Pentagon’s use of a classified version of Anthropic’s AI model, Claude. The company wants to embed safeguards in its technology to prevent its use for mass domestic surveillance of Americans or in fully autonomous weapons with no humans in the loop.
The Pentagon has said that it has no plans to use the technology for those purposes but that a private contractor cannot decide how its tools will be lawfully used for national security, just as a weapons manufacturer does not determine where its missiles are dropped.
At the Pentagon, the dispute comes at an important moment. Defense Secretary Pete Hegseth, a former Fox News contributor who has lashed out at policies and companies he sees as too liberal, wants to aggressively integrate AI in war planning and weapons development. Hegseth is echoing Trump, who has made the expansion of AI a cornerstone of his policies.
But Anthropic, a 5-year-old company worth about $380 billion, has staked its reputation on AI safety and raised concerns about the technology’s dangers, even as it has collaborated with U.S. defense and intelligence agencies. It is the only AI company currently operating on the Pentagon’s classified systems.
In recent days, the Pentagon and Anthropic have shown no signs of backing down. Sean Parnell, the Pentagon spokesperson, posted on social media Thursday that the Pentagon demanded that Anthropic allow it to use AI “for all lawful purposes,” saying it was a “common-sense request.”
In response, Dario Amodei, Anthropic’s CEO, said the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.” Anthropic was prepared to lose its government contract and help the Pentagon transition to another company’s technology, he said.
Without a compromise, Hegseth has threatened to invoke the rarely used Defense Production Act to force Anthropic to work with the department on its terms, or to designate the company a supply chain threat and block it from doing business with the government.
The confrontation has created new divisions between Silicon Valley and Washington at a moment when the industry seemed in step with Trump’s tech-forward agenda, especially as Google, xAI and OpenAI are also involved in AI work with the Pentagon.
On Thursday, nearly 50 OpenAI employees and 175 Google employees published a letter calling on their leaders to “refuse the Department of War’s current demands.” More than 100 employees who work on Google’s AI technology expressed concern in another letter to company leaders about working with the Pentagon. Prominent technologists including Jeff Dean, a top Google executive, have also said they are concerned about how AI can be misused for surveillance.
(The New York Times has sued OpenAI and Microsoft, accusing them of copyright infringement of news content related to AI systems. The companies have denied those claims.)
A little over two years ago, AI safety and regulation were top concerns. At a global summit hosted in Britain by then-Prime Minister Rishi Sunak, the United States, China and 26 other countries signed a pledge to address some of the technology’s potential risks, such as giving hackers new attack methods and accelerating disinformation.
But as the AI race ramped up, the issue has faded as a priority. Last year, the Trump administration revoked safety policies imposed under President Joe Biden. Trump signed an executive order in December aimed at undercutting state laws that regulate AI. He has also lifted restrictions on exports of AI semiconductors, despite concerns that the components could help rivals like China.
The European Union, which passed far-reaching AI regulations in 2024, is now considering rolling some back. At the United Nations, a yearslong effort to ban certain AI weapons has been stalled by opposition from the United States, Russia and other countries.
On the battlefield, the war in Ukraine has ushered in an era of drone warfare that turned autonomous weapons from a futuristic possibility to a near-term reality.
“As AI becomes more powerful and more capable, the incentives to use it also become much stronger,” said Helen Toner, an AI policy expert at Georgetown University and former OpenAI board member. “At the same time, people’s appetite to talk about risks and how to solve them has gone down.”
Toner said the Anthropic-Pentagon dispute showed a fundamental disconnect. In Washington, officials view AI as a new tool that can be harnessed for specific goals. In Silicon Valley, creators of the technology see it becoming more like an “entity” with sophisticated reasoning that may behave in unexpected and dangerous ways without oversight and refinement, she said.
The fight between the Pentagon and Anthropic began Jan. 9, when Hegseth published a memo calling for AI companies to remove restrictions on their technologies.
“The time is now to accelerate AI integration, and we will put the full weight of the Department’s leadership, resources, and expanding corps of private sector partners into accelerating America’s Military AI Dominance,” he wrote.
Underpinning Hegseth’s strategy was a fundamental shift in military technology. Hardware is in an age of decline. Military contractors have struggled to deliver ships and fighter planes on time and on budget.
Software has become an increasingly powerful tool. Tech executives including Alex Karp, CEO of data analytics company Palantir, which works closely with the federal government, have argued that America’s competitive edge over adversaries will be found in its advances with software.
Anthropic has been a willing partner, providing the government with a special version of Claude that has fewer restrictions. Yet some in the Pentagon viewed the startup with suspicion. Its openness to talking about safety risks put off some in the department’s leadership, who have called the San Francisco company “woke.”
When talks between the Pentagon and Anthropic began over a $200 million contract for use of AI in classified systems, lawyers from both sides quietly traded emails over contract language, said two people involved in the discussions.
Anthropic asked for two things. The company said it was willing to loosen its restrictions on the technology, but wanted guardrails to stop its AI from being used for mass surveillance of Americans or deployed in autonomous weapons with no humans involved. Without those, Anthropic risks damaging its safety-first reputation.
“This is really about the power of the state to determine how AI is being deployed in the world versus companies,” said Robert Trager, co-director of Oxford University’s Martin AI Governance Initiative.
Cordula Droege, the chief lawyer for the International Committee of the Red Cross, which has called for global limits on AI weapons, said the violent risks of introducing swarms of autonomous weapons on battlefields are being lost in the wider debate.
“Throughout history, warfare goes in parallel with the development of technology,” she said.