Google and OpenAI Employees Back Anthropic’s Defense Stance
Anthropic has reached a standoff with the United States Department of War over the military’s request for broad access to the company’s AI systems. As a Friday deadline set by the Pentagon approaches, employees across the AI industry have stepped into the debate. More than 300 Google workers and over 60 OpenAI employees have signed an open letter urging their leaders to support Anthropic and reject the government’s demands.
At the center of the dispute lies a clear disagreement about how artificial intelligence should be used. Anthropic has refused to allow its technology to support domestic mass surveillance or fully autonomous weapons. The company argues that these uses cross ethical and legal boundaries that should not shift under pressure.
The open letter reflects growing concern among technical staff inside major AI firms. Its signatories ask company executives to stand together rather than negotiate separately with the government. The letter warns that dividing companies through pressure tactics could weaken shared safeguards.
“They’re trying to divide each company with fear that the other will give in,” the letter states. “That strategy only works if none of us know where the others stand.”
Tech Giants Resisting the Pentagon’s “Supply Chain” Demands
Employees who signed the letter want Google and OpenAI leadership to uphold the same limits Anthropic has drawn. They argue that consistent standards across companies matter because military contracts often shape how technology develops. If one company accepts unrestricted terms, others may face pressure to follow.
So far, executives at Google and OpenAI have not issued formal responses. Still, public comments suggest sympathy for Anthropic’s position. OpenAI CEO Sam Altman said in a CNBC interview that he does not believe the Pentagon should threaten companies under the Defense Production Act. An OpenAI spokesperson later confirmed that the company shares Anthropic’s opposition to mass surveillance and autonomous weapons.
Individual voices inside Google have also spoken out. Jeff Dean, the chief scientist at Google DeepMind, wrote on social media that “surveillance at this scale undermines the Constitution” and “creates opportunities for political persecution/discrimination.”
The Pentagon already uses a number of commercial AI systems for unclassified tasks, and reports indicate the military has weighed expanding its partnerships with the industry's largest players to cover classified work. Anthropic itself has an existing partnership with the Department of Defense, though under far more restricted terms than those now under consideration.
Defense Secretary Pete Hegseth has taken a hardline stance, warning Anthropic that if it does not comply, the Pentagon will designate the company a "supply chain risk" and may invoke the Defense Production Act, a law that allows the government to direct the private sector in the name of national security.
OpenAI, Google, and Anthropic Face a Federal Crossroads
In a statement, Anthropic CEO Dario Amodei responded to these threats by pointing out an apparent inconsistency: the government cannot claim the company's technology poses a national security risk while also claiming it is vital to the nation's defense.
Amodei reiterated the company's commitment to refusing any use of its technology to build fully autonomous weapons systems or to enable mass surveillance within the country.
The case raises a broader question for the AI industry: who sets the boundaries on how the technology is used? The government argues that national security requires access to it. Many engineers and researchers counter that certain applications, such as domestic mass surveillance and autonomous weapons, pose risks to that same security that outweigh the possible benefits.
No resolution has been reached. The Friday deadline set by the Pentagon adds pressure to the situation, but the open letter from Google and OpenAI employees shows that resistance to the government's demands is widespread: workers appear willing to stand together even though their companies compete.
The standoff may shape how AI firms deal with governments in the future. If the firms stay united, they could retain the power to negotiate how their technologies are used; if they split, governments may gain the leverage to dictate those terms company by company.
As the deadline approaches, both sides face a choice between collaboration and conflict. The outcome is likely to set a precedent for how democratic nations balance military necessity against civil liberties in the era of artificial intelligence.