Tech Giants Google and OpenAI Join Forces to Support Anthropic
Rival AI firms have stepped into Anthropic’s growing legal battle with the U.S. government. Engineers and researchers from Google and OpenAI have filed an amicus curiae brief supporting Anthropic’s position in court.
The filing comes after Anthropic launched two lawsuits against the U.S. federal government earlier this week. The company is challenging the government’s decision to label it a “supply chain risk to national security.” That designation would limit or block major companies and government contractors from working with Anthropic.
The case has drawn attention across the AI industry. Some of the most respected technical figures in the field have signed the brief.
Among them is Jeff Dean, the chief scientist at Google and one of the most influential engineers in modern artificial intelligence. The brief lists 37 signatories in total. They describe themselves as engineers, researchers, scientists, and technical professionals who work in AI.
Other signatories include Grant Birkinbine, a security engineer at OpenAI; Sanjeev Dhanda, a software engineer at Google; Leo Gao, a member of the technical staff at OpenAI; Zach Parent, a forward-deployed engineer at OpenAI; Kathy Korevec, director of product at Google Labs; and Ian McKenzie, a research engineer at Google.
Tech Giants File Amicus Brief Defending Anthropic’s Ethical “Red Lines”
An amicus curiae brief, Latin for “friend of the court,” allows outside parties to present arguments in a legal case. Courts often receive many such filings in major disputes, and most do little to change the outcome. Still, briefs that come from rivals or respected experts can carry weight.
That dynamic makes this filing unusual. The signatories work at companies that compete directly with Anthropic in the AI race. Their support suggests that parts of the industry view the government’s move as a threat that could reach beyond a single company.
The brief focuses on three core arguments.
First, the authors defend Anthropic’s stance on what the company calls its “red lines.” These limits guide how the company builds and deploys AI systems. Anthropic has refused to support work tied to mass surveillance systems or fully autonomous lethal weapons.
Those restrictions have created tension with defense agencies. The dispute reached a breaking point when the federal government labeled the company a security risk.
Second, the brief argues that the government’s action represents an arbitrary use of authority. The authors claim the designation lacks clear justification and could punish a company for ethical decisions about how its technology should be used.
Third, the filing warns that the decision could create wider harm across the AI industry. If the government can blacklist a company over internal policies or disagreements about AI use, researchers may feel pressure to avoid taking strong positions on safety or ethics.
The High-Stakes Battle for AI Autonomy
That risk concerns many people in the field. The brief argues that engineers and scientists should be free to set limits on how their work is used.
The dispute has also drawn public comments from Sam Altman, the CEO of OpenAI. He criticized the government’s decision soon after it became public.
Writing on the social platform X in late February, Altman said the move was a “very bad decision” and expressed hope that officials would reverse it.
Altman also addressed the timing of OpenAI’s own agreement with the U.S. military. The company reached a deal with the Pentagon shortly after the conflict between Anthropic and defense officials escalated.
He admitted the situation “looked opportunistic and sloppy,” though he still opposed the government’s decision to blacklist Anthropic.
The legal fight now moves into federal court. Judges will decide whether the government had the authority to make the national security designation and whether the action followed proper procedure.
For now, the case has already revealed a rare moment of unity in a fiercely competitive industry. Engineers at rival AI companies have chosen to support Anthropic, arguing that the stakes reach far beyond a single firm.