Google, OpenAI employees revolt over military AI deals after U.S. strikes on Iran

New Delhi: Employees at several major artificial intelligence companies are pushing back against the use of their technology in military programmes. Hundreds of workers from companies including Google and OpenAI have signed open letters demanding stricter limits on how their employers engage with defence agencies. The internal protests intensified after recent U.S. strikes on Iran and growing scrutiny of AI partnerships with the Pentagon.

The letters reflect growing concern within the tech industry about the use of state-of-the-art AI systems in warfare, surveillance and autonomous weaponry. Workers argue that companies building powerful AI applications should set clear ethical limits before agreeing to defence work. The debate also highlights a tension between national security demands and the ethical responsibilities of technology firms.

Open letter gains momentum across companies

One of the letters, titled “We Will Not Be Divided,” quickly gained traction among employees. The petition reportedly grew from a few hundred signatures to nearly 900 within days. Around 800 of the signatories are from Google, while nearly 100 are employees of OpenAI.

The letter criticises the U.S. Department of Defense for allegedly pressuring companies individually to cooperate on military AI projects. Workers argue that this strategy could weaken resistance by isolating firms rather than addressing concerns across the industry. Signatories warn that divisions among technology companies could make it easier for defence agencies to push controversial AI uses.

Pentagon decision on Anthropic fuels backlash

The controversy intensified after the Pentagon reportedly designated Anthropic as a “supply chain risk,” a move that effectively blocks the company’s models from federal use. The decision followed reports that Anthropic refused to allow its technology to be used for mass surveillance or fully autonomous weapons systems.

Industry employees say the move raises questions about whether companies could face penalties for declining certain military applications. Another open letter signed by workers from companies including Salesforce, Databricks and IBM urged the Defense Department to reconsider the designation.

Google faces renewed scrutiny over defence ties

The issue has placed Google under renewed attention. Reports suggest the company is in discussions with the Pentagon about deploying its Gemini AI model in classified government systems. The development has revived memories of the backlash against Project Maven in 2018, when employee protests forced Google to step away from a drone-analysis contract with the military.

More than 100 Google employees working in AI have reportedly signed a separate internal letter asking the company to adopt clear restrictions similar to those proposed by Anthropic. They specifically urged leadership to reject AI uses connected to mass surveillance or autonomous weapons.

Concerns have also been raised by advocacy groups such as No Tech For Apartheid, which has urged major cloud providers to reject defence contracts that could enable harmful surveillance or military applications. The coalition has pointed to potential deployments of Google’s Gemini model in classified environments as a key example.

The growing debate highlights how artificial intelligence is becoming central to modern defence strategies. As governments seek access to powerful AI tools, tech companies are increasingly facing pressure from their own employees to set clear ethical limits on how those systems are used.
