Tech workers demand limits after Iran strikes and Pentagon blacklists Anthropic
MARKET INSIDER – As U.S. military strikes on Iran intensify, a parallel battle is unfolding inside Silicon Valley. Hundreds of employees at Google, OpenAI, and other leading AI firms are calling for strict limits on how their technologies are used by the Pentagon—reviving a years-old internal conflict over artificial intelligence and warfare.
An open letter titled “We Will Not Be Divided” quickly grew to nearly 900 signatories, including employees from Google and OpenAI, after the U.S. Department of Defense blacklisted Anthropic as a “supply chain risk.” The move followed Anthropic’s refusal to allow its models to be used for mass surveillance or fully autonomous weapons. Workers argue that government pressure risks fragmenting tech companies and eroding ethical guardrails just as AI capabilities accelerate.
The backlash comes amid reports that Google is in discussions with the Pentagon to deploy its Gemini AI model in classified environments—echoing previous controversies such as Project Maven, a drone-footage analysis contract whose 2018 employee protests pushed Google to let the deal lapse. At stake is not only reputational risk but the strategic direction of frontier AI development: whether leading models become embedded in defense infrastructure or remain constrained by corporate ethics frameworks.
Activist coalition No Tech For Apartheid has also urged cloud giants—including Google, Amazon, and Microsoft—to reject Defense Department terms that could enable mass surveillance or abusive uses of AI. Internal dissent reportedly surfaced again within Google, where more than 100 AI-focused employees expressed concerns to leadership about expanding military contracts. Jeff Dean, Google’s chief scientist, publicly warned that mass surveillance undermines constitutional protections and is vulnerable to political misuse.
The Pentagon’s decision to designate Anthropic a risk has amplified fears of retaliation against firms that refuse military demands. A separate letter signed by hundreds of tech workers called on Congress to scrutinize the use of national security authorities against private AI companies. The broader tension reflects a deeper shift: artificial intelligence is no longer merely a commercial race but a geopolitical asset central to defense strategy.
For global investors and policymakers, this is more than a workplace dispute. AI infrastructure now intersects directly with national security, export controls, and sovereign digital capability. If employee resistance constrains partnerships between Big Tech and the U.S. military, it could reshape competitive dynamics with China and alter the pace at which advanced AI systems integrate into defense networks.
Silicon Valley once debated whether AI should recommend ads or optimize logistics. Now the question is whether it should guide battlefield decisions. As governments escalate military operations and companies race to ship frontier models, the defining contest may not be AI versus adversaries—but engineers versus their own employers.