Tag Archives: AI use in War

The Price of Political Obedience: the Yes Men are not Revolting Yet

Co-founder Dario Amodei has made safety and social responsibility central to Anthropic’s approach to AI. Usage restrictions governing its contract with the Pentagon stipulate that its AI cannot be used for domestic mass surveillance or fully autonomous weapons. The Pentagon, which objects to outside limits on what its troops can do, wants unrestricted access for all lawful purposes… If Anthropic was too inflexible, the Pentagon could have simply terminated the contract. But Defense Secretary Pete Hegseth went further, declaring on X that “no contractor, supplier, or partner that does business with the United States military may conduct any commercial activity with Anthropic.” Such a declaration, according to commentators, amounted to “corporate murder.” The message sent to every investor and corporation in America: do business on our terms, or we will end your business.

Excerpt from Greg Ip, Anthropic’s Pentagon Battle Matters to Every Business, WSJ, Mar. 13, 2026

If You Play with Fire, You’ll Get Burnt: Lessons from Anthropic

Anthropic’s artificial-intelligence tool Claude was used in the U.S. military’s operation to capture former Venezuelan President Nicolás Maduro on January 3, 2026, highlighting how AI models are gaining traction in the Pentagon. The mission to capture Maduro and his wife included bombing several sites in Caracas in January 2026. Anthropic’s usage guidelines prohibit Claude from being used to facilitate violence, develop weapons, or conduct surveillance. The deployment of Claude occurred through Anthropic’s partnership with data company Palantir, whose tools are commonly used by the Defense Department and federal law enforcement.

Excerpt from Pentagon Used Anthropic’s Claude in Maduro Venezuela Raid, WSJ, Feb. 15, 2026

See also Trump orders government to stop using Anthropic in battle over AI use, WSJ, Feb. 27, 2026