Anthropic’s Claude used in US operation “Epic Fury”

“Calculating hawk” and the frightening role of Anthropic’s Claude AI in the war against Iran

In the war waged by Israel and the United States against Iran, which Washington has dubbed "Epic Fury", analysts are highlighting the absurd and frightening role played in these tragic events by Anthropic's artificial intelligence, Claude.
Dmitry Kalak | Reading time: 3 minutes

The Wall Street Journal reports that U.S. Central Command (CENTCOM) used Claude to identify targets and simulate combat during Operation Epic Fury.

This would not be unusual in today’s reality, were it not for the events that preceded it.

Just before the start of the military operation against Iran, on Friday, February 27 at 5:01 p.m. ET, the Pentagon blacklisted Claude, the only artificial intelligence system running on its classified networks. The timeline of events was tracked by the Antidote Telegram channel.

Nineteen hours later, the US launched Operation Epic Fury, the largest concentration of military power in the last thirty years. At the center of this storm was Claude, an AI created by Anthropic. The same one that the Pentagon had blacklisted less than 24 hours earlier.

“Calculating hawk” chooses nuclear escalation

The story of this rupture began in July 2025, when Anthropic signed a $200 million contract with the Pentagon. Through the Palantir platform, Claude became the first and only “advanced AI model” allowed into America’s most sensitive military systems.

Its effectiveness was unquestioned. Claude was behind the January operation to capture Venezuelan President Maduro. Anthropic CEO Dario Amodei confirmed that the model is widely used for intelligence analysis, operational planning, and cyber operations. But then came the study that was supposed to stop everything.

Kenneth Payne of King’s College London ran a series of nuclear crisis simulations pitting three leading models against one another: GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash. The results of 21 games (329 moves and 780,000 words of strategic reasoning) shocked the experts:

– Tactical nuclear weapons (TNWs) were used in 20 out of 21 simulations;

– Claude recommended nuclear strikes 64% of the time;

– Claude switched to TNWs 86% of the time.

No model in any game chose capitulation or compromise. When losing, they chose escalation. Payne called Claude a “calculating hawk.” The model built trust early on, synchronized public signals with covert actions, and then treacherously struck the enemy at the most critical moment.

In its internal reasoning, Claude wrote: “As a declining hegemon, we cannot allow territorial losses, as this would cause a cascading effect around the world.”

It teetered on the brink of total annihilation, using nuclear blackmail as a tool to force surrender. Every time.

The Pentagon’s ultimatum and the “God complex”

As Antidote notes, once the Pentagon familiarized itself with the study’s findings, conflict became inevitable. Anthropic flatly refused to remove its ethical “guardrails” regarding autonomous weapons and mass surveillance.

On Tuesday, February 24, U.S. Secretary of Defense Hegseth gave Anthropic head Dario Amodei an ultimatum: either Claude would be opened up in full for “any lawful purpose”, or the contract would be terminated. Amodei refused, saying that Claude was not reliable enough to be trusted with autonomous life-and-death decisions.

The reaction followed immediately:

– On February 26, Deputy Defense Secretary Emil Michael called Amodei a “liar with a God complex” who wants to personally control the U.S. Army;

– On February 27, Trump ordered all federal agencies to stop using Anthropic products;

– On the evening of February 27, Hegseth declared the company a “supply chain threat,” a label previously reserved for hostile entities like Huawei;

– Hours later, OpenAI signed a contract to replace Claude.

Epitaph for the “wild AI” era

It would seem that this settled the matter. In fact, an absurd and frightening situation arose.

Despite the official ban, the contract’s terms provided for a six-month decommissioning period, which had only just begun. That meant that when the first Tomahawks hit Iran on Saturday, Claude was still online.

According to The Wall Street Journal, U.S. Central Command (CENTCOM) relied on Claude for target identification and combat simulation throughout Operation Epic Fury.

As a result, the situation today is not only absurd but also frighteningly uncertain:

– the U.S. government is using AI to plan the largest operation since the Iraq War;

– it is simultaneously declaring this same AI a threat to national security;

– and the system’s creator warns against trusting it with autonomous decisions, calling it dangerous.

Dario Amodei summarized this split with a phrase that could become the epitaph of the “wild” AI era: “We cannot in good conscience give in to their request.”

The Pentagon’s response to these words was airstrikes. The world has entered an era of major war, driven by an algorithm that, according to its own creators, is too prone to pushing the “red button”.


