
The mission included the bombing of several sites in Caracas. Using the model for such purposes contradicts Anthropic's public policy: the company's rules explicitly forbid the use of its AI for violence, weapons development, or surveillance, the WSJ writes, citing sources.
"We cannot comment on whether Claude or any other model was used in a particular operation, covert or otherwise. Any use of LLMs, whether in the private sector or by government agencies, must comply with our policies governing how our models are deployed. We are working closely with partners to ensure compliance," said an Anthropic spokesperson.
Claude's deployment within the Department of Defense was made possible by Anthropic's partnership with Palantir Technologies, whose software is widely used by the military and federal law enforcement agencies.
After the raid, an Anthropic employee asked a colleague at Palantir what role exactly the model had played in the operation to capture Maduro, the WSJ writes. A company spokesperson said Anthropic had not discussed the use of its models in specific missions "with any partners, including Palantir," and had limited itself to technical matters.
“Anthropic is committed to using advanced AI in support of U.S. national security,” the spokesperson added.
Anthropic vs. the Pentagon?
Pentagon spokesman Sean Parnell announced a review of the relationship with the AI lab.
“Our country needs partners willing to help warfighters win any war,” he said.
In July 2025, the U.S. Department of Defense awarded contracts worth up to $200 million to Anthropic, Google, OpenAI, and xAI to develop AI solutions for national security. The department's Chief Digital and Artificial Intelligence Office planned to use their work to build agentic AI systems.
However, as early as January 2026, the WSJ reported that the agreement with Anthropic was at risk of falling apart. The disagreement stemmed from the startup's strict ethics policy: its rules prohibit using Claude for mass surveillance and autonomous lethal operations, which limits how agencies such as ICE and the FBI can use the model.
Officials' displeasure intensified amid the integration of the Grok chatbot into the Pentagon's networks. Defense Secretary Pete Hegseth, commenting on the partnership with xAI, stressed that the department "will not use models that don't allow wars to be fought."
Pressure on developers
Axios, citing sources, wrote that the Pentagon is pressuring the four major AI companies to allow the U.S. military to use their technology for "all legitimate purposes," including weapons development, intelligence gathering, and combat operations.
Anthropic refuses to lift its restrictions on surveillance of U.S. citizens and on building fully autonomous weapons. Negotiations have stalled, but Claude is hard to replace quickly because of the model's technical superiority in specific government missions.
In addition to Anthropic's chatbot, the Pentagon uses OpenAI's ChatGPT, Google's Gemini, and xAI's Grok for unclassified tasks. All three companies have agreed to relax, for the military, the restrictions that apply to ordinary users.
Discussions are now under way about moving the LLMs to classified networks and using them "for all legitimate purposes." One of the three companies has already agreed to do so, while the other two are "being more flexible" than Anthropic.