Somewhere in the hours before the first strikes on Iranian targets in early March 2026, a large language model was processing battlefield data, synthesizing intelligence feeds, and running scenarios that would have taken a room full of analysts the better part of a day. The model wasn’t a classified government system built over decades in some secure facility in Virginia. It was Claude — made by Anthropic, a San Francisco AI company — running inside military networks, helping commanders decide where to hit and when.
The fact that President Trump had, around the same time, ordered federal agencies to begin phasing out Anthropic’s technology made the whole situation feel less like a policy decision and more like a window into how fast things are actually moving on the ground.
| Field | Details |
|---|---|
| Operation Name | Operation Epic Fury |
| Launched by | U.S. Central Command (CENTCOM) |
| Authorized by | President of the United States |
| Primary Objective | Dismantle Iranian regime’s security apparatus |
| Commenced | March 2026 |
| Commanding Officer | U.S. Navy Admiral Brad Cooper |
| AI Systems Used | Maven Smart System; Claude (Anthropic LLM) |
| Key Industry Partners | Anthropic, Palantir Technologies |
| Platform Deployed | GenAI.mil (secure military AI platform) |
| Notable Incident | Loss of U.S. KC-135 over Iraq, March 12, 2026; four crew members killed |
| Parallel Israeli Operation | Operation Roaring Lion |
| Reference Website | U.S. Central Command – Official |
Operation Epic Fury, launched by U.S. Central Command at presidential direction, was framed publicly as a campaign to dismantle the Iranian regime’s security apparatus, targeting locations deemed to pose imminent threats. The official language was tight and deliberate, as military language tends to be. But the details leaking out around the edges told a more complicated story — one about an institution moving faster than its own oversight structures, deploying tools that its civilian leadership was simultaneously trying to pull back, and doing so in an actual war zone where the consequences of getting something wrong are measured in lives rather than quarterly earnings.
According to reporting in the Wall Street Journal and elsewhere, CENTCOM used the Maven Smart System alongside Claude to process sensor data in real time, compressing what military planners call the “sensor-to-commander timeline.”
That phrase sounds technical and dry until you think about what it actually means: the gap between seeing something on a battlefield and deciding to destroy it is getting shorter, and an algorithm is doing much of the compression. Admiral Brad Cooper, the CENTCOM commander, was open enough about AI’s role to address it publicly in late March, which is itself notable. Senior military commanders don’t typically volunteer details about intelligence methods unless they want the message heard.
The tension between operational utility and political direction is genuinely hard to untangle here. Commanders in the field reportedly resisted an immediate cutoff of Claude because the system was already embedded in mission-critical workflows — woven into targeting processes through integrations with firms like Palantir, which has spent years building commercial AI into secure military infrastructure. Pulling it out mid-operation wasn’t a software update.
It was more like removing load-bearing walls from a building while people were still inside. That’s not a justification so much as an observation about how deeply these systems had already taken root before anyone fully debated whether they should.
There’s a sense, watching all of this from the outside, that the ethical conversation about AI in warfare has been running about three years behind the operational reality. The Department of War, as the Pentagon had been rebranded, had already formalized an “AI-first” strategy before Epic Fury began, directing the armed forces to make AI foundational to how they fight, gather intelligence, and plan campaigns.
Seven so-called “pace-setting projects” were outlined, ranging from tactical drone swarm coordination to AI-augmented battle management. GenAI.mil, a secure platform designed to push generative AI tools to millions of service members, was already live. The policy infrastructure for an AI-enabled military had been built. Epic Fury was, in some ways, just the first significant stress test.
One particular detail from the operation is difficult to set aside. Reports surfaced that the Maven Smart System — part of the AI targeting apparatus — flagged a school in the Iranian city of Minab as a potential strike location. It’s still unclear whether that designation was acted upon, overridden, or simply one data point in a larger analysis. But the image itself — an algorithm identifying a school as a military target — is the kind of thing that doesn’t stay abstract for long.
It forces a question that the military’s AI-first strategy doesn’t fully answer: when an AI system makes a recommendation that turns out to be wrong, or worse, to amount to a war crime, who is accountable? The trail is genuinely murky. Human judgment was supposed to remain in the loop. Whether it always did during Epic Fury is not something any official statement has confirmed.
Anthropic’s position throughout has been notably uncomfortable. The company had initially been an approved provider for classified military use, but Pentagon officials reportedly pushed for broader applications, particularly autonomous weapons and mass surveillance, that Anthropic’s own safety guidelines explicitly forbade.
Officials threatened contract cancellations and floated the possibility of labeling the company a supply chain risk. Inside tech companies, employees circulated petitions. The familiar Silicon Valley anxiety about military contracts, which had surfaced years earlier over Google’s Project Maven and Microsoft’s Army headset deal, returned with new intensity. None of it stopped the operational use during the conflict itself.
It’s hard not to notice that this is precisely the scenario that AI ethics researchers had been describing as a risk for years — not a dramatic, science fiction moment of a robot making a unilateral decision, but something quieter and more bureaucratic: a technology so useful, so deeply integrated, and so operationally embedded that removing it becomes politically and practically harder than keeping it. The rules of war are written for humans making decisions.
They weren’t written for systems that compress a targeting cycle to minutes and operate across more data streams than any individual commander can personally review. Updating those rules — through international law, rules of engagement, and accountability frameworks — is the work that now needs to catch up with what happened over Iran in March 2026. Whether it does, and how quickly, will matter considerably the next time a commander has a screen in front of them and a model waiting for input.
