Every significant technology has a moment when it stops being theoretical and starts being deadly. For artificial intelligence, that moment seems to have arrived in the early hours of February 28, 2026, somewhere over Iranian airspace. Iran’s Supreme Leader, Ayatollah Ali Khamenei, was killed when the United States and Israel struck more than 900 targets before dawn, igniting an ongoing conflict. Behind the missiles and the headlines, something quieter and possibly more significant was at work: an algorithm.
The Maven Smart System, developed by Palantir and descended from a Pentagon program launched in 2017, handles most of the heavy lifting. Project Maven began modestly, using computer vision to analyze drone footage and help analysts sift through more hours of imagery than they could ever review manually. By 2026, it could fuse satellite data and intercepted communications, rank target lists, recommend weapons, and generate automated legal assessments before a human officer finished a second cup of coffee. During the 2003 invasion of Iraq, a 2,000-person intelligence unit oversaw target identification. In Operation Epic Fury, about 20 soldiers handle the same workload, a hundredfold reduction that is genuinely hard to comprehend.
**Operation & Technology Profile: Pentagon AI Arsenal**

| Field | Details |
|---|---|
| Operation Name | Operation Epic Fury |
| Launch Date | February 28, 2026 |
| Conflict | US–Iran War (2026) |
| Targets Struck (First 3 Days) | 1,250+ |
| Total Targets Struck (as of Apr 9) | 13,000+ |
| Core AI Platform | Maven Smart System (MSS) |
| Platform Developer | Palantir Technologies |
| Project Origin | Project Maven — Pentagon initiative, 2017 |
| AI Language Model Integrated | Anthropic’s Claude (Anthropic blacklisted by DoD, March 4, 2026) |
| New Drone Platform | LUCAS (Low-Cost Unmanned Combat Attack System) |
| Cost per LUCAS Drone | $35,000 (vs. ~$1.9M per Tomahawk missile) |
| Intelligence Staffing Comparison | 2003 Iraq: 2,000 troops; 2026 Iran: 20 troops |
| Doctrine Applied | AI-First Doctrine / “Maximum Lethality” |
| Previous Testing Ground | Russia-Ukraine War (Project Maven laboratory phase) |
Until recently, this targeting infrastructure was integrated with Anthropic’s Claude, the same large language model used to draft emails and summarize meeting notes, here translating dense intelligence reports into plain language for field officers. There is something jarring about consumer AI, the kind sold to office workers and students, processing the data that determines where a strike lands. On March 4, the Department of Defense blacklisted Anthropic entirely, citing supply chain risks to national security, the first time the US government has taken such a step against an American company. According to reports, Anthropic had refused to allow its tools to be used for fully autonomous weapons or broad domestic surveillance. That refusal, it turns out, carried a price.
Alongside the software, a new physical weapon made its combat debut. LUCAS drones, loitering munitions that cost $35,000 apiece, roughly a fiftieth of a single Tomahawk cruise missile, hovered over target areas before diving precisely onto their marks. The military calls the deployment strategy “affordable mass”: flooding defenses with large numbers of cheap, AI-guided weapons rather than relying on expensive precision munitions. The tactic, ironically, comes from Iran’s own playbook with its Shahed drones. The United States watched, adapted, and scaled it.
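A rough back-of-envelope check on that “fiftieth” figure, using the unit costs cited in the table above (and taking the ~$1.9M Tomahawk price as the approximation reported there):

$$
\frac{\$1{,}900{,}000 \ \text{per Tomahawk}}{\$35{,}000 \ \text{per LUCAS}} \approx 54
$$

In budgetary terms, one Tomahawk buys roughly fifty LUCAS airframes, and that ratio, not any individual drone’s capability, is the entire logic of affordable mass.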

The cost buried beneath the operational data is harder to quantify. The system’s military effectiveness comes from its speed and volume, and those same qualities shrink the window for human judgment in genuinely uncomfortable ways. Decision compression, the military’s term for collapsing what once took days into minutes, looks efficient on paper. In practice it means fewer chances to catch a mistake: less time to pause before a strike, less opportunity for a human analyst to notice something the algorithm missed. How many civilians have died under Operation Epic Fury’s accelerated targeting remains unknown. The accounting, if it happens at all, will come later.
Watching this unfold, it is hard not to feel that an irreversible decision is being made, not in a policy document or a hearing room, but on active battlefields, under operational pressure, with little public visibility. The Pentagon’s AI-First doctrine presents algorithmic superiority as a geostrategic necessity, and it may be one. But machine-driven conflicts, in which human oversight shrinks to something close to ceremony, are a different kind of conflict. Not better or worse by definition; different in ways that may take years to fully understand.
