Just sharing my work today. I’ve been at this for a few weeks, and it’s still taking shape. Multiple models, multiple contexts, moderate confusion. v.33, almost ready for publishing.
Autonomy and agentic behaviour overlap, but agentic AI is a qualitatively stronger and riskier capability when applied to weapons. What’s emerging looks more like systems fighting systems.
Autonomous systems follow pre-programmed rules: think heat-seeking missiles or patrol drones operating within fixed parameters.
Agentic systems go further: they perceive environments, reason about situations, formulate multi-step plans, and adapt strategies when conditions change. They don’t just execute tasks; they pursue goals with minimal human oversight.
This difference matters profoundly when we’re talking about weapons that can kill.
Five Key Insights
1. Agentic AI is Already on the Battlefield
This isn’t theoretical anymore. Ukraine is deploying drone swarms built on software from a company called Swarmer, in which strike drones coordinate attacks and determine their own timing and sequence without a direct human command for the final action. One Ukrainian officer reported that his unit had used the technology more than 100 times, testing swarms of up to 25 drones.
The U.S. National Geospatial-Intelligence Agency’s Maven program is processing imagery to automatically detect targets, reducing targeting timelines “from hours to minutes”. The Marine Corps, Army, and NATO have all adopted the Maven Smart System, and Pentagon contracts for it have been raised by nearly $800 million.
So what: We’ve crossed a threshold. These aren’t prototypes; they’re operational systems being deployed at scale.
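To make “determining their own timing and sequence” concrete, here’s a deliberately simplified sketch of decentralized strike sequencing. It’s my own toy construction, not Swarmer’s algorithm; the Drone fields and the ranking rule are assumptions. The point is that a shared deterministic rule lets every drone compute the same attack order locally, with no operator issuing it.

```python
# Hypothetical sketch of decentralized strike sequencing. NOT Swarmer's
# actual algorithm: the Drone fields and ranking rule are assumptions.
from dataclasses import dataclass

@dataclass
class Drone:
    drone_id: int
    distance_to_target_km: float
    battery: float  # fraction of charge remaining

def strike_order(swarm: list[Drone]) -> list[int]:
    """Closest, healthiest drones strike first; ties broken by id.
    Deterministic, so every drone computes the same order locally."""
    ranked = sorted(
        swarm,
        key=lambda d: (d.distance_to_target_km, -d.battery, d.drone_id),
    )
    return [d.drone_id for d in ranked]

swarm = [Drone(1, 4.2, 0.8), Drone(2, 2.9, 0.5), Drone(3, 2.9, 0.9)]
print(strike_order(swarm))  # [3, 2, 1] on every drone, with no command link
```

Real systems have to handle contested communications, re-planning, and target handoff, but the core idea survives: agreement without a command.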
2. The “Super-OODA Loop” Changes Everything
Military strategists talk about the OODA loop: Observe, Orient, Decide, Act. Agentic AI compresses this cycle from hours to minutes or seconds.
Militaries are using Maven to achieve “1,000 high-quality decisions, choosing and dismissing targets on the battlefield, in one hour”, roughly one targeting decision every 3.6 seconds. One Army unit matched the targeting output that took roughly 2,000 personnel during Operation Iraqi Freedom with just 20 people.
What this means: When decisions happen at machine speed, humans struggle to maintain meaningful oversight. The acceleration itself becomes a strategic capability and a profound risk.
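To see why this compression is structural rather than incremental, here’s a minimal sketch of the OODA loop as a software control loop. It’s illustrative only; the stage bodies are placeholders, not any fielded system’s logic. Once each stage is a function call, cycle time is bounded by compute and sensor latency, not by human deliberation.

```python
# Minimal sketch of the OODA loop as a software control loop. Illustrative
# only: the stage bodies are placeholders, not any fielded system's logic.
import time

def observe() -> dict:                  # ingest sensor feeds
    return {"tracks": ["contact-7"]}

def orient(obs: dict) -> dict:          # fuse observations into a situation model
    return {"threat": obs["tracks"][0]}

def decide(situation: dict) -> dict:    # pick a course of action
    return {"action": "flag", "target": situation["threat"]}

def act(decision: dict) -> None:        # execute it
    print(f"{decision['action']}: {decision['target']}")

start = time.perf_counter()
for _ in range(3):                      # three full cycles, no human pause
    act(decide(orient(observe())))
print(f"3 cycles in {time.perf_counter() - start:.4f}s")  # milliseconds, not hours
```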
3. The Accountability Gap is Real
When an autonomous weapon commits a war crime or kills civilians, who’s responsible? The programmer? The officer who deployed it? The manufacturer? The machine itself?
This “responsibility gap” challenges international humanitarian law, which assumes human agents make targeting decisions. The international debate has coalesced around “meaningful human control”: the principle that humans, not algorithms, should control lethal force.
But here’s the paradox: militaries adopt agentic systems precisely because they outpace human reaction times. Insisting on human decision-making negates the operational advantage. Yet removing humans entirely raises profound moral concerns.
The tension: We want the speed of machines and the judgment of humans, but we can’t have both.
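One way to see the trade-off is to sketch a human-on-the-loop veto gate. Everything here is hypothetical, not real doctrine: the 10-second window and the fire-by-default behaviour are assumptions I chose to make the tension visible. Any review window you grant the human is latency the adversary’s system doesn’t pay.

```python
# Hypothetical human-on-the-loop veto gate, not real doctrine. The 10-second
# window and fire-by-default behaviour are assumptions for illustration.
import queue
import threading

HUMAN_VETO_WINDOW_S = 10.0  # assumed review window

def engage_with_oversight(target: str, veto_channel: queue.Queue) -> str:
    try:
        # Block and wait for a human veto; this wait IS the lost speed.
        veto = veto_channel.get(timeout=HUMAN_VETO_WINDOW_S)
        return f"aborted by operator: {veto}"
    except queue.Empty:
        return f"engaged {target} after {HUMAN_VETO_WINDOW_S:.0f}s of silence"

channel: queue.Queue = queue.Queue()
# The operator objects 0.5s in; remove this line and the system
# fires by default once the window lapses.
threading.Timer(0.5, channel.put, args=["civilians near target"]).start()
print(engage_with_oversight("contact-7", channel))
```

Shrink the window and oversight becomes nominal; widen it and you surrender the speed advantage that justified the system in the first place.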
4. Major Powers Are Racing Ahead
DARPA’s Air Combat Evolution (ACE) program conducted the first AI-versus-human dogfights with an autonomous F-16 in September 2023, and has since progressed to tactical multi-ship, beyond-visual-range operations.
The Air Force’s Collaborative Combat Aircraft (CCA) program aims to field 1,000 “loyal wingman” drones: AI-driven jets that fly alongside crewed fighters or operate independently.
China is reportedly integrating DeepSeek models into autonomous military vehicles, drone swarms, robot dogs, and command centers, pursuing “algorithmic sovereignty” independent of Western technology.
The Pentagon’s Replicator initiative seeks to field “multiple thousands of attritable autonomous systems” across domains within 18-24 months.
This isn’t an arms race in the traditional sense. It’s a race to build the infrastructure for algorithmic warfare: the software platforms, decision architectures, and human-machine teaming doctrines that will define 21st-century conflict.
5. We’re Redesigning Command Itself
Agentic AI forces militaries to rethink command structures that have remained stable since Napoleon. The Pentagon established the Algorithmic Warfare Cross-Functional Team (Project Maven) in 2017, then consolidated efforts into the Chief Digital and Artificial Intelligence Office (CDAO) in 2022.
These aren’t just org-chart changes; they reflect recognition that agentic warfare requires new institutional structures, doctrines, and ways of thinking about leadership.
If AI agents become decision-makers within military hierarchies, what does “command” now mean?
The Companies Building the Infrastructure
While researching, I mapped the key platforms enabling multi-domain autonomous operations. This is just a small snapshot; I do a lot of systems and partnership mapping to trace where capabilities are heading.
Anduril’s Lattice: Software for “affordable mass,” enabling teams of robotic assets to work together under human supervision
Shield AI’s Hivemind: Autonomy software for GPS- and communications-denied environments, demonstrated on drones and F-16s
Palantir’s Maven Smart System: Data fusion across classification levels for targeting intelligence
Ghost Robotics: Quadrupedal “robot dogs” for security, reconnaissance, and combat support
These companies aren’t just building products; they’re building the operating systems for autonomous warfare. Anduril especially: I follow them on LinkedIn, and their narrative has shifted from building autonomous systems to building an autonomous ecosystem. Their posts show a coordinated push to own both the technology stack and the cultural story of agentic defence. Don’t tell them I said this.
Their partnerships make the trend even clearer. General Dynamics (combat vehicle radar Spark), Raytheon (HLG propulsion), and Overland AI (air-ground coordination) all point to real integration across air, land, and maritime autonomy layers. These aren’t testbed collaborations anymore; they’re building plug-and-play autonomy modules for live use. The Ghost Shark (undersea) and Barracuda (long-range precision fires) systems show that modularity and interoperability are now the product, not just the architecture.
The Escalation Risk
Research by RAND found that “the speed of autonomous systems did lead to inadvertent escalation in wargames”. Even the U.S. National Security Commission on AI acknowledged “unintended escalations may occur” when systems fail or interact in untested ways.
The problem intensifies with nuclear command and control. AI integration could accelerate warfare to the point where “commanders are likelier to trust computer readouts or judgements, and less likely to interrogate or reject them”.
Autonomous systems are designed to be unpredictable so they can stay ahead of enemy systems. That’s a useful feature, except when unpredictability + speed + lethal force = catastrophe.
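A toy feedback model makes the wargame finding intuitive. This is my own construction, not RAND’s methodology: two automated systems each set their posture from the other’s last move plus sensor noise, and the gain parameter stands in for reaction speed and aggressiveness. Past a threshold, noise compounds instead of damping out.

```python
# Toy escalation model, my construction rather than RAND's methodology.
# Two automated systems each set their posture from the other's last move
# plus sensor noise; "gain" stands in for reaction speed and aggressiveness.
import random

random.seed(0)

def final_posture(gain: float, steps: int = 20) -> float:
    a = b = 0.1                                    # initial postures, arbitrary units
    for _ in range(steps):
        noise = random.uniform(-0.05, 0.05)        # imperfect sensing
        a, b = gain * b + noise, gain * a + noise  # each reacts to the other
    return max(a, b)

print(f"damped, human-paced (gain 0.8): {final_posture(0.8):.2f}")
print(f"fast, agentic      (gain 1.2): {final_posture(1.2):.2f}")
```

With the damped gain the postures stay near zero; with the agentic gain they blow up from the same starting point and the same noise. That, compressed to a few lines, is inadvertent escalation.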
The International Response (Or Lack Thereof)
In December 2024, the UN General Assembly adopted a resolution on lethal autonomous weapons with 166 votes in favor, noting a potential “two-tiered approach to prohibit some systems while regulating others”. More than 30 countries have called for an outright ban.
But major military powers continue developing and deploying increasingly autonomous systems.
International law is moving at diplomatic speed while technology deploys at machine speed. The gap is widening.
This research raises more questions than it answers:
What governance frameworks could slow deployment without disadvantaging democratic militaries?
How do we design “meaningful human control” that preserves operational advantage while maintaining accountability?
What does algorithmic sovereignty mean for smaller nations and allies like Canada?
How will agentic AI reshape coalition warfare and NATO interoperability?
I’m also thinking about how autonomy changes defence economics when advantage comes from production speed, software integration, and adaptability rather than from expensive platforms.