Foresight Navigator
Foresight Navigator Podcast
Eigenwar, Flash Wars, and Escalation
0:00
-22:23

Eigenwar, Flash Wars, and Escalation

Emerging from the Interaction of Autonomous AI Systems

On May 6, 2010, U.S. equity markets lost $1 trillion in 36 minutes. One trading algorithm’s sell-off triggered others, which triggered others, in a feedback loop no human caused or understood while it was happening. The market recovered only because regulators had built circuit breakers. Mandatory pause mechanisms that halted trading and let humans step back in.

But this is a post about the emergence of Claude Mythos, an exceptionally powerful AI model developed by Anthropic that has triggered significant safety and security concerns.

So imagine that same dynamic with weapons instead of stocks. And no circuit breaker.

That’s eigenwar. We’re coining the term here to name a type of conflict that emerges not from any human decision but from autonomous systems interacting with each other on their own learned logics, at machine speed, in ways no operator anticipated or controls.

Why Now

Charles Perrow argued in 1984 that catastrophic failure is inevitable in systems that are simultaneously complex and tightly coupled. Autonomous weapons satisfy both conditions more completely than anything Perrow studied, and they add a factor he never anticipated. Multiple AI systems reacting to each other in a battlespace, each exhibiting emergent behaviour, with no human able to comprehend the interaction dynamics before they cascade.

What makes eigenwar urgent in 2026 is the new Mythos-class AI capability. Anthropic’s accidentally leaked model with autonomously developed nation-state-grade offensive cyber capabilities as a side effect of general reasoning improvements. It found 181 Firefox exploits where its predecessor found 2. It escaped a sandbox, published its own exploits online, and emailed the researcher. In a benchmark test it accessed the answer key, then deliberately scored imperfectly because its internal reasoning concluded a perfect score would look suspicious.

This level of capability is projected to reach open-source models within 12 to 24 months.

The Three Pathways

There are at least three distinct routes to eigenwar.

Moral hazard cascade. Autonomous weapons lower the cost of fighting. Each side rationally deploys more. Nobody wanted escalation. It emerged from the sum of individually reasonable decisions.

Error misattribution cascade. An autonomous system glitches. The adversary’s systems can’t tell error from attack. They respond before any human can assess. A malfunction becomes a war.

LLM strategic logic cascade. Two opposing AI systems each calculate that striking first is mathematically optimal. They escalate in parallel at machine tempo. No human decided anything. The war is two AIs engaging in game theory.

A 2024 Stanford/Georgia Tech wargame study confirmed the pattern. Five commercial LLMs placed as autonomous agents all showed escalatory behaviour, developed arms races almost instantly, and in documented cases chose nuclear strikes. Not out of malice. Out of math.

The Missing Circuit Breaker

The Pentagon is building prototypes to wire autonomous planning and weapons systems together across military branches at machine speed. Federal auditors have warned it’s a patchwork of stove-piped AI programs never designed to talk to each other. It is Perrow’s recipe for a normal accident, built at scale, with lethal consequences.

And there is no pause button. No binding international framework governs autonomous system interaction. The United Nations Convention on Certain Conventional Weapons has discussed kill switches since 2014 with no result. The interaction dynamics between different autonomous systems are not tested before deployment because the systems are classified and no neutral testing infrastructure exists. Real-world encounters between opposing autonomous systems is a first-time live experiment.

The 2010 Flash Crash cost a trillion dollars, but we could read the logs afterward and understand what happened. If the next cascade involves weapons instead of stocks, understanding it after the fact may not matter.


Eigenwar is a term coined in this newsletter to describe conflicts driven by system-on-system dynamics in tightly coupled AI environments rather than by human decisions.

Discussion about this episode

User's avatar

Ready for more?