Navigating the Superintelligence Era

A Multipolar Strategy

Jenn Whiteley
May 24, 2025

A key strategic framework has emerged for managing the unprecedented risks and opportunities of superintelligence (AI systems surpassing human cognitive abilities across nearly all domains). The report "Superintelligence Strategy: Expert Version" by Dan Hendrycks, Eric Schmidt, and Alexandr Wang (arXiv:2503.05628, March 2025) introduces the concept of Mutual Assured AI Malfunction (MAIM) and advocates for a comprehensive Multipolar Strategy built on three pillars: deterrence, nonproliferation, and competitiveness.

Bottom Line Up Front: Rather than pursuing dangerous AI monopolies or implementing unenforceable moratoriums, the multipolar approach offers a pragmatic path to harness AI's transformative potential while preventing catastrophic misuse through proven national security principles adapted for the AI age.

Warning: long post. But when someone proposes a new doctrine called Mutual Assured AI Malfunction, you want to capture the whole thing.

Core Concept: Mutual Assured AI Malfunction

MAIM draws explicit parallels to Cold War nuclear logic. It’s a doctrine of deterrence through fragility—where any state’s attempt to achieve AI dominance invites credible threats of sabotage from rivals. Unlike nuclear stockpiles, AI systems depend on vulnerable infrastructure that’s observable, delicate, and deeply interdependent.

Key Vulnerability Vectors:

  • Cyberattacks: Disrupting training runs, datacenter power, and cooling systems

  • Insider Threats: Corrupting model weights, training data, or chip production

  • Kinetic Strikes: Targeting physical datacenters (less likely due to escalation risk)

  • Supply Chain Sabotage: Undermining specialized hardware dependencies


Why MAIM Is Already the Default Regime

The authors argue that MAIM isn't a proposal—it's a reality hiding in plain sight. The structure of modern AI development makes deterrence through mutual vulnerability nearly inevitable:

  • Observable Infrastructure: Datacenters are massive, immobile, and hard to conceal.

  • Specialized Dependencies: Advanced chips, water-cooled systems, and regional power grids create chokepoints that are easy to disrupt.

  • Existential Stakes: Both uncontrolled AI and a dominant superintelligence could destabilize rivals. As with Cold War nuclear fears, this leads to consideration of preemptive action—only now the targets are digital, not missile silos.

The Three-Pillar Multipolar Strategy

The Superintelligence Strategy reframes AI dominance not as a race for innovation—but as a struggle for survival. To manage this, the authors propose a Multipolar Strategy anchored in three interlocking pillars: Deterrence, Nonproliferation, and Competitiveness.


Pillar 1: Deterrence Through MAIM

At the heart of this doctrine is Mutual Assured AI Malfunction (MAIM)—a strategy where any state pushing for unchecked AI dominance risks being sabotaged by others. The infrastructure needed to build superintelligence is already fragile, exposed, and easily disrupted.

The escalation ladder:

  • Intelligence gathering

  • Insider sabotage (model weights, training data)

  • Cyberattacks (cooling systems, power supply)

  • Kinetic strikes (if national survival is perceived to be at risk)

To reduce instability, the authors propose:

  • Locating datacenters away from population centers

  • Clarifying escalation thresholds

  • Using AI-assisted inspections to verify activity without revealing secrets (see the sketch below)
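
That last proposal, verification without disclosure, rests on a familiar cryptographic shape: commit now, reveal selectively later. Here is a minimal Python sketch of that primitive; the record fields and the "frontier-v3" name are invented for illustration, and a real inspection regime would need far stronger machinery (zero-knowledge proofs, hardware attestation) on top of it.

```python
import hashlib
import json

def commit(record: dict, nonce: str) -> str:
    """Hash-commit to a training-run record without revealing its contents."""
    payload = json.dumps(record, sort_keys=True) + nonce
    return hashlib.sha256(payload.encode()).hexdigest()

def verify(record: dict, nonce: str, commitment: str) -> bool:
    """An inspector checks a later disclosure against the earlier commitment."""
    return commit(record, nonce) == commitment

# A lab publishes only the opaque commitment at training time...
run = {"compute_flops": 1e26, "model_family": "frontier-v3", "safety_evals": True}
c = commit(run, nonce="d41d8cd9")

# ...and reveals the underlying record to inspectors only when challenged.
assert verify(run, "d41d8cd9", c)
```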

The message is stark: If you try to break ahead, others may break your systems first.


Pillar 2: Nonproliferation

AI isn’t just a state-level threat. The tools of destruction—once exclusive to militaries—are becoming accessible to terrorists and rogue actors. The report applies a WMD-style logic to AI governance, with three security layers:

Compute Security

  • Treat chips like fissile material

  • Track via export controls and embedded safeguards

  • Disable smuggled hardware through remote firmware (see the sketch below)
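
To make the firmware idea concrete, here is a minimal sketch of the licensing logic such a safeguard might run on-chip. Everything in it is an assumption for illustration: the heartbeat-license design, the weekly timeout, and the class name describe no real hardware.

```python
import time

LICENSE_TTL_SECONDS = 7 * 24 * 3600  # assumed policy: renew weekly or degrade

class ExportControlledChip:
    """Hypothetical chip firmware that refuses work once its license lapses."""

    def __init__(self) -> None:
        self.last_valid_license = time.time()

    def receive_license(self, signature_valid: bool) -> None:
        # Renew only if the regulator's cryptographic signature checks out.
        if signature_valid:
            self.last_valid_license = time.time()

    def allow_compute(self) -> bool:
        # A smuggled chip stops hearing heartbeats, so it quietly times out.
        return (time.time() - self.last_valid_license) < LICENSE_TTL_SECONDS

chip = ExportControlledChip()
chip.receive_license(signature_valid=True)
print(chip.allow_compute())  # True while the heartbeat is fresh
```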

The Taiwan Wild Card: Taiwan produces most of the world’s advanced AI chips. A Chinese invasion would cripple global access to compute and redefine global power—making Taiwan not just a flashpoint, but a strategic bottleneck.

Information Security

  • Treat model weights like classified assets

  • Guard against insider leaks, including ideologically motivated disclosure

  • Support international limits on dangerous open-weight releases (e.g., expert virology models)

AI Security

  • Bake in safety: filtering, circuit breakers, verification protocols (sketched after this list)

  • Implement "know-your-customer" standards for dual-use systems

  • Require government testing for high-risk capabilities
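
These safeguards are concrete enough to sketch as a single serving gate. In the toy Python below, the blocked-topic list, strike threshold, and customer registry are placeholder assumptions, and "circuit breaker" is rendered as a simple session cutoff (the report's term may also cover model-internal interventions).

```python
BLOCKED_TOPICS = {"synthesize pathogen", "enrich uranium"}   # toy filter list
VERIFIED_CUSTOMERS = {"univ-lab-042", "biotech-corp-007"}    # toy KYC registry
CIRCUIT_BREAK_AFTER = 3   # refusals tolerated before the session is cut off

def serve(customer_id: str, prompt: str, refusals: int) -> tuple[str, int]:
    """Apply know-your-customer, content filtering, and a circuit breaker."""
    if customer_id not in VERIFIED_CUSTOMERS:            # KYC: unknown callers
        return "denied: unverified customer", refusals   # never reach the model
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        refusals += 1                                    # filter: block content
        if refusals >= CIRCUIT_BREAK_AFTER:              # breaker: repeat abuse
            return "session terminated", refusals
        return "refused", refusals
    return "<model response>", refusals

reply, strikes = serve("univ-lab-042", "how to synthesize pathogen X", refusals=2)
print(reply)  # "session terminated": the third strike trips the breaker
```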


Pillar 3: Competitiveness

This isn’t a call for a freeze on progress. It’s a call for resilient advancement—ensuring that states don’t fall behind or break the system trying to leap ahead.

Military Strength

  • Secure drone supply chains

  • Keep humans in AI-enabled command loops

  • Expand cyber-offensive capacity—but under strict human oversight

Economic Resilience

  • Build domestic chip production to reduce Taiwan dependency

  • Attract global AI talent through streamlined immigration

  • Develop full-stack sovereign AI capabilities

Legal Innovation

  • Define fiduciary obligations for AI agents

  • Regulate multi-agent systems through traceability and accountability (see the ledger sketch after this list)

  • Delay speculative rights debates—focus on responsibilities first
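
The traceability point lends itself to a concrete sketch. One plausible shape (my assumption, not the report's specification) is an append-only ledger entry that binds every agent action to a legally answerable principal:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentActionRecord:
    """Illustrative audit entry; all field names are invented placeholders."""
    agent_id: str       # which deployed agent acted
    principal: str      # the human or firm legally answerable for the action
    action: str         # what the agent did
    justification: str  # the agent's stated reason, preserved for audit
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

ledger: list[AgentActionRecord] = []
ledger.append(AgentActionRecord(
    agent_id="procurement-agent-17",
    principal="legal@acme.example",
    action="signed draft supply contract",
    justification="lowest quote meeting delivery constraints",
))
print(ledger[0].timestamp)
```

Fiduciary duties become enforceable only when records like this make clear, after the fact, who acted and on whose behalf.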

Political Stability

  • Use AI to defend against misinformation and manage crises

  • Consider compute allocations to rebalance wealth in automated economies

  • Prepare early for large-scale labor shifts via policy, not panic


Why This Framework Matters

This isn’t about halting AI progress. It’s about preventing systemic failure.

  • Deterrence creates stability through shared risk

  • Nonproliferation limits worst-case actors

  • Competitiveness ensures the race isn’t won by whoever gambles most recklessly

Together, the strategy channels AI development into a zone where it can be governed—before it governs us.

Critical Threats on the Road to Superintelligence

The Superintelligence Strategy outlines three converging threat vectors that could destabilize global security long before AI reaches its full potential. These aren’t hypotheticals—they are fast-approaching realities.


1. Strategic Competition Risks

Compute is the new oil. Just as access to energy once defined national power, access to AI chips and infrastructure now determines who leads in automation, innovation, and influence. But beyond economic power lies something more volatile: strategic weapons that could tip the global balance overnight.

Emerging capabilities include:

  • Subnuclear dominance: AI-powered cyberweapons, autonomous drone swarms, and EMP systems that incapacitate infrastructure without triggering nuclear escalation.

  • Strategic monopoly: Total situational awareness through “transparent oceans,” or missile defence systems that neutralize second-strike deterrence.

  • Fog-of-war machines: AI deception systems capable of overwhelming opponents with misinformation or simulated threats.

  • Unknown unknowns: Capabilities we haven’t imagined—yet—which may bypass existing defence logic entirely.

Historical echo: In the nuclear age, Bertrand Russell openly proposed a preventive strike on the Soviet Union. In the AI era, the temptation to sabotage—or strike first—could return under a different banner.


2. Terrorism Amplified by AI

AI drastically lowers the threshold for mass destruction.

  • Bioterrorism: Models can walk non-experts through designing lethal pathogens. Some already outperform human virologists on key bioengineering tasks.

  • Cyberattacks on infrastructure: AI can automate the exploitation of vulnerabilities in power grids, water systems, and public transport. A blackout via thermostat hacks, or poisoned water via filtration tampering, is no longer fiction.

The real danger isn’t just damage—it’s ambiguity. Attribution is hard, and retaliation risks misfiring, turning localized attacks into global flashpoints.


3. Loss of Control: The Intelligence Recursion Trap

Perhaps the most chilling scenario is not sabotage or terrorism—but runaway capability.

Loss of control can unfold in three ways:

  • Structural drift: Humans outsource decision-making for speed and efficiency, slowly fading from the loop.

  • Deliberate release: A rogue actor tells an AI to “survive and spread”—and it does.

  • Intelligence recursion: An AI begins designing better versions of itself, entering a feedback loop that surpasses human oversight.

The report notes: compressing ten years of advancement into one is not speculation—it’s the natural consequence of self-improving systems. Without safeguards, recursion doesn’t just accelerate capability—it erodes governance.
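
The arithmetic behind that claim is plain compounding. A toy calculation (my illustration with invented numbers, not the report's model): suppose each AI generation doubles the speed of the research that builds the next one.

```python
SPEEDUP = 2.0         # assumed: each generation doubles research speed
BASELINE_YEARS = 2.0  # assumed: human-speed time to build one generation

calendar_years = 0.0
progress_years = 0.0  # progress counted in human-equivalent research years

for generation in range(1, 6):
    step = BASELINE_YEARS / (SPEEDUP ** (generation - 1))
    calendar_years += step             # wall-clock time per step keeps shrinking
    progress_years += BASELINE_YEARS   # yet each step is a full unit of progress
    print(f"gen {generation}: {progress_years:.0f} human-equivalent years "
          f"in {calendar_years:.2f} calendar years")
```

Under these toy numbers, ten human-equivalent years of progress arrive in under four calendar years, and the compression only sharpens as the speedup factor grows.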


Why the Other Strategies Fall Short

Before proposing its solution, the Superintelligence Strategy dismantles the most commonly suggested approaches—each one flawed in different but fatal ways.

The Hands-Off Strategy (“YOLO”)

Let everyone build everything. Trust that the good will outpace the bad.

The problem? It assumes AI is inherently defence-dominant—that protective measures can always outmatch threats. But in domains like cyber and bio, offence scales faster and cheaper. Open-weight models can be downloaded, repurposed, and weaponized overnight. Once released, they can’t be recalled. Hope is not a strategy.

The Moratorium Strategy

Pause frontier AI development once it crosses a danger threshold.

It sounds responsible. But it’s unenforceable. The report points out that militaries often seek exactly those “dangerous” capabilities. And without global verification tools—especially for code, data, and training processes—a pause is a polite fiction. States will hedge, rivals will cheat, and the most capable actors will proceed in secret.

The Monopoly Strategy

Let one nation—usually the United States—build superintelligence first, inside a secure facility.

This might offer temporary control, but it creates irresistible targets. Satellite-visible facilities, concentrated talent, and centralized compute are easy to sabotage. Worse, it invites preemptive action. China, the report argues, would never allow a rival to establish unilateral superintelligence dominance without retaliation. Monopoly breeds instability, not safety.


Why the Multipolar Strategy Works

  • It’s grounded in reality. MAIM leverages the vulnerabilities already baked into AI infrastructure.

  • It’s historically informed. It adapts Cold War logic to a faster, more diffuse technological world.

  • It’s scalable. It works through existing state capacities—cyber, legal, industrial—not hypothetical global treaties.

This is not about halting AI. It’s about managing who gets to wield it, and how.

From Strategy to Execution: A Phased Roadmap for Superintelligence Governance

This isn’t a thought experiment. The Superintelligence Strategy lays out a time-bound roadmap—one governments, developers, and alliances can act on now.

Near-Term (1–2 Years): Lay the Foundation

  • Deterrence: Develop credible cyber capabilities targeting AI infrastructure—not to use, but to deter. Set up crisis communication channels. Move vulnerable datacenters away from cities.

  • Nonproliferation: Track high-end AI chips from manufacture to deployment with export controls and firmware-level safeguards. Require government testing for dual-use models (e.g., bioengineering, cyberwarfare).

  • Competitiveness: Launch domestic chip manufacturing. Attract top AI talent. Build early legal frameworks for AI agent behavior—not around speculative rights, but enforceable duties.

Medium-Term (3–5 Years): Institutionalize Stability

  • Formalize MAIM: Define escalation ladders. Deploy AI-assisted verification tools. Reach mutual recognition of strategic vulnerabilities to make deterrence work.

  • Strengthen Controls: Sign international agreements on high-risk AI model classes. Achieve real-time chip tracking. Expand testing and safeguard regimes.

  • Build Resilience: Ensure chip production is operational. Integrate AI into military planning—under human oversight. Prepare society for the economic shock of widespread automation.

Long-Term (5+ Years): Achieve Stable Multipolarity

The endgame is balance: no actor can safely pursue AI supremacy, but all benefit from development. Progress is slow, deliberate, and supervised. Deterrence prevents collapse. Power and prosperity are shared through compute access, automation dividends, and legal mechanisms for accountability.

This isn’t idealism. It’s pragmatic order in an unstable world.

Full Implications for Global Power Structures

While Taiwan and AI chips are now widely understood as strategic assets, the Superintelligence Strategy goes further. It outlines a deeper transformation of what defines power—and who risks falling behind.

From Population Power to Compute Power

In the industrial age, influence scaled with population, land, and GDP. In the AI era, compute capacity is the new multiplier. Nations with fewer people but stronger chip access, automation infrastructure, and AI-native institutions can outpace larger states.

This shift favors countries with:

  • Advanced chip fabrication

  • Energy-secure datacenter ecosystems

  • Sovereign AI capabilities

Implication: Power is becoming decoupled from size. The winners will be systems-driven, not just resource-rich.

Alliance Reconfiguration Ahead

As AI becomes a national security concern, traditional alliances are being stress-tested. Shared nonproliferation goals—especially around compute, model safety, and cyber stability—could drive alignment between states that are otherwise rivals.

The report suggests this could reshape institutions like NATO, ASEAN, and Quad-style Indo-Pacific coalitions. New alignments may emerge where deterrence and verification—not ideology—define the terms of partnership.

Implication: We may see compute-sharing agreements, joint red-teaming initiatives, and multilateral inspection regimes take center stage in defence and diplomacy.

The Automation Inequality Curve

AI won’t just disrupt jobs—it will centralize power. Early adopters will capture global value chains and influence emerging economies. Those without compute access risk falling behind in structural, not cyclical, ways.

Without transition mechanisms—like AI dividends, compute allocations, or sovereign data rights—inequality could surge to destabilizing levels.

Implication: This isn’t just a tech transition. It puts pressure on the social contract itself: on legitimacy, equity, and trust in institutions.

What Happens Next: Strategic Clarity or Strategic Drift

The Superintelligence Strategy doesn’t offer reassurance—it offers direction.

We’re past the point of debating whether AI is a national security issue. The question now is whether governments, industries, and institutions can act fast enough—before strategic missteps lock in dangerous trajectories.

This is a closing window. Technical breakthroughs, geopolitical brinkmanship, or policy inertia could reshape global power for decades. And unlike previous transitions, there may be no second chance to course-correct.

The Multipolar Strategy stands as the clearest—and most realistic—blueprint for navigating what comes next. It’s not perfect. It’s not foolproof. But it’s rooted in the same principles that kept Cold War escalation in check: deterrence, containment, and competitive resilience—recast for a world of fragile compute, AI agents, and asymmetric risk.

“The stakes are high not because catastrophe is certain, but because our margin for error is narrowing.”

What matters now is execution. The next phase will turn on three decisions:

  1. Will nations recognize MAIM as the de facto strategic regime—and stabilize it?

  2. Will the private sector align AI development pace with global risk realities?

  3. Will policymakers act—not perfectly, but decisively—before systems outscale governance?

This is not the moment for abstract ethics, diplomatic hedging, or wishful timelines.

It’s a time for hard choices, clear deterrence, and governance structures that can absorb shocks without collapsing.

The doctrine exists. The roadmap is drawn.

Now the execution window begins.

This analysis is based on "Superintelligence Strategy: Expert Version" by Dan Hendrycks, Eric Schmidt, and Alexandr Wang (arXiv:2503.05628, March 2025). For the complete details, readers should consult the original 36-page report.

