This post grew out of a foresight coffee chat I joined yesterday. I’ll share more once the group’s paper is published, but one powerful idea surfaced in the informal discussion: what if fear isn’t a barrier to cooperation, but the starting point for building it? If we’re serious about avoiding worst-case outcomes in Artificial Intelligence (AI), we may need to start treating fear as a strategic input. What if fear isn’t something to eliminate, but something to design around?
Fear in the AI space today is defensive and isolating. It drives secrecy, accelerates competition, and pushes actors, whether governments, companies, or individuals, toward unilateral action. It’s fear of falling behind, of being outbuilt, of losing control.
We often assume trust must come first: that shared values, common goals, and aligned interests are the necessary precursors to collaboration. But what if, in the case of advanced AI, it’s the fear of mutual catastrophe that creates the first real incentive to act together?
That fear—of systems spiraling beyond control, of unintended escalation, or of irreversible consequences—might not just be emotional residue. It might function as infrastructure.
Fear as Infrastructure
When we talk about infrastructure, we usually think of the visible layers—cables, servers, cloud networks. But behind those physical elements are less visible systems: protocols that route traffic, thresholds that trigger load balancing, feedback loops that detect failure and initiate recovery. These mechanisms aren’t designed for efficiency alone—they’re built for resilience.
In strategic systems—especially those involving AI development and deployment—fear can function like that hidden infrastructure. It’s a low-frequency signal in the system, often ignored, but essential when the system is under stress. It shows up not as panic, but as the collective realization of shared risk, prompting course correction, slowdown, or coordination.
If we design systems that treat fear as an input—something to detect, interpret, and act on—we stop thinking of it as emotional noise. We start treating it like a governance pressure valve or an early-warning system. Not to stop everything, but to prevent everything from breaking.
The goal isn’t to eliminate fear. The goal is to route it through the system in ways that support durability, adaptability, and survival under real-world conditions.
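To make the analogy concrete, here is a minimal, purely illustrative sketch of what “routing fear through the system” could look like in software terms: a loop that reads normalized risk signals and returns a proportionate action rather than a full stop. The signal names, thresholds, and actions are hypothetical assumptions, not a description of any existing system.

```python
# Illustrative sketch only: hypothetical signal names and thresholds.
from dataclasses import dataclass

@dataclass
class RiskSignal:
    source: str   # e.g. "eval-regression", "compute-spike" (invented labels)
    level: float  # normalized: 0.0 = calm, 1.0 = alarm

def route_fear(signals: list[RiskSignal],
               slow_threshold: float = 0.6,
               pause_threshold: float = 0.85) -> str:
    """Interpret aggregated risk and return a proportionate action."""
    peak = max((s.level for s in signals), default=0.0)
    if peak >= pause_threshold:
        return "pause-and-coordinate"  # escalate to shared review
    if peak >= slow_threshold:
        return "slow-down"             # course-correct, keep operating
    return "proceed"

if __name__ == "__main__":
    readings = [RiskSignal("eval-regression", 0.7), RiskSignal("compute-spike", 0.4)]
    print(route_fear(readings))  # -> "slow-down"
```

The point is not the code itself but the shape of the mechanism: graded responses to a signal everyone can see, rather than panic or denial.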
Precedents: When Fear Worked
We’ve seen this logic before.
Nuclear deterrence: During the Cold War, the U.S. and the Soviet Union were locked in ideological opposition. Yet they created backchannel communications, arms treaties, and mutually agreed-upon red lines. The logic wasn't moral; it was existential. The fear of total annihilation brought stability where ideology could not.
Climate emergency protocols: Although imperfect, the Paris Agreement and subsequent climate frameworks emerged not from shared values but from an understanding that climate disruption would spare no nation. The tipping point wasn’t optimism—it was risk accumulation.
Pandemic data sharing: During COVID-19, early reluctance gave way to cross-border sharing of genomic data, emergency authorizations, and global health alerts—not because systems were fully aligned, but because the cost of withholding was too high.
These examples offer a blueprint. In each, shared fear became the operational foundation for new agreements.
Artificial Intelligence: Different, but Not Entirely
The stakes with advanced AI are unique in scale and scope. A misaligned general-purpose model, autonomous military decision-making, or a failure in alignment systems could create outcomes that are not just dangerous but possibly irreversible. In these scenarios, even the most competitive actors are bound by one reality: they cannot afford a mistake.
Take automated escalation in military contexts. Imagine a future conflict in which AI-driven decision systems react faster than human oversight can intervene. A misread signal or a spoofed sensor input could lead to kinetic action—missile launches or drone strikes—before a diplomatic channel is even aware something is happening.
Or consider commercial competition. AI developers in different countries might race to release increasingly powerful models, cutting corners on safety testing to gain market share. But if a poorly aligned system is released too soon, the backlash would not stay local—it would destabilize the entire ecosystem, possibly triggering international bans, sanctions, or worse.
In these cases, trust won’t emerge from goodwill. It will emerge from necessity.
Fear as Coordination Strategy
This gives us an overlooked strategic insight: existential fear isn’t just an emotion to manage. It’s a shared boundary condition. It creates pressure that no actor—state, corporate, or independent—can ignore.
Rather than waiting for ideal alignment, AI governance could begin by formalizing the very thing everyone already feels: discomfort, anxiety, dread.
This could look like:
Pre-agreed pause protocols: mechanisms that allow for rapid global response if warning signs of model instability or systemic failure emerge.
AI hotlines: direct lines between major AI labs and national security offices, modeled after Cold War crisis communications, to de-escalate in case of uncertainty or misinterpretation.
Shared observatories: trusted third-party entities that monitor compute usage, model behavior, and cross-border risk indicators.
Minimum safety standards: universal red lines (e.g., don’t connect autonomous weapons to live decision-making systems without human oversight) that function like the Geneva Conventions of AI.
These mechanisms wouldn’t require deep trust. They would only require a shared assumption: that the alternative is unacceptable.
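To illustrate just how little trust such a mechanism demands, here is a hypothetical sketch of the pause-protocol idea above reduced to a simple quorum rule: a coordinated pause triggers when enough independent monitors raise a flag, so no single party’s judgment has to be taken on faith. The signatory names and quorum size are invented for illustration, not drawn from any real agreement.

```python
# Hypothetical sketch of a pre-agreed pause trigger; names and quorum are illustrative.
def pause_triggered(flags: dict[str, bool], quorum: int = 2) -> bool:
    """True if at least `quorum` independent monitors raise a warning flag."""
    return sum(1 for raised in flags.values() if raised) >= quorum

if __name__ == "__main__":
    reports = {"lab_a": True, "lab_b": False, "observatory": True}
    if pause_triggered(reports):
        print("Invoke pre-agreed pause: notify hotlines, open shared review")
```

A quorum rule is only one possible shape; the essential feature is that the trigger is pre-agreed, mechanical, and visible to every signatory.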
Moving Forward
This approach doesn’t require consensus on ethics, values, or strategy. It only requires recognition of a shared edge.
Foresight professionals are trained to search for weak signals, emerging actors, or tipping points. But one of the most reliable early signals might already be present: a distributed, low-frequency current of fear.
If treated seriously, that current could become the scaffolding of an entirely new era of coordination—one not born from harmony, but from a shared sense of what must never happen.
In this light, existential fear isn’t an obstacle to overcome. It’s infrastructure waiting to be formalized.