Pre-trench Foresight
A systems view for the AI era
World War I wasn’t caused by a bad decision in 1914.
By the time the assassination happened, the war was already decided, not in outcome, but in shape.
What followed wasn’t a failure of leadership or imagination.
It was the logical outcome of systems that had already locked decision-makers into a narrow corridor of action.
What actually happened in WWI (systems view)
By 1914, agency had already shifted away from people and into infrastructure.
The lock-in mechanisms
Railways
Mobilization plans were synchronized to the minute. Once started, they could not be stopped without cascading chaos. Political leaders lost agency to logistics.
Alliance commitments
Mutual defence treaties functioned like if–then logic gates. One trigger propagated automatically across the system.
Industrialized weapons
Machine guns, artillery, and mass conscription favored defence over maneuver, creating stalemate rather than decisive movement.
Organizational doctrine
Armies trained for offensive glory, not industrial-scale attrition. Doctrine lagged material reality.
By the time humans realized the system dynamics were wrong, the system could no longer be reversed.
Four years of trench warfare wasn’t a choice.
It was the equilibrium state of the system they had already built.
This is the reusable logic:
Early technical and organizational decisions
Become embedded in infrastructure
Infrastructure constrains future choices
Humans interpret constraints as “reality”
Agency collapses downstream
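The alliance mechanism above describes a cascade: commitments act as if–then rules, and a single trigger propagates until the whole system is mobilized. A toy sketch (the rule edges here are illustrative, not a precise map of the 1914 treaty network) shows how no single actor chooses the final state:

```python
# Hypothetical if-then commitments: "if X mobilizes, then Y mobilizes."
# Edges are illustrative, not an exact model of the 1914 alliances.
ALLIANCES = {
    "Austria-Hungary": ["Russia"],
    "Russia": ["Germany"],
    "Germany": ["France"],
    "France": ["Britain"],
}

def cascade(trigger, rules):
    """Propagate one trigger through if-then commitments until stable."""
    mobilized, frontier = set(), [trigger]
    while frontier:
        state = frontier.pop()
        if state in mobilized:
            continue  # already committed; nothing new fires
        mobilized.add(state)
        frontier.extend(rules.get(state, []))  # fire downstream rules
    return mobilized

# One local trigger ends in system-wide mobilization.
print(sorted(cascade("Austria-Hungary", ALLIANCES)))
```

The point of the sketch is that the outcome is a property of the rule network, not of any decision made after the trigger fires.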
Does this pattern sound uncomfortably familiar when you look at how AI systems are being built today?
What’s happening now with AI (systems view)
By the mid-2020s, agency is already shifting away from people and into systems.
The lock-in mechanisms
Workflow embedding
AI is being integrated directly into everyday tools and processes. Once embedded, removing it becomes more disruptive than tolerating known flaws. Human work reorganizes itself around system behavior.
Data architectures
What gets captured, labeled, or logged defines what can be seen and acted on. What isn’t measured stops counting. Decisions follow the shape of the data, not the situation.
Speed incentives
Systems reward faster responses and penalize hesitation. Escalation thresholds lower. Deliberation becomes a liability rather than a virtue.
Delegated judgment
Systems generate recommendations; humans approve them. Over time, approval replaces reasoning. Responsibility diffuses while outcomes remain real.
By the time people notice that judgment, authority, and accountability have shifted, the dynamics are already difficult to reverse.
The changes in how decisions are made aren’t a policy choice.
They’re the equilibrium state of the systems we’re already building.
The uncomfortable truth
The most important AI decisions are no longer being made where we think “decisions” happen.
They’re being made upstream:
in system architecture
in defaults
in procurement language
in what is automated versus what remains human
in what becomes too expensive to undo later
Why this matters (foresight before the lock-in)
This is pre-trench foresight.
Not predicting AGI.
Not ethics checklists.
But asking:
Where are we locking ourselves into irreversible dynamics?
Where will future leaders inherit systems they cannot escape?
What looks like efficiency today but becomes strategic immobility tomorrow?
That’s not abstract foresight.
In 2026, I’m looking closely at what real-world AI use is already normalizing: the new behaviors emerging around it, what we’re missing, and the system choices taking shape. People may adapt their habits long before organizations adapt their rules.

