When Decisions Don’t Exist, Tools Don’t Matter
Why tool sovereignty doesn’t fix decision failure
When France says it’s “switching out Microsoft tools,” the problem isn’t the tools.
The French government just announced it will phase out widely used U.S. collaboration tools like Microsoft Teams and Zoom in favour of its own sovereign platform called Visio, rolling out across ministries by 2027 as part of a push for digital sovereignty and security.
That announcement is the signal, but it’s not the real issue.
The real problem is that decisions are made and then disappear into tools.
They’re smeared across:
emails
slides
documents
meetings
Switching platforms doesn’t fix that.
It just moves the mess.
The real fix is boring, and powerful:
Create a decision object.
A place where every real decision is explicitly captured.
A structured record that holds the decision question, ownership, time horizon, constraints, options, assumptions, and conditions.
Emails, slides, and meetings feed that decision;
they don’t become it.
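A minimal sketch of what such a record could look like, assuming Python; the fields mirror the list above, and every name and value here is illustrative, not a reference to any real system:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionObject:
    """One explicit, examinable record per real decision (illustrative sketch)."""
    question: str                  # the decision question being answered
    owner: str                     # who is accountable for the call
    time_horizon: str              # when the decision takes effect and expires
    constraints: list[str] = field(default_factory=list)  # hard limits (budget, law, politics)
    options: list[str] = field(default_factory=list)      # alternatives actually considered
    assumptions: list[str] = field(default_factory=list)  # what must be true for this to work
    conditions: list[str] = field(default_factory=list)   # triggers that would reopen the decision

# Emails, slides, and meetings feed the record; they don't replace it.
d = DecisionObject(
    question="Which collaboration platform do ministries adopt by 2027?",
    owner="Hypothetical sponsoring ministry",
    time_horizon="2025-2027",
    constraints=["data must stay under national jurisdiction"],
    options=["Visio", "status quo"],
    assumptions=["Visio scales to all ministries"],
    conditions=["revisit if rollout slips past 2026"],
)
```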
Only once decisions exist as objects can AI do anything useful with them.
Until then, AI will keep sounding smart
while staying operationally irrelevant.
This isn’t just a France problem.
But What If the Decisions Themselves Are the Problem?
It’s too easy to say,
“Capture decisions better,”
without asking a harder question:
What if the decisions are wrong?
What if they’re:
politically constrained
risk-averse by design
consensus-driven to the point of paralysis
structurally misaligned with reality
Wouldn’t capturing them more cleanly just harden bad judgment?
That’s a serious challenge.
So let’s red-team it: deliberately test it against its strongest objections.
Red team #1: “If the decisions are wrong, capturing them just locks in failure”
Challenge:
Formalizing decisions risks:
legitimizing poor assumptions
freezing consensus too early
giving a false sense of rigor
making bad calls harder to reverse
Response (important):
This only holds if the decision object is conclusion-first.
A better model is constraint-first and assumption-explicit.
A real decision object does not say:
“Here is the decision.”
It says:
here are the assumptions
here is what we don’t know
here is what would change our mind
here are the failure modes
That does the opposite of locking in error.
It exposes fragility.
Bad decisions survive because they’re informal, implicit, and socially protected.
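The constraint-first, assumption-explicit shape described above can be sketched in code as well. Assuming the same illustrative Python style, each assumption carries the observable condition that would falsify it, so fragility is surfaced instead of hidden (all claims and numbers below are invented for the example):

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    claim: str              # what we believe to be true
    confidence: float       # 0.0-1.0, stated rather than implied
    would_change_mind: str  # the observable condition that falsifies the claim

def fragile(assumptions: list[Assumption], threshold: float = 0.5) -> list[Assumption]:
    """Surface the low-confidence assumptions the decision quietly rests on."""
    return [a for a in assumptions if a.confidence < threshold]

assumptions = [
    Assumption("Platform handles 50k concurrent users", 0.4, "load test fails"),
    Assumption("Ministries migrate without productivity loss", 0.3, "pilot ministry reverts"),
    Assumption("Budget covers the two-year rollout", 0.8, "mid-year budget review cuts funds"),
]

for a in fragile(assumptions):
    print(f"FRAGILE: {a.claim} -> falsified if: {a.would_change_mind}")
```

A conclusion-first record would store none of this; a constraint-first one makes the weak points queryable.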
Red team #2: “Power, not structure, determines decisions anyway”
Challenge:
In governments:
decisions are political
power dynamics dominate
formal logic won’t override ministers, allies, or risk aversion
So why bother?
Response:
Correct: structure does not override power.
But it does two critical things:
First, it makes power visible:
who can block
where authority actually sits
which constraints are real vs performative
Second, it changes post-decision accountability:
assumptions are on record
trade-offs are explicit
“we didn’t know” becomes falsifiable
Right now, power wins and hides.
Decision architecture doesn’t depoliticize decisions;
it prevents plausible deniability.
That’s why it’s resisted.
Red team #3: “Organizations already ‘decide’; this just adds bureaucracy”
Challenge:
This sounds like:
another form
another template
another process layer
In risk-averse systems, that kills momentum.
Response:
This only happens if:
the decision object is heavy
it tries to replace existing workflows
This approach does neither.
A decision object is lighter than the current mess:
fewer slides
fewer email chains
fewer re-litigation cycles
It removes bureaucracy by collapsing:
many artifacts → one decision spine
That bureaucracy already exists today; it’s hidden, not absent.
Red team #4: “What if the real problem is incentives, not decisions?”
Challenge:
People already know the right move but:
incentives punish action
careers reward caution
ambiguity protects everyone
So clearer decisions won’t change behavior.
Response:
This is the strongest critique, and it’s partially true.
But here’s the key distinction:
Incentives block action.
Ambiguity protects incentives.
Decision architecture doesn’t fix incentives.
It removes ambiguity as cover.
That’s why:
decisions stay fuzzy
assumptions stay unstated
responsibility stays diffuse
This system doesn’t force action;
it forces honesty about why action isn’t taken.
That’s a material shift.
Red team #5: “Maybe the real issue isn’t decisions, it’s learning”
Challenge:
Governments don’t learn fast enough.
They repeat mistakes.
Decision capture doesn’t equal learning.
Response:
Correct, unless decisions are:
versioned
revisitable
compared against outcomes (key)
Right now:
decisions dissolve into documents
outcomes aren’t linked back
lessons become narratives
A real decision object allows:
post-hoc evaluation
assumption failure tracking
institutional memory that isn’t folklore
Without decision capture, learning is impossible.
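Versioning and outcome comparison can be as light as the following sketch (again illustrative Python, all names hypothetical): each decision version records its assumptions, and later observations are linked back to the assumptions they test, turning “lessons” into data rather than folklore:

```python
from dataclasses import dataclass, field

@dataclass
class DecisionVersion:
    version: int
    # claim -> did it hold? (None = outcome not yet observed)
    assumptions: dict = field(default_factory=dict)

    def record_outcome(self, claim: str, held: bool) -> None:
        """Link an observed outcome back to the assumption it tests."""
        self.assumptions[claim] = held

    def failed_assumptions(self) -> list:
        """The bets that demonstrably didn't hold -- the raw material of learning."""
        return [c for c, held in self.assumptions.items() if held is False]

v1 = DecisionVersion(1, {
    "rollout completes by 2027": None,
    "no productivity loss during migration": None,
})
v1.record_outcome("no productivity loss during migration", False)  # pilot showed a dip
v1.record_outcome("rollout completes by 2027", True)

print(v1.failed_assumptions())
```

Nothing here forces better judgment; it only makes assumption failure a queryable fact instead of a contested narrative.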
The red-team conclusion (this is the point)
The problem is not “decisions” in isolation.
The problem is this:
Decisions are made under constraint,
shaped by power,
based on assumptions,
and then lost to artifacts that protect everyone from accountability.
Decision architecture doesn’t guarantee good decisions.
It does something more fundamental:
It makes bad decisions impossible to hide.
The issue isn’t that organizations make the wrong decisions.
It’s that they make decisions in forms that can’t be examined, challenged, learned from, or reasoned over by AI.
The solution isn’t different tools; it’s a decision layer that captures decisions as explicit records instead of letting them dissolve into meetings and documents.


