Your AI Strategy Has No Contract With Reality
- Jeroen Janssen

- Jan 1
This is part 1 of a series on AI strategy governance.
There's a fascinating paradox emerging in boardrooms around the world, and almost nobody is talking about it.
99% of senior executives call AI a top priority. Budgets are rising. Chief AI Officers are being appointed. The future is clear, they say, and those who don't get on board are already too late.
Meanwhile, companies are laying off people based on what AI could do — not what it does. A survey of over a thousand executives reveals that just 2% have actually restructured based on proven AI implementation.
The rest are acting on anticipation. On belief. On what a competitor announced in a press release. Klarna cut its workforce in half, only to start rehiring because quality collapsed. Duolingo fired translators in favor of AI and drew a public backlash.
This is not a technology problem. This is a decision-making problem.

The Core Mistake
The core mistake in most AI initiatives is simple to articulate: companies buy capability — models, infrastructure, licenses — but sell value internally without designing the measurement chain. Then a predictable pattern emerges. The promise rises faster than the evidence. Pilots proliferate: lots of activity, little integration. Management sees momentum, operations feel friction. And bad signals never reach the top, because nobody wants to be the one carrying the message "it's not working."
The market, by the way, is not impressed. When Microsoft reported quarterly results in late January that came in just below expectations — Azure grew 39% instead of the 39.4% consensus — $357 billion in market cap evaporated in a single day. UBS put it in writing: "Microsoft needs to prove that these are good investments." The market doesn't punish AI. The market punishes capex without proof.
Three Curves, Not One
What's missing from virtually every AI business case is the understanding that AI value is a socio-technical system. You don't manage one curve; you manage three, in parallel. A sketch of what that looks like on a single scorecard follows the list below.
The value curve: cost, cycle time, quality, revenue. This is what most dashboards track.
The adoption curve: actual usage, skill levels, process fit. This is what most dashboards don't track.
The trust curve: psychological safety, error-reporting behavior, willingness to escalate, perceived fairness. This is what almost nobody tracks.
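To make that concrete, here is a minimal sketch of one initiative's scorecard when all three curves sit on the same page. The field names and thresholds are illustrative assumptions, not a standard; the point is that value, adoption, and trust get read together, not in separate decks.

```python
# Illustrative sketch only: one record per AI initiative, all three curves
# side by side. Field names and thresholds are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class InitiativeScorecard:
    name: str
    # Value curve: what most dashboards already track
    cost_delta_pct: float            # change vs. pre-AI baseline
    cycle_time_delta_pct: float
    quality_delta_pct: float
    # Adoption curve: what most dashboards don't track
    active_user_share: float         # share of intended users actually using it (0-1)
    process_fit_score: float         # e.g. 0-1 from workflow reviews
    # Trust curve: what almost nobody tracks
    error_reports_per_week: int      # silence here is a warning sign, not a win
    escalation_willingness: float    # e.g. 0-1 from pulse surveys

    def blind_spots(self) -> list[str]:
        """Flag curves that only look green because nobody is measuring them."""
        flags = []
        if self.active_user_share < 0.3:
            flags.append("adoption: the capability is bought, not used")
        if self.error_reports_per_week == 0:
            flags.append("trust: zero error reports usually means silence, not perfection")
        return flags
```

Nothing about this requires new tooling. It requires deciding that the second and third curves are board-level numbers.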
And here's the crux. Researchers at Columbia University had two-person teams play a video game. Midway through, some teams had a human member replaced by an AI agent. An agent that individually outperformed the human. The result? Teams with AI performed worse. Not because of the technology. But because the remaining human lost motivation. The trust curve broke. And 84% of participants simply wanted their human teammate back.
Harvard professor Amy Edmondson confirms the underlying mechanism: without psychological safety, people stop reporting, correcting, and learning. Value then becomes structurally unmeasurable not because it isn't there, but because the system that's supposed to realize that value has quietly seized up.
93% of executives in a major annual survey acknowledge that the human dimension — culture, change, adoption — is their biggest challenge with AI. Not technology. But how many of that 93% spend the lion's share of their budget and attention on precisely that human dimension? Almost none. The budgets go to licenses and data centers. Not to people's ability to say, honestly: this isn't working.
The J-Curve Nobody Explains
There's another complication that boards need to understand: the productivity dip. Every serious AI implementation begins with a dip. Processes need to be redesigned. People need to learn. Friction comes before gains.
This is the J-curve, and it's well documented empirically. Down first, then up.
The problem is that virtually no organization explains this dip to its own board. And so the dip is read as failure. The pilot gets killed at precisely the moment when pushing through would have been the right call. Or worse, the dip gets ignored, the numbers get polished, and the board is presented with a success story that doesn't match operational reality.
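One way to give the board a shared reading rule is to write the distinction down before the rollout, not during the dip. The sketch below is an illustration under assumed thresholds: a dip in value reads as a J-curve as long as adoption and trust are still moving in the right direction and a pre-agreed window has not expired; once either condition breaks, it reads as failure and triggers the kill discussion.

```python
# Illustrative rule of thumb only; the window and thresholds are assumptions
# to be agreed per initiative before rollout, not afterwards.

def reads_as_jcurve_dip(value_delta_pct: float,    # value vs. pre-AI baseline, negative = dip
                        adoption_trend: float,      # week-over-week change in actual usage
                        trust_trend: float,         # week-over-week change in trust signals
                        weeks_since_rollout: int,
                        agreed_dip_window_weeks: int = 12) -> bool:
    """Value below baseline is expected early on. It only reads as failure when
    adoption or trust is also falling, or the agreed window has run out."""
    within_window = weeks_since_rollout <= agreed_dip_window_weeks
    curves_recovering = adoption_trend > 0 and trust_trend >= 0
    return value_delta_pct < 0 and within_window and curves_recovering
```

The exact numbers matter less than the fact that the rule exists before the dip does.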
What Actually Works
The answer already exists. Harvard Business Review makes the case for a portfolio approach to AI investments: not everything at once, but structured, with clear phases and kill criteria. That's a good start. But portfolio management alone is a dashboard. It only becomes operational when you attach hard gates to it; a sketch of what those gates could look like follows the five questions below.
For each AI initiative, a board should be able to answer five questions:
Why does this exist? Strategic fit.
What do we actually know, and what are we assuming? Evidence status.
Is it in the value stream, or is it a demo? Integration level.
What is it doing to the team? Human impact: dynamics, roles, safety.
What are the risks and how are we managing them? Compliance, model risk, audit trail.
If you can't answer these questions, you don't have an AI strategy. You have a budget.
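As a sketch of the gate idea, and only that: the statuses, thresholds, and kill criteria below are assumptions, chosen to show how the five questions can force a decision rather than decorate a slide.

```python
# Illustrative gates for the five questions; statuses, thresholds, and kill
# criteria are assumptions to be set by the board, not copied from here.
from enum import Enum

class Evidence(Enum):
    ASSUMED = 0            # press-release logic, no measurement yet
    PILOTED = 1            # measured in a sandbox
    PROVEN_IN_STREAM = 2   # measured inside the value stream

def gate_decision(strategic_fit: bool,
                  evidence: Evidence,
                  in_value_stream: bool,
                  trust_score: float,    # e.g. 0-1 from team surveys
                  risks_owned: bool) -> str:
    if not strategic_fit or not risks_owned:
        return "kill: no fit, or nobody owns the risk"
    if evidence is Evidence.ASSUMED:
        return "hold: fund evidence, not scale"
    if not in_value_stream:
        return "hold: it's a demo until it's in the stream"
    if trust_score < 0.5:
        return "hold: fix the human system before scaling"
    return "scale"
```

If every initiative comes back "scale", the gates aren't gates.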
Speed Is Not a Style. It's a Governance Variable
The last misconception I want to address is speed. In the boardroom, speed is treated as a style choice: we move fast, we move slow, we're ambitious, we're cautious.
But speed is not a style. Speed is how much uncertainty you accept per dollar and per process. In non-critical processes, you can iterate faster and tolerate more uncertainty. In critical processes, you need to tie speed to demonstrability. That's not caution. That's governance.
The Question
The question is not whether you're doing AI.
That question was answered long ago.
The question is whether your AI strategy has a contract with reality.
Whether you measure what you claim.
Whether you know when to stop.
Whether your people dare to say it's not working.
And whether, when things don't go as planned, you can explain the difference between a J-curve dip and a fiasco.
If you can do that, you have an AI strategy.
If you can't, you have a bet.
And $357 billion says: the market is learning the difference.
Why Strategic Red Teaming
The pattern described above does not fail because organizations lack intelligence, talent, or intent. It fails because assumptions remain implicit, untested, and socially protected until reality enforces its own correction.
Strategic Red Teaming exists to surface those assumptions before they harden into sunk cost, reputational exposure, or irreversible commitments.
In practice, this means subjecting AI strategies, investments, and operating choices to structured adversarial scrutiny. Not to block progress, but to distinguish what is evidence-backed from what is belief-driven. To separate J-curve friction from genuine failure. And to make the implicit trade-offs between speed, risk, and trust explicit at decision level.
At Apparens, Strategic Red Teaming is applied as an operational discipline. Each initiative is examined across strategic intent, evidence status, integration into value streams, human impact, and risk posture. The objective is not to produce another report, but to create decision clarity. What to invest in. What to stop. What to adjust. And why.
The output is a decision document that allows boards and executives to act with eyes open. Not because assumptions disappear. Every strategy has them. But because they are named, challenged, and owned. That is the difference between managing AI as a belief system and managing it as a governed strategic capability.
The question is not whether your strategy contains assumptions.
Every strategy does.
The question is whether you know which ones.
Sources
Bean, R. & Davenport, T.H. (2026). Survey: How Executives Are Thinking About AI in 2026. Harvard Business Review.
Davenport, T.H. & Srinivasan, L. (2026). Companies Are Laying Off Workers Because of AI's Potential—Not Its Performance. Harvard Business Review.
Hoque, F., Nelson, E., Davenport, T. & Scade, P. (2026). Manage Your AI Investments Like a Portfolio. Harvard Business Review.
Martinez, J. (2024). When AI Teammates Come On Board, Performance Drops. Harvard Business Review.
Seth, J. & Edmondson, A.C. (2026). How to Foster Psychological Safety in the Age of AI. Harvard Business Review.
Novet, J. (2026). Microsoft lost $357 billion in market cap as stock plunged most since 2020. CNBC.

