Fixing a Failing Strategy with Apparens
Jeroen Janssen — Founder
Apparens exists because I kept seeing the same pattern: in government, in media, in consulting, in enterprise IT.
Organizations make high-stakes technology decisions based on assumptions they never test. Strategies are approved, budgets are committed, transformations are launched, and the implicit beliefs underneath them are never exposed to adversarial pressure until markets, regulators, or operational reality do it first.
I founded Apparens to build a discipline for doing that deliberately.
Where this comes from
My day job is at the Dutch Tax Authority, where I work as a strategic advisor on AI, governance, risk, and compliance. It is one of the largest and most complex public IT environments in the Netherlands. Mission-critical systems, high regulatory pressure, and decisions that directly affect millions of citizens.
Before that, I led the IT department for Toeslagen at the Tax Authority, managed IT at the Dutch Public Broadcasting Corporation (NPO), and worked as a principal consultant at Quint Wellington Redwood. My career spans more than 25 years across the public and private sectors, always at the intersection of technology, strategy, and organizational reality.
What Apparens is
Apparens is where I develop and test Strategic Red Teaming: a structured, adversarial methodology for stress-testing strategies and AI investments against the conditions they will actually face.
I build this with agentic AI. The systems I've developed generate thousands of scenarios and hundreds of testable hypotheses from a single strategic position. Every assumption is surfaced. Every dependency is tested. Every claim is held to evidence.
Apparens is not a traditional consultancy. It is an applied research practice: an ongoing experiment at the frontier of what AI can do for strategic decision-making. What I learn here, I bring back to my work at the Belastingdienst. And once a year, I bring it to the classroom as a guest lecturer in AI strategy and governance at Saxion University.
Apparens is the methodology and the discipline; Jeroen Janssen is the accountable principal. All engagements are conducted under strict confidentiality: no findings are shared, published, or referenced without explicit written consent. Apparens holds no vendor relationships, earns no implementation revenue, and has no financial stake in the outcome of any diagnostic.
A note on independence
Apparens is an applied research practice. It exists to develop and stress-test a methodology, not to build a consultancy. Every engagement generates insight that sharpens the next. Every diagnostic tests the system as much as it tests the client.
This work is entirely independent of my role at the Belastingdienst. Separate clients, separate data, separate infrastructure. All engagements are subject to confidentiality agreements.
Apparens charges professional fees, not despite being a research practice but because of it. Serious methodology requires serious conditions: organizations that invest in adversarial testing bring real problems, real access, and real urgency. That is what produces real insight; anything less produces theatre.

Apparens works with organizations in regulated industries, the public sector, and mid-market companies navigating AI-driven transformation. Sectors include government, financial services, critical infrastructure, and technology-dependent enterprises.
Diagnostic findings are delivered to the commissioning party under confidentiality. Apparens does not retain client-specific data after delivery. The purpose of the diagnostic is to inform board-level decisions, not to create regulatory exposure. Organizations that act on findings strengthen their position; the diagnostic provides the evidence base for that action.
What I believe
Strategy only works when it survives contact with reality.
Most strategies never get that test. The assumptions harden. The momentum builds. And by the time the gap between belief and evidence becomes visible, the cost of correction is enormous.
Strategic Red Teaming is the discipline of finding that gap early — while you can still do something about it.
Background
MBA — University of Greenwich, UK
Executive education — Nyenrode Business University
Author — Iedereen Obeya: Strategisch verbonden en voorspelbaar verbeteren (2019)
Guest lecturer, AI strategy & governance — Saxion University
Why Red Teaming
Red teaming originated in military and intelligence communities as a method for challenging plans from an adversary's perspective before those plans meet reality. The principle is simple and powerful: if you want to know where your strategy breaks, assign someone the explicit task of breaking it.
In recent years, red teaming has gained traction in cybersecurity, AI safety, and financial stress testing. But it has rarely been applied where it arguably matters most — at the level of strategic decision-making. The boardroom, where the biggest bets are placed and the biggest assumptions go unchallenged, is precisely where adversarial scrutiny is most needed and least practiced.
At Apparens, I developed a methodology that brings red teaming to that level. It combines adversarial scenario analysis with agentic AI — systems that autonomously generate thousands of strategic scenarios, produce hundreds of testable hypotheses, and evaluate every assumption against documented evidence. The result is held together by what I call the Governance Envelope: a structured boundary that shows where an organization's strategy is evidence-backed and where it is operating on belief.
This methodology is not something I bought or licensed. I built it iteratively: working with the technology, testing it against real strategic questions, and refining it based on what actually surfaces useful insight and what doesn't. It is an ongoing practice, not a finished product. And that is intentional: the systems it tests keep changing, so the methodology must keep evolving with them.