When AI Training Isn't Training: The Gap Between What Companies Buy and What Article 4 Requires

The EUR 2.4 billion European AI training market is selling certificates, not competence. We review 31 programmes and find that 87% allocate less than 10% of time to AI limitations and failure modes. Five markers distinguish genuine competence-building from checkbox compliance.

Tags: AI Training, Article 4, Compliance, Competence, Training Market
March 14, 2026 | 20 min read


TwinLadder Weekly

Issue #26 | March 2026


Editor's Note

I sat through a corporate AI training session last month. Four hours. A well-known European provider, a room of forty professionals, a polished slide deck. By hour two, I was watching participants check their phones under the table. By hour three, the trainer was demonstrating how to write a prompt for ChatGPT — the same demonstration I have now seen at seven different "AI literacy" events.

When it ended, every participant received a certificate. Article 4 compliance, checked. AI literacy, documented. The company's legal department could file the attendance records and move on.

But here is what troubled me. Not one participant left that room more competent than when they entered. They learned where to click. They did not learn how to think. And the gap between those two things — between tool proficiency and genuine competence — is the gap that Article 4 was written to close. The training market, with rare exceptions, is making it wider.


When AI Training Isn't Training

The Growing Gap Between What Companies Buy and What Article 4 Requires

Alex Blumentals, with technical analysis by Edgars Rozentals

The European corporate AI training market reached an estimated EUR 2.4 billion in 2025, according to IDC's European Digital Skills Tracker. That number will exceed EUR 4 billion in 2026, driven almost entirely by Article 4 compliance demand. [HIGH CONFIDENCE]

The question is not whether companies are spending. They are spending aggressively. The question is whether what they are buying produces the outcome the regulation intends.

The Checkbox Problem

Let me describe a pattern I have now observed across fourteen corporate training sessions in six European countries since September 2025. The typical programme runs between four and eight hours. It covers: what AI is (definitions, history), how LLMs work (simplified), prompt engineering basics (write clear instructions, provide context), and tool-specific training (here is how to use our chosen platform).

The participants leave with a certificate and, in better programmes, a prompt template library. The company files the certificate as Article 4 documentation. Everyone moves on.

What is missing? Everything that matters.

Edgars Rozentals, who designs technical AI training for Baltic and Nordic organisations, breaks it down: "Most corporate AI training teaches the equivalent of how to turn on a car and press the accelerator. It does not teach you to drive. You leave the session able to generate text, summarise documents, and write emails. You do not leave understanding why the LLM confidently cited a regulation that does not exist, or how to structure a verification workflow, or what happens when your prompt inadvertently exposes confidential client data to a third-party API."

What Companies Buy | What Article 4 Requires
Tool demonstrations (which buttons to click) | Understanding of AI system capabilities and limitations
Prompt engineering templates | Ability to critically assess AI outputs
4–8 hour certificate programmes | "Sufficient level of AI literacy" proportionate to role and risk
One-time training events | Ongoing competence appropriate to evolving technology
Generic content, same for all roles | Training "taking into account their technical knowledge, experience, education"

Read the right-hand column carefully. Article 4 does not require that staff can use AI tools. It requires that they understand them — their capabilities, their limitations, and the risks they present in the context of the deployer's specific use case. That is a fundamentally different educational objective.

The Market's Response: Volume Over Depth

The training market responded to Article 4 with predictable efficiency. In 2025, at least 340 new AI literacy certification programmes launched across Europe, according to the European Digital Skills Foundation's registry. [MODERATE CONFIDENCE]

Country | New AI Training Programmes (2025) | Average Duration | Avg. Price Per Participant
Germany | 87 | 6.2 hours | EUR 420
France | 64 | 5.8 hours | EUR 380
Netherlands | 43 | 7.1 hours | EUR 510
Nordics (combined) | 52 | 6.5 hours | EUR 460
UK | 58 | 5.4 hours | EUR 350
Baltics (combined) | 14 | 5.9 hours | EUR 280
Other EU | 22 | 5.7 hours | EUR 340

Fourteen programmes in the Baltics. For three countries with a combined working population of approximately 3.2 million, of whom perhaps 400,000 interact with AI systems regularly. The supply-demand mismatch is obvious, but the quality problem is worse than the quantity problem.

I reviewed the published curricula of thirty-one European AI literacy programmes — the ones that make their syllabi publicly available. Twenty-seven of thirty-one (87%) allocated less than 10% of total training time to "limitations, risks, and failure modes." Twenty-three of thirty-one included no hands-on verification exercise in which participants check AI output against a known-correct source. Nineteen of thirty-one used the same curriculum regardless of the participant's professional domain.

The industry is selling compliance certificates. It is not building competence.

What Real AI Competence Looks Like

Edgars Rozentals draws an important distinction: "There is a difference between AI awareness, AI proficiency, and AI competence. Awareness means you know AI exists and roughly what it does. Proficiency means you can operate AI tools effectively. Competence means you understand the technology well enough to know when it is wrong, why it is wrong, and what to do about it. Most training stops at proficiency. Article 4, if you read it seriously, requires competence."

He is right, and the distinction has concrete implications. Consider a lawyer using an AI research tool. Proficiency means she can formulate effective queries and extract relevant results. Competence means she understands that the LLM processes language statistically rather than semantically — that it generates probable next tokens, not verified legal conclusions — and adjusts her verification behaviour accordingly.

The difference is not academic. It determines whether she catches the hallucinated case citation that looks plausible but does not exist. Stanford's RegLab data shows this matters: 17–34% of legal AI outputs contain hallucinations. Proficient users miss them. Competent users catch them.
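The "probable next tokens" point can be made concrete with a toy sketch. The probability table below is invented purely for illustration; a real LLM scores a vocabulary of tens of thousands of tokens through a trained network, but the mechanism is the same: sample a statistically likely continuation, with no internal notion of whether that continuation is true.

```python
import random

# Hypothetical next-token distribution for the prompt "The regulation entered".
# The tokens and probabilities are invented for illustration only.
NEXT_TOKEN_PROBS = {
    "force": 0.55,    # plausible and correct continuation
    "effect": 0.30,   # plausible paraphrase
    "law": 0.10,
    "repeal": 0.05,   # plausible-sounding but wrong
}

def sample_next_token(probs, seed=None):
    """Sample one continuation from a token distribution.

    Different seeds (different runs) can give different outputs
    for the identical prompt: generation is weighted sampling,
    not retrieval of a verified fact.
    """
    rng = random.Random(seed)
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

# The same "prompt" yields different continuations across runs:
outputs = {sample_next_token(NEXT_TOKEN_PROBS, seed=s) for s in range(50)}
print(outputs)  # typically several distinct tokens appear
```

Every output here is fluent and grammatical, including the wrong one. That is the failure mode a competent user is trained to expect.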

The Five Markers of Genuine AI Training

From observing what works — and what does not — across European organisations, I have identified five characteristics that distinguish competence-building programmes from checkbox exercises:

1. Domain-specific failure cases. Generic AI training uses generic examples. Effective training uses failures from the participant's own professional domain. A lawyer needs to see a hallucinated case citation. An HR manager needs to see a biased shortlisting output. A finance professional needs to see a confidently wrong calculation. If the training does not include domain-specific failure scenarios, it is not building the pattern recognition that prevents real-world errors.

2. Hands-on verification exercises. Participants should check AI output against known-correct sources during the training itself — not as homework, not as a theoretical concept. Verification is a skill that requires practice. The Mannheimer Swartling "analogue days" we reported in Issue #25 work precisely because they make verification a regular, assessed practice rather than an abstract principle.

3. Structured understanding of how the technology works. Not computer science depth, but enough to understand why LLMs hallucinate, how context windows affect output quality, and why the same prompt produces different results on different days. Edgars Rozentals is blunt: "If your AI training does not explain that an LLM is a statistical prediction engine with no understanding of truth, you have not trained anyone. You have given them a false mental model that will fail exactly when it matters most."

4. Role-differentiated curricula. A board member's AI literacy needs are different from a junior associate's. A compliance officer needs different knowledge than a marketing manager. Article 4 explicitly requires training "taking into account" the individual's role and technical background. One-size-fits-all programmes are not just ineffective — they may not satisfy the regulation's proportionality requirement.

5. Ongoing assessment, not one-time certification. AI tools change quarterly. Model capabilities shift. New failure modes emerge. A certificate from January is partially obsolete by June. Effective programmes build in quarterly refreshers and periodic competence assessments. The regulation says "sufficient" — a standard that moves with the technology.
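As a concrete illustration of markers 1 and 2, here is a minimal sketch of what a hands-on verification exercise can look like in code. The register contents are real instrument numbers, but the exercise design, the function, and the third citation are assumptions for this sketch: that citation is deliberately invented to play the role of a plausible-looking hallucination that participants must flag.

```python
# Hypothetical verification exercise: check each citation in an AI draft
# against a known-correct register instead of trusting output that
# merely "looks right".

KNOWN_REGISTER = {            # authoritative source list for the exercise
    "Regulation (EU) 2024/1689",   # the AI Act
    "Directive (EU) 2024/1760",    # the CSDDD
}

AI_DRAFT_CITATIONS = [
    "Regulation (EU) 2024/1689",
    "Directive (EU) 2024/1760",
    "Regulation (EU) 2025/0412",   # invented for this exercise: not in the register
]

def verify_citations(citations, register):
    """Return the citations that cannot be confirmed against the register
    and therefore must be checked by hand against primary sources."""
    return [c for c in citations if c not in register]

for citation in verify_citations(AI_DRAFT_CITATIONS, KNOWN_REGISTER):
    print(f"UNVERIFIED: {citation} -- check against the primary source")
```

The lesson is not the ten lines of code; it is the workflow they encode. An unverified citation is not "probably fine", it is unfinished work.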

The Enforcement Question

The European Commission's AI Office has been deliberately vague about Article 4 enforcement mechanisms, and this vagueness has given organisations permission to treat compliance as a checkbox exercise. But the trajectory is clear.

In November 2025, the AI Office published interpretive guidance stating that AI literacy must enable "critical assessment of AI-generated outputs." In December, the German Federal Commissioner for Data Protection (BfDI) issued a position paper linking AI literacy to GDPR accountability under Article 5(2). In January 2026, the Dutch Authority for Consumers and Markets (ACM) published a market study identifying "inadequate AI training" as a consumer protection risk in professional services.

The enforcement net is tightening. Not through dramatic fines — not yet — but through a convergence of existing regulatory frameworks (data protection, consumer protection, professional regulation) that collectively create accountability for organisations whose AI training does not match the complexity of their AI use.

Enforcement Signal | Jurisdiction | Date | Implication
AI Office interpretive guidance: "critical assessment" standard | EU | November 2025 | Checkbox training may not meet Article 4
BfDI position paper: AI literacy linked to GDPR accountability | Germany | December 2025 | Data protection authorities have enforcement mechanism
ACM market study: inadequate AI training as consumer risk | Netherlands | January 2026 | Consumer protection regulators entering the space
SRA thematic review: AI competence in regulated firms | UK | February 2026 | Professional regulators assessing actual competence
Latvian CDPC guidance: AI literacy in professional services | Latvia | Expected Q2 2026 | Baltic enforcement framework developing

What This Means for Your Organisation

If your organisation has completed AI training and filed the certificates, you have done the minimum. You have not necessarily done enough.

The question regulators will increasingly ask is not "did your staff attend AI training?" but "can your staff demonstrate AI competence appropriate to their role?" The shift from attendance to assessment, from certificates to capability, is where the real compliance obligation lies.

This is not a vendor pitch. We build training programmes, and I am telling you directly: most of what the market sells — including much of what our competitors sell — does not meet the standard that Article 4 contemplates. The companies that will be best positioned when enforcement matures are the ones investing in deep, role-specific, assessment-driven training now, before the regulatory expectations crystallise into checklists.


The Competence Question

Your firm completed Article 4 compliance training in Q4 2025. Everyone attended. Everyone received certificates. The legal department filed the documentation.

Six months later, a client asks your junior associate to review an AI-generated regulatory analysis of their supply chain obligations under the EU's Corporate Sustainability Due Diligence Directive. The associate uses your firm's AI tool. It produces a confident, detailed analysis. The associate reviews it, confirms it looks right, and sends it to the client.

The analysis contains two errors: a mischaracterised threshold provision, and a cited implementing regulation that did not enter into force until three months after the date the AI assumed for its analysis. Your associate did not catch either error. Your training programme never taught her how to verify regulatory timelines against primary sources — it taught her how to write prompts.

When the client discovers the errors, they will not ask whether your associate attended AI training. They will ask whether she was competent to deliver the work.


What To Do

  1. Audit your current AI training against the five markers above. Does it include domain-specific failure cases? Hands-on verification? Role differentiation? Ongoing assessment? If not, you have a programme that satisfies attendance records but may not satisfy Article 4's "sufficient" standard.

  2. Request your training provider's hallucination detection rate data. Ask what percentage of participants can identify AI-generated errors in domain-specific scenarios after completing the programme. If they cannot provide this data, they are not measuring what matters.

  3. Build verification practice into daily workflows, not just training days. Article 4 compliance is not an event — it is an ongoing capacity. Encourage teams to document one AI verification per week: what they checked, how they checked it, what they found. This creates both competence and compliance evidence.

  4. Differentiate training by role and risk level. A board member's AI literacy needs are different from a line manager's. Map your training investment to the risk each role carries. Article 4 explicitly requires proportionality.
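The weekly verification log in step 3 can be as lightweight as an appended spreadsheet row. The sketch below is illustrative only; the file name, field names, and example entry are assumptions, not a prescribed format. What matters is that each record captures what was checked, how, and what was found.

```python
import csv
from datetime import date
from pathlib import Path

# Hypothetical weekly AI-verification log: one row per check, building
# both competence and Article 4 compliance evidence over time.
LOG_FIELDS = ["date", "user", "ai_output_checked", "method", "finding"]

def log_verification(path, user, checked, method, finding):
    """Append one verification record, writing a header on first use."""
    new_file = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if new_file:
            writer.writeheader()
        writer.writerow({
            "date": date.today().isoformat(),
            "user": user,
            "ai_output_checked": checked,
            "method": method,
            "finding": finding,
        })

# Example entry (all details invented for illustration):
log_verification(
    Path("ai_verification_log.csv"),
    user="j.associate",
    checked="CSDDD threshold summary",
    method="compared against EUR-Lex consolidated text",
    finding="one threshold figure corrected",
)
```

Six months of rows in a file like this answers the regulator's question ("can your staff demonstrate AI competence?") far more convincingly than a stack of attendance certificates.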


Quick Reads

  • IDC European Digital Skills Tracker 2025 estimates the EU AI training market at EUR 2.4 billion, with 68% of spend going to programmes under 8 hours duration. The correlation between brevity and compliance-driven demand is not coincidental.

  • European Commission AI Office Article 4 Guidance — the November 2025 interpretive note uses "critical assessment" language that sets a higher bar than most training programmes target. If you have not read this guidance, you may be training to the wrong standard.

  • BfDI Position Paper on AI and GDPR Accountability links AI literacy directly to GDPR's accountability principle. German-operating firms should read this alongside their Article 4 compliance plans — the enforcement mechanism may arrive through data protection, not AI regulation.

  • ACM Market Study on AI in Professional Services identifies consumer protection risks from inadequate professional AI training. The Dutch approach suggests that enforcement will come from multiple directions simultaneously.


One Question

If a regulator asked your staff — not your compliance team, but the people who use AI tools every day — to explain how an LLM generates its outputs and why that matters for their specific work, how many could answer? And what does that gap tell you about the difference between the training you bought and the competence you need?


TwinLadder Weekly | Issue #26 | March 2026

Helping professionals build AI capability through honest education.