TwinLadder

EU AI Act

The Compliance Floor at 52: What Article 4 Actually Requires

The compliance floor sits at 52 out of 100 on the Twin Ladder Assessment. Below that line, an organisation cannot credibly demonstrate Article 4 compliance. Here is how we derived it, why it is not 50 or 60, and what it means for your regulatory exposure.

March 8, 2026 · Alex Blumentals, Founder & CEO · 8 min read

Everyone wants a number. Regulators will not give you one. So we derived it ourselves -- line by line, phrase by phrase, from the text of Article 4. The number is 52.


I have spent the past six months watching organisations treat Article 4 compliance like a procurement exercise. Buy a training programme. Run the sessions. Collect the certificates. Move on.

They are making the same mistake they made with GDPR. Treating a structural obligation as a checkbox exercise. And just as GDPR enforcement eventually caught the organisations that had privacy policies but no privacy practices, Article 4 enforcement will catch the organisations that have training receipts but no demonstrable competence.

The question these organisations are not asking -- the question that actually matters -- is: what does "sufficient" look like, expressed as a measurable standard?

We built the Twin Ladder Assessment Maturity Model to answer that question. After mapping every operative phrase in Article 4 against our six assessment pillars, the compliance floor falls at approximately 52 out of 100. Not 50. Not 60. Here is why.


How Article 4 Maps to Six Pillars

Article 4 is a single sentence containing seven distinct operative elements. We mapped each element to the six pillars of the Twin Ladder Assessment: Awareness, Policy & Data Protection, Training, Tools, Evidence, and Governance.

The mapping is not theoretical. Each phrase in Article 4 creates a specific, identifiable obligation that corresponds to observable indicators in the maturity model.

Article 4 Element                         | Primary Pillar | Secondary Pillar(s)
"Providers and deployers of AI systems"   | Tools          | Governance
"shall take measures"                     | Governance     | Evidence
"to their best extent"                    | Evidence       | Policy, Governance
"a sufficient level of AI literacy"       | Awareness      | Training
"staff and other persons"                 | Training       | Policy, Tools
"technical knowledge, experience..."      | Training       | Awareness
"context of use + affected persons"       | Awareness      | Policy, Tools, Governance

Every pillar is engaged by at least two operative elements. None is redundant. The heaviest burden falls on Awareness and Training (two primary mappings each), with Evidence and Governance appearing as enabling factors throughout.
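The primary-pillar counts can be verified directly from the mapping. The sketch below simply transcribes the table into a dictionary (the phrase keys and pillar names are taken verbatim from the table; the structure itself is illustrative, not part of the published model):

```python
from collections import Counter

# Primary pillar for each Article 4 operative element,
# transcribed from the mapping table (secondary pillars omitted).
PRIMARY_PILLAR = {
    "Providers and deployers of AI systems": "Tools",
    "shall take measures": "Governance",
    "to their best extent": "Evidence",
    "a sufficient level of AI literacy": "Awareness",
    "staff and other persons": "Training",
    "technical knowledge, experience...": "Training",
    "context of use + affected persons": "Awareness",
}

# Count primary mappings per pillar: Awareness and Training carry two each.
primary_counts = Counter(PRIMARY_PILLAR.values())
```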


The Minimum Threshold for Each Pillar

Each pillar has a weight and a minimum score that an organisation must reach to satisfy the Article 4 element it maps to. These minimums are not arbitrary. They represent the lowest maturity level at which the corresponding Article 4 obligation can credibly be met.

Pillar                   | Weight | Minimum Score | What the Minimum Looks Like
Awareness                | 0.15   | ~50           | At least 50% of AI-interacting roles received structured awareness communication. Staff can name the AI tools they use and articulate key risks.
Policy & Data Protection | 0.20   | ~50           | Formal AI use policy exists and is enforced. Acceptable and prohibited uses defined. GDPR data classification started for AI tools.
Training                 | 0.20   | ~50           | Structured training programme with role-specific content delivered to key teams. Not yet comprehensive, but differentiated.
Tools                    | 0.15   | ~50           | Complete AI systems inventory exists. Staff use designated, approved tools. Human review required for consequential decisions.
Evidence                 | 0.15   | ~55           | Training completions documented and centralised. Needs assessment on record. Basic evidence portfolio assembled for audit readiness.
Governance               | 0.15   | ~55           | Named responsible person designated for AI governance. Periodic review cycle scheduled. Ethics considerations documented.

The weighted average of these minimums: (0.15 × 50) + (0.20 × 50) + (0.20 × 50) + (0.15 × 50) + (0.15 × 55) + (0.15 × 55) = 51.5, which rounds to 52.


Why 52 and Not 50

At 50, an organisation sits at the top of the Developing stage -- the boundary between "we are aware of what we should do" and "we are doing it." The distinction matters because of three words in Article 4: "to their best extent."

Those three words create an evidentiary burden. An organisation must demonstrate not just that it has a programme, but that the programme represents a genuine, proportionate effort. At score 50, an organisation can have awareness, initial structures, and a drafted policy -- but typically lacks the documented evidence to prove its effort is genuine.

The Commission's own Q&A on AI literacy, published in May 2025, clarified this explicitly: merely directing staff to user manuals is "generally not considered sufficient." Generic, undifferentiated training fails the standard. An unenforced draft policy is not a "measure."

At 50, you can still have all of those deficiencies. At 52, the Evidence and Governance pillars have crossed into Implementing territory -- which means documented training records, a named governance owner, and periodic reviews. These are the minimum artefacts that allow an organisation to answer the question every regulator will eventually ask: show me what you did.


Why 52 and Not 60

At 60, an organisation has a strong, comprehensive programme. Role-specific training reaches most staff. Evidence is well-organised. Governance is active and informed. This is clearly compliant.

But requiring 60 as the floor would mean that an organisation with a genuine, structured programme -- training delivered to 80% of relevant staff, not yet 100%; contractor AI literacy clauses drafted but not yet in all vendor agreements -- would be classified as non-compliant.

Article 4 uses "sufficient level," not "comprehensive level." It requires measures taken "to their best extent," not "to perfection." Requiring 60 would contradict the proportionality principle embedded in the provision itself.

The 52 threshold represents the point at which an organisation has moved from awareness to action, from plans to documented measures, from informal responsibility to named accountability. It is the lowest score at which a regulator would find evidence of genuine effort. Below 52, they would not.


The GDPR Intersection You Cannot Ignore

Article 4 does not exist in a regulatory vacuum. Most organisations deploying AI systems process personal data through those systems. This triggers GDPR obligations that compound with Article 4 requirements.

The Policy & Data Protection pillar includes data protection indicators for precisely this reason. An AI use policy that ignores data classification is not just an Article 4 gap -- it is a GDPR gap.

The key intersections:

GDPR Provision                     | What It Means for AI Literacy                                                                        | Risk Level
Article 5(1)(f) -- Security        | Staff must understand data security implications of AI tool use                                      | Medium
Article 22 -- Automated decisions  | Staff operating AI that makes decisions about individuals must understand data subject rights        | High
Article 25 -- Privacy by design    | AI use policy must address data minimisation; staff must know what data is permissible per tool      | Medium
Article 32 -- Security measures    | Shadow AI (unapproved tools) is both a security risk and a literacy gap indicator                    | Medium
Article 35 -- DPIA                 | AI deployments likely require data protection impact assessments conducted by AI-literate reviewers  | High

An employee who pastes a full client file into ChatGPT violates Article 25 while simultaneously demonstrating the Article 4 literacy gap. A DPIA conducted by staff who lack AI literacy is a deficient DPIA. The failures compound.


Financial Exposure Below the Floor

Let me be precise about what scoring below 52 costs.

AI Act exposure (Article 99(4)): Up to EUR 15 million or 3% of worldwide annual turnover, whichever is higher.

GDPR exposure (Article 83(5)): Up to EUR 20 million or 4% of worldwide annual turnover, whichever is higher.

For an organisation using AI tools that process personal data -- which is most organisations -- these exposures overlap. Not perfectly, because GDPR penalties require a distinct data protection violation, not merely an AI literacy failure. But the Venn diagram is large. An AI literacy gap that leads to improper personal data processing creates dual liability.

Organisation Size (Annual Turnover) | AI Act Maximum | GDPR Maximum | Combined Exposure (20% Overlap Discount)
EUR 50M                             | EUR 15M        | EUR 20M      | ~EUR 28M
EUR 200M                            | EUR 15M        | EUR 20M      | ~EUR 28M
EUR 500M                            | EUR 15M        | EUR 20M      | ~EUR 28M
EUR 1B                              | EUR 30M (3%)   | EUR 40M (4%) | ~EUR 56M

The 20% overlap discount reflects the practical reality that regulators do not typically impose maximum penalties under both regimes for the same underlying conduct. But the exposure is not theoretical. DLA Piper's 2025 GDPR Fines Survey documented EUR 2.1 billion in cumulative GDPR fines since enforcement began. AI Act enforcement, which started in February 2025 for Article 4, will follow the same trajectory.
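The exposure figures in the table follow mechanically from the two penalty ceilings. A minimal sketch, using the statutory maxima from Article 99(4) AI Act and Article 83(5) GDPR; note that the 20% overlap discount is this article's own modelling assumption, not a statutory rule:

```python
def combined_exposure(turnover_eur: float, overlap_discount: float = 0.20) -> float:
    """Worst-case combined AI Act + GDPR exposure for a given annual turnover.

    AI Act Art. 99(4): up to EUR 15M or 3% of worldwide turnover, whichever is higher.
    GDPR Art. 83(5):   up to EUR 20M or 4% of worldwide turnover, whichever is higher.
    The overlap discount models regulators not stacking maximum penalties
    for the same underlying conduct (assumption, not law).
    """
    ai_act_max = max(15_000_000, 0.03 * turnover_eur)
    gdpr_max = max(20_000_000, 0.04 * turnover_eur)
    return (ai_act_max + gdpr_max) * (1 - overlap_discount)
```

For turnovers up to EUR 500M the fixed ceilings dominate, which is why the first three table rows are identical; the percentage ceilings only take over above EUR 500M.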


Three Things to Do if You Score Below 52

Strip away the complexity and three actions move you from non-compliant to defensible.

1. Build the inventory. You cannot demonstrate literacy for AI systems you do not know exist. Map every AI tool in use across the organisation -- including the ones embedded in existing software that nobody thinks of as "AI." This addresses the Tools pillar and takes days, not months.

2. Name the owner. Article 4 requires "measures," and measures require someone accountable for taking them. Designate a responsible person for AI governance. Not a committee. A person with authority, budget, and a reporting line. This addresses the Governance pillar and takes a decision, not a project.

3. Document what you have already done. Most organisations have done more than they realise -- awareness communications, informal training, tool evaluations. Collect this evidence into a structured portfolio. The Evidence pillar does not require perfection. It requires proof that you tried. Assemble what exists, identify the gaps, and create a plan to close them.

These three actions alone will not reach 52. But they establish the foundation -- the documented, governed, evidenced foundation -- that makes reaching 52 a matter of execution rather than transformation.


The Path from 52 to 75

Score 52 is the floor. It means "probably not penalised." It does not mean competent.

The distance from 52 to 75 -- from "barely compliant" to "confidently implementing" -- is where the real value lies. Organisations at 75 have role-differentiated training reaching all AI-interacting staff, including contractors. They have a living AI policy that is reviewed and updated as tools evolve. They have governance structures that catch problems before regulators do.

More importantly, they have something the floor does not measure: the capacity to adapt. New AI tools will emerge. Regulations will evolve. The Commission will publish additional guidance. Organisations at 52 will scramble to respond. Organisations at 75 will absorb the change because their structures already account for it.

Compliance is the floor. Competence is the mission. We built the Twin Ladder Assessment to measure both. The floor tells you where the penalties start. Everything above it tells you whether your people will still have the expertise to know when the AI is wrong.

Take the free self-assessment to see where your organisation stands. The Twin Ladder Competence Framework is published under CC BY-SA 4.0 -- free to adopt, adapt, and redistribute.


Sources

  1. Regulation (EU) 2024/1689 -- EU Artificial Intelligence Act, Article 4 (AI Literacy). Full text of the provision establishing AI literacy obligations for providers and deployers. EUR-Lex

  2. European Commission: AI Literacy -- Questions & Answers (May 2025). Clarified that directing staff to user manuals is "generally not considered sufficient" and that the obligation applies with no size threshold. Digital Strategy

  3. European Commission: Living Repository of AI Literacy Practices (2025). Collection of 40+ AI literacy initiatives from AI Pact pledgers, providing benchmarks for "best extent" compliance. Digital Strategy

  4. AI Act Service Desk -- Article 4: AI Literacy. Operational guidance from the European AI Office on practical implementation. AI Act Service Desk

  5. Regulation (EU) 2016/679 -- General Data Protection Regulation (GDPR), Articles 5(1)(f), 22, 25, 32, 35. Data protection provisions intersecting with AI literacy obligations.

  6. Article 99, Regulation (EU) 2024/1689 -- Penalty framework establishing up to EUR 15M or 3% of worldwide annual turnover for Article 4 violations. Commentary

  7. DLA Piper: GDPR Fines and Data Breach Survey, 7th Edition (January 2025). Documented EUR 2.1 billion in cumulative GDPR fines since enforcement began. DLA Piper

  8. Twin Ladder Assessment Maturity Model v1.0 -- Six-pillar rubric for AI literacy and competence assessment. CC BY-SA 4.0. TwinLadder Research

  9. EU AI Act Article 4 -- Twin Ladder Assessment Maturity Model Mapping. Line-by-line mapping of Article 4 operative phrases to assessment pillars. Internal decision record, March 2026.