
Twin Ladder Casebook

EU AI Act Article 4: The Regulation That Asks Whether Your People Can Handle What You Have Given Them

February 10, 2026 | regulator guidance

Article 4 of the EU AI Act is the most consequential provision that nobody prepared for. It requires every organization deploying AI to prove its people have sufficient literacy — and it arrived at the precise moment when the pathways through which professionals build that literacy are disappearing. This is the definitive reference on what Article 4 demands, what enforcement looks like, and why compliance is merely the floor.


A TwinLadder Research Capstone | Alex Blumentals, Liga, and Edgars | February 2026


I. Why This Regulation Is Different

Alex Blumentals

In twenty years of organizational change work, I have never encountered a regulation that so precisely targets the gap between what organizations believe they know and what they can actually do.

Most regulations tell organizations what they must not do. Do not process personal data without a legal basis. Do not deploy discriminatory hiring algorithms. Do not market financial products with misleading claims. These are prohibitions -- lines drawn in sand, boundaries beyond which penalties await. They require organizations to stop doing something, or to prove they are not doing it. Compliance teams understand this architecture. They are good at it. They build policies, run audits, produce documentation, and demonstrate that the prohibited thing is not happening.

Article 4 of the EU AI Act does something different. It does not tell organizations what they must not do. It tells them what their people must be. It says: before you hand someone an AI system, prove that they are competent to use it. Not that they have signed an acceptable use policy. Not that they have watched a forty-five-minute webinar. Prove that they understand what they are working with, at a level sufficient for the context in which they are using it.

This is, quietly, one of the most radical regulatory ideas of the past decade. And I say this as someone who has watched regulatory cycles come and go across European industries since the early 2000s. The General Data Protection Regulation was transformative, but its core demand was institutional: appoint a data protection officer, conduct impact assessments, maintain records of processing. These are organizational capabilities that can be built by specialists. Article 4's demand is different. It is personal. It applies to every individual who touches an AI system, and the organization must ensure that each of them is ready.

The instinct, when I present this to leadership teams across Europe, is to classify it as a training problem. Something for HR to handle. A line item in the learning and development budget. Schedule the e-learning module, collect the completion certificates, file them somewhere defensible, move on.

That instinct is wrong, and I want to explain why it is wrong, because the gap between "training delivered" and "competence achieved" is precisely the gap that Article 4 was written to close.

I have watched organizations deploy AI tools across entire departments with no more preparation than a vendor demonstration and a shared-drive link to the user manual. I have sat in board meetings where executives could not name the AI systems their own companies were using, let alone explain what those systems could or could not do. I have seen legal teams use AI-generated research without knowing what a hallucination is, finance teams accept AI-generated forecasts without understanding the model's training data, and HR departments screen candidates through algorithmic tools whose decision logic nobody in the building could articulate.

And I have seen what happens when those gaps produce consequences. Not in the abstract. In the specific, documented, named cases that we have been tracking in our Casebook series for the past year. When Klarna replaced 700 customer service agents with an AI assistant and then had to hire humans back because the AI optimized for speed while destroying trust. When an accounting firm's staff could not perform basic reconciliations after the AI vendor cancelled its contract, because nobody had maintained the manual skills. When a European insurer discovered it could not detect a fraud pattern because the junior adjusters who might have caught it had never learned what a legitimate invoice looks like -- they had only ever reviewed AI-generated recommendations.

These are not technology failures. They are competence failures. And they are precisely the failures that Article 4 was designed to prevent.

The regulation arrives at the precise moment when the thing it mandates -- competence -- is hardest to build. That timing is not a coincidence. It is the reason the regulation exists.


II. What Article 4 Actually Says

Liga

Let me be precise about the legal text, because precision matters and imprecision in this area will cost organizations money.

Article 4 of Regulation (EU) 2024/1689 -- the EU Artificial Intelligence Act -- states:

Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training, the context in which the AI systems are to be used, and considering the persons or groups of persons on whom the AI systems are to be used.

This is a single sentence. Twenty-five years in corporate law have taught me that single-sentence obligations are the most dangerous kind. They are the provisions that organizations underestimate, that compliance teams skim over, and that regulators use as the foundation for enforcement actions that nobody saw coming. GDPR Article 5 -- the principles of processing -- is one sentence per principle. It has generated billions in fines.

Let me unpack each operative phrase, because each one carries weight.

"Providers and deployers" -- This captures both sides of the AI supply chain. A provider is an organization that develops or places an AI system on the market. A deployer is any organization that uses an AI system "under its authority." A law firm using an AI-powered research tool is a deployer. A bank using AI credit scoring is a deployer. A marketing agency generating content with AI is a deployer. A logistics company running AI route optimization is a deployer. If your organization uses AI in any capacity, you are a deployer, and Article 4 applies to you.

This is not limited to high-risk AI systems. It is not limited to systems classified under Annexes I or III of the Act. Article 4 applies to all AI systems, including the general-purpose tools that most organizations consider too routine to regulate. Your email platform's AI features. Your CRM's predictive analytics. Your HR system's candidate matching. All of it.

"Shall take measures" -- This is an obligation, not a recommendation. The word "shall" in EU legislative drafting is mandatory. There is no opt-out, no de minimis threshold, and no exemption for organizations below a certain size. The European Commission's May 2025 Q&A confirmed this explicitly: the obligation applies from the moment an organization uses any AI system.

"To their best extent" -- This phrase introduces proportionality, not permission to ignore. An SME with twenty employees is not held to the same standard as a multinational with fifty thousand. But "best extent" means genuine effort, documented and demonstrable. The Commission's Q&A is clear: simply directing staff to read an AI system's user manual is generally not considered sufficient. Mere reliance on instructions for use is not compliance.

"Sufficient level of AI literacy" -- "Sufficient" is the operative word, and it is deliberately context-dependent. The European Commission has clarified that staff should understand what AI is, how it works, which AI systems are in use within their organization, and the associated opportunities and risks. But the depth required varies by role. A data scientist needs different literacy than a procurement manager, who needs different literacy than a receptionist. The standard is role-appropriate, context-sensitive, and proportional to the consequences of the AI's use.

"Staff and other persons dealing with the operation and use" -- The scope extends beyond employees. It covers contractors, consultants, temporary workers, outsourced service providers -- anyone who interacts with AI systems on the organization's behalf. This means that an organization cannot satisfy Article 4 by training its own employees while leaving its outsourced operations unaddressed.

"Taking into account... the context in which the AI systems are to be used, and considering the persons or groups of persons on whom the AI systems are to be used" -- This is the provision that makes one-size-fits-all training programmes legally insufficient. The law explicitly requires that literacy efforts reflect the specific context of use and the people affected by the AI system's outputs. A customer-facing AI chatbot requires different literacy from an internal analytics tool. An AI system that makes decisions about employment requires different literacy from one that optimizes warehouse logistics. The training must match the stakes.

The interaction with other provisions of the AI Act deserves attention. Article 4 is not a standalone obligation. It sits within a regulatory architecture where Article 9 requires risk management systems for high-risk AI, Article 14 requires human oversight of those same high-risk systems, and Article 26 places specific obligations on deployers of high-risk systems. Article 4's literacy requirement is the foundation on which all of these other obligations rest. You cannot have meaningful human oversight if the humans providing the oversight do not understand what they are overseeing.


III. Who This Applies To -- The Uncomfortable Truth

Alex Blumentals

I have watched C-suites assume Article 4 is an IT problem. It is not.

I have watched compliance officers assume it is a training catalogue problem -- find a vendor, schedule the sessions, track completions. It is not that either.

Article 4 applies to every person in the organization who interacts with an AI system in any capacity. In 2026, after two years of accelerating AI adoption across every sector and function, that means effectively everyone.

Consider what AI deployment looks like inside a typical mid-sized European company -- not a technology firm, not a Silicon Valley startup, but an ordinary company with two hundred employees across several departments.

Human resources uses an applicant tracking system with AI-powered resume screening. It uses AI for employee sentiment analysis and workforce planning. Every HR professional who reviews a candidate recommendation, accepts an AI-generated shortlist, or acts on an AI-derived insight about employee satisfaction needs to understand what the system is doing and what it cannot do. They need to know what bias looks like in algorithmic screening. They need to understand why an AI-generated ranking is not the same as a human judgment.

Finance and accounting uses AI for anomaly detection in transactions, automated reporting, cash flow forecasting, and regulatory compliance monitoring. These professionals need to understand the reliability limitations of AI predictions. They need to grasp what it means when a model was trained on historical data that may not reflect current conditions. They need to know when to override an automated recommendation and when to trust it.

Marketing and sales uses AI for content generation, customer segmentation, predictive analytics, and personalization engines. These teams need to understand what hallucination means in the context of AI-generated marketing claims. They need to know that an AI-generated product description may contain fabricated specifications. They need to understand the ethical and legal dimensions of AI-driven customer targeting.

Operations and supply chain uses AI for demand forecasting, inventory optimization, quality control, and logistics. These teams need to understand how models make predictions, what happens when real-world conditions diverge from training data, and when to escalate rather than accept an automated decision.

Legal and compliance carries a double burden. They use AI for contract review, legal research, regulatory monitoring -- but they also bear responsibility for ensuring the organization's broader Article 4 compliance. I have spoken with general counsel who were startled to learn that Article 4 applies to their paralegals' use of AI research tools. They assumed the obligation was about "AI systems" in the formal, high-risk sense -- not about ChatGPT.

Management and leadership must understand AI at a strategic level: what AI investments to make, what risks they create, how to govern AI deployment across the organization. A board that cannot ask intelligent questions about its company's AI use is a board that cannot fulfill its oversight function. Article 4 does not exempt the C-suite. If anything, its context-dependent standard implies that those with the most consequential decision-making authority need the deepest understanding.

This is the uncomfortable truth that I have delivered to every leadership team I have worked with in the past year: Article 4 is not a programme you implement. It is a capability you build. And the organizations that treat it as a box-checking exercise will discover -- as organizations always discover with European regulation -- that the box was much larger than they assumed.


IV. The Enforcement Landscape

Liga

Let me contextualise the enforcement risk, because I find that organizations respond better to regulatory obligations when they understand the enforcement trajectory -- not just the theoretical penalties, but the practical pattern of how European regulators actually behave.

Article 4 became applicable on 2 February 2025. It was among the first provisions of the AI Act to become binding, alongside the prohibitions on unacceptable-risk AI practices. This was a deliberate legislative choice: the European Parliament and Council recognized that AI literacy is foundational. Organizations cannot properly implement the Act's more complex requirements -- high-risk system conformity assessments, technical documentation, quality management systems -- without first ensuring that their personnel understand AI at a sufficient level.

The supervision and enforcement rules apply from 2 August 2025 onward. As of that date, Member States were required to have designated their national competent authorities, consisting of at least one market surveillance authority and one notifying authority. The AI Office became fully operational on the same date, alongside the AI Board -- a coordination body of Member State representatives tasked with ensuring consistent application across jurisdictions.

The penalty framework is severe. The AI Act establishes tiered administrative fines: up to EUR 35 million or 7% of worldwide annual turnover for prohibited AI practices; up to EUR 15 million or 3% for violations of other obligations; and up to EUR 7.5 million or 1% for supplying incorrect information to authorities. Article 4 violations fall within the general obligations tier. For a company with EUR 500 million in annual revenue, the maximum exposure for an AI literacy failure alone is EUR 15 million.
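
To make the tier arithmetic concrete, here is a minimal sketch of the ceiling calculation -- my own illustration, not language from the Act. Article 99 sets the cap at whichever of the fixed amount and the turnover percentage is higher (for SMEs, the lower of the two):

```python
# A minimal sketch of the AI Act's fine ceilings (Article 99). For most
# undertakings the cap is the HIGHER of the fixed amount and the turnover
# percentage; for SMEs the Act applies the lower of the two.

TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # up to EUR 35M or 7%
    "general_obligations": (15_000_000, 0.03),    # up to EUR 15M or 3%
    "incorrect_information": (7_500_000, 0.01),   # up to EUR 7.5M or 1%
}

def fine_ceiling(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    """Maximum administrative fine ceiling for a given tier and turnover."""
    fixed, pct = TIERS[tier]
    candidates = (fixed, pct * annual_turnover_eur)
    return min(candidates) if sme else max(candidates)

# The worked example from the text: EUR 500M turnover, general-obligations tier.
print(fine_ceiling("general_obligations", 500_000_000))  # 15000000.0 -> EUR 15M
```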

But the financial penalties, while significant, are not the principal enforcement risk. The compounding risk is.

An organization that deploys a high-risk AI system without ensuring operator literacy may face penalties both for the high-risk system non-compliance and for the underlying literacy failure. Article 4 is a foundational obligation -- its absence aggravates the assessment of every other violation that follows from it. If your HR department deploys an AI screening tool without understanding its bias risks, you face potential penalties for the high-risk deployment (Article 26), the absence of adequate human oversight (Article 14), and the failure to ensure sufficient AI literacy (Article 4). The penalties stack.

The GDPR enforcement pattern is instructive. When the General Data Protection Regulation took effect in May 2018, enforcement was glacial. Only sixteen fines were issued in 2018, and only one exceeded EUR 100,000. Some Member States -- Italy among them -- announced informal grace periods. Ireland received 1,928 complaints in the first six months and issued zero penalties.

Then the acceleration began. By 2020, authorities were issuing approximately 300 fines per year. By 2024, cumulative GDPR fines had reached EUR 5.88 billion, with EUR 1.2 billion imposed in 2024 alone. The fines against LinkedIn (EUR 310 million), Clearview AI (EUR 30.5 million), and OpenAI (EUR 15 million) demonstrated that enforcement had moved beyond data breach notifications into the structural governance of AI systems.

The AI Act will follow the same trajectory. The early months will be quiet. Organizations will mistake this silence for safety. And then the enforcement wave will arrive, targeting the organizations that treated the silence as permission to wait.

Beyond formal penalties, consider the reputational and operational risks. An organization whose personnel misuse AI because they were never trained -- a lawyer who submits hallucinated case citations, a credit analyst who accepts a biased AI recommendation without scrutiny, a welfare system that flags vulnerable claimants for fraud because nobody understood what the algorithm was doing -- faces not only regulatory exposure but civil liability, client claims, and public trust damage that no fine schedule can capture.


V. The Competence Paradox

Alex Blumentals

This is the section I have been building toward. It is the reason TwinLadder exists. It is the thesis that informs everything we do.

Article 4 mandates AI literacy. That is its legal function. But Article 4 arrives at a moment when the mechanisms through which professionals have traditionally built literacy -- in any domain, not just AI -- are being systematically dismantled by the very technology that the regulation seeks to govern.

Let me make this concrete.

A junior lawyer in a European firm in 2019 spent their first three years drafting contracts, reviewing documents, conducting legal research, and preparing memoranda. This work was tedious, repetitive, and -- critically -- educational. Every clause they drafted, every case they summarized, every inconsistency they caught built a layer of professional judgment. They learned what a well-constructed argument looks like by constructing bad ones and having them corrected. They learned what matters in a contract by reading hundreds of contracts and gradually developing the instinct to distinguish the important from the routine. This was the apprenticeship model. It was inefficient. It was also the only reliable mechanism for producing competent professionals.

A junior lawyer in the same firm in 2026 uses AI for most of these tasks. The AI drafts the contract. The AI summarizes the case law. The AI flags the inconsistencies. The junior lawyer reviews the output, confirms it, and moves on. Their throughput is extraordinary. Their billable hours are impressive. And they have never built the judgment that the reviewing partner assumes they possess, because the work that builds judgment has been absorbed by the machine.

This is the competence paradox. It operates in every profession, not just law.

A field experiment published in the Proceedings of the National Academy of Sciences in 2025 documented it with devastating precision. Researchers gave nearly one thousand students access to AI tutoring while practicing mathematics. The students with AI assistance solved 48% more problems correctly. Then the researchers removed the AI and administered an exam. The AI-assisted students scored 17% lower than those who had never had AI help at all. Forty-eight percent better at doing. Seventeen percent worse at understanding. The tool that made them look competent had prevented them from becoming competent.

I see this pattern everywhere. In the insurance sector, where entry-level claims processing has been automated to the point where junior adjusters never learn what a legitimate invoice looks like -- and then cannot detect fraud when it arrives in a pattern the AI was not trained to recognise. In accounting, where a firm discovered its staff could not perform manual reconciliations after the AI vendor cancelled its contract, because the skills had atrophied during years of automated processing. In financial services, where Klarna's decision to replace 700 human agents with AI produced faster resolution times and lower costs -- and then a reversal, because the AI could not exercise the judgment that the humans had taken for granted.

The data is accumulating. Entry-level job postings in the United States have declined approximately 35% since January 2023, with 66% of enterprises reducing entry-level hiring specifically because of AI capabilities. As growth strategist Joe Puglisi put it with characteristic bluntness: "There is no training going on at the lower level, and this is a disaster waiting to happen."

He is right. And the disaster has a specific shape. AI eliminates the entry-level tasks where professionals learn. It simultaneously automates the senior tasks where experienced professionals apply their judgment. The competence debt -- the gap between what an organization needs its people to know and what they actually know -- builds invisibly. Nobody notices because the AI is performing. The metrics are up. The throughput is impressive. The people look productive.

Then something goes wrong. The AI makes an error that requires human judgment to catch, and nobody in the room has the judgment to catch it. The vendor terminates the contract, and nobody knows how to do the work manually. A regulator asks what your people understand about the AI systems they operate, and nobody can answer.

This is what Article 4 is really about. Not training. Not literacy as a checkbox. Competence as an organizational capability -- the capacity to understand, question, verify, and override the AI systems that increasingly make decisions on the organization's behalf.

The regulation arrives at precisely the moment when the thing it mandates is hardest to build. And that timing, I have come to believe, is not a policy accident. It is the regulators' recognition that without a legal obligation, most organizations will not build this capability until the absence of it produces a crisis. The regulation is an attempt to force the investment before the crisis arrives.


VI. What "Sufficient" Actually Means in Practice

Edgars

The legal framework is clear. The practical question -- what does AI literacy actually look like when you try to implement it -- is where most organizations struggle.

I build assessment and training systems. I have spent the past year designing competence frameworks for organizations trying to comply with Article 4. Let me share what I have learned about what "sufficient" means when you move from legal text to operational reality.

One-size-fits-all does not work. The law says this explicitly -- "taking into account their technical knowledge, experience, education and training... the context in which the AI systems are to be used" -- but organizations keep trying to satisfy Article 4 with a single e-learning module pushed to all employees. The European Commission's May 2025 Q&A confirmed what practitioners already knew: merely referring staff to the AI system's instructions for use is generally not considered sufficient. Frequent and specific training is required, and the content must be evaluated on a case-by-case basis.

Competence mapping is the starting point. Before you can train anyone, you need to know what AI systems your organization uses, who interacts with them, and what the consequences are if those interactions go wrong. This is the AI systems inventory that most organizations have not conducted. I have worked with companies that discovered they were running thirty-seven AI-enabled tools across the organization -- and the IT department knew about twelve of them. Shadow AI is pervasive. You cannot ensure literacy for systems you do not know exist.

Role-based assessment is the architecture. Based on my experience building these systems, I think of AI literacy as operating at three levels, which map roughly to the Twin Ladder framework; a minimal sketch of how these levels might be encoded follows the list:

Level 0 -- AI Awareness is the floor that Article 4 mandates for everyone. What is AI? What is it doing in the tools I use daily? What can it do well? What can it get wrong? What are hallucinations, and why do they matter for my work? Every person in the organization who touches an AI system needs this -- and in 2026, that is nearly everyone. This is not a technology course. It is a professional literacy requirement, comparable to understanding that the spreadsheet's formula might contain an error.

Level 1 -- Professional AI Competence is what Article 4 implies for anyone whose role involves consequential AI-assisted decisions. HR professionals screening candidates, finance professionals relying on AI forecasts, legal professionals using AI research tools, operations managers acting on AI-generated demand predictions -- these roles require deeper understanding. They need to know how to verify AI outputs. They need to understand what the AI's confidence scores actually mean. They need to recognize the situations where the AI is likely to fail and human judgment must take over. They need domain-specific verification skills: a legal professional needs to know how to check whether a case citation exists; a financial analyst needs to know how to validate a forecast against fundamental assumptions.

Level 2 -- AI Governance is what organizations need at the leadership and oversight level. Board members, C-suite executives, compliance officers, and data protection officers need to understand AI at a strategic and governance level -- not how to prompt a model, but how to set policy for AI deployment, how to assess whether the organization's AI use is creating unacceptable risk, and how to ensure that the organization's AI governance framework is adequate.
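
As a sketch of how this architecture can be encoded -- my own illustration, with hypothetical role names; nothing here is prescribed by the regulation:

```python
from enum import IntEnum

class LiteracyLevel(IntEnum):
    AWARENESS = 0      # Level 0 -- the Article 4 floor for everyone
    PROFESSIONAL = 1   # Level 1 -- consequential AI-assisted decisions
    GOVERNANCE = 2     # Level 2 -- leadership and oversight

# Hypothetical role assignments; a real mapping comes out of the literacy
# needs assessment, not a hard-coded table.
REQUIRED_LEVEL = {
    "receptionist": LiteracyLevel.AWARENESS,
    "hr_recruiter": LiteracyLevel.PROFESSIONAL,
    "financial_analyst": LiteracyLevel.PROFESSIONAL,
    "compliance_officer": LiteracyLevel.GOVERNANCE,
    "board_member": LiteracyLevel.GOVERNANCE,
}

def meets_requirement(role: str, attained: LiteracyLevel) -> bool:
    """Higher levels subsume lower ones, so an ordering check suffices."""
    return attained >= REQUIRED_LEVEL[role]
```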

Verification skills are the core competence. Across all roles, the single most important literacy skill is verification -- the ability to check whether an AI output is correct, complete, and appropriate for the context. This is where the cognitive paradox research is most instructive. Students who used AI as an answer machine lost the ability to verify answers independently. Professionals who use AI as an output machine without developing verification habits are building the same dependency. An AI literacy programme that teaches people how to use AI tools without teaching them how to check the AI's work has missed the point entirely.

Assessment must be practical, not theoretical. The Commission's Q&A states that Article 4 does not require formal measurement of employee knowledge. But organizations that want to demonstrate "best extent" compliance need some form of competence verification. What works, in my experience, is scenario-based assessment: present a professional with an AI output relevant to their role and ask them to evaluate it. Can the HR manager identify the bias signal in an AI-generated candidate ranking? Can the financial analyst explain why the AI's forecast diverged from the previous quarter's actuals? Can the legal professional spot the hallucinated citation in an AI-generated memorandum? These are practical skills that can be assessed without formal certification, and they produce far better evidence of literacy than a quiz on AI terminology.
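
A minimal sketch of what recording such a scenario-based check might look like -- the structure and field names are my own illustration, not a required format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ScenarioAssessment:
    """One scenario-based check: a role-relevant AI output with a known,
    planted flaw, and whether the assessee caught it."""
    person: str
    role: str
    ai_system: str        # drawn from the AI systems inventory
    planted_flaw: str     # e.g. "hallucinated case citation"
    flaw_identified: bool
    assessed_on: date

def pass_rate(results: list[ScenarioAssessment]) -> float:
    """Share of scenarios in which the planted flaw was identified."""
    return sum(r.flaw_identified for r in results) / len(results)
```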

Ongoing maintenance is non-negotiable. AI capabilities change. Organizational AI use evolves. New tools are adopted, old ones are updated, and the risk landscape shifts continuously. A training programme that was adequate in January may be inadequate by June. The Commission's living repository of AI literacy practices -- which now contains more than forty initiatives from companies and public sector bodies -- emphasises that AI literacy is an ongoing process, not a one-time event. Organizations need recurring assessment, updated content, and mechanisms to track whether their people's competence is keeping pace with the AI systems they operate.


VII. Compliance Architecture

Liga

Let me translate the practical requirements into a compliance architecture -- the documentation and processes that an organization needs to demonstrate Article 4 compliance when a regulator asks.

I draw on the GDPR parallel deliberately, because I have spent eight years advising organizations on data protection compliance, and the structural similarity is striking. In 2018, organizations that built GDPR compliance programmes early captured lasting advantage. Those that waited -- assuming enforcement would be slow, that their sector would be overlooked, that the regulation's vagueness gave them room to delay -- spent two to three times more on rushed, defensive compliance when the enforcement wave arrived. The same pattern will repeat with Article 4.

The AI Systems Inventory. Before anything else, the organization must know what AI systems it operates. This is the equivalent of GDPR's record of processing activities. It maps every AI system in the organization to the business function it serves, the personnel who interact with it, the data it processes, and the decisions it influences. Without this inventory, every subsequent compliance step rests on incomplete information.

Most organizations discover, when they conduct this inventory, that their AI footprint is far larger than they assumed. The marketing team's content generation tools. The finance team's automated reporting. The customer service chatbot. The predictive maintenance system in operations. The sentiment analysis tool in HR. These are all AI systems within the meaning of the Act, and Article 4 applies to each of them.
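
A minimal sketch of what one inventory entry might capture -- the fields mirror the mapping described above, but the structure itself is illustrative, not mandated:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI systems inventory: a system mapped to the business
    function it serves, the personnel who touch it, the data it processes,
    and the decisions it influences."""
    name: str                     # e.g. "CRM predictive analytics"
    business_function: str        # e.g. "sales"
    interacting_roles: list[str]  # who operates it or acts on its outputs
    data_processed: list[str]     # e.g. ["customer contact history"]
    decisions_influenced: str     # e.g. "lead prioritisation"
    high_risk: bool = False       # Annex III classification, where applicable

inventory: list[AISystemRecord] = []  # populated by the discovery exercise
```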

The Literacy Needs Assessment. This maps the inventory to the people. For each AI system, which roles interact with it, in what capacity, and with what level of consequence? The needs assessment produces a matrix: roles on one axis, AI systems on the other, required literacy level at each intersection. This document is the foundation for everything that follows -- training design, resource allocation, and compliance evidence.
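
Sketched as a data structure -- illustrative names throughout, reusing the LiteracyLevel enum from Edgars's sketch in Section VI -- the matrix is simply a lookup from role and system to required level, defaulting to the Article 4 floor:

```python
from enum import IntEnum

class LiteracyLevel(IntEnum):  # as in the Section VI sketch
    AWARENESS = 0
    PROFESSIONAL = 1
    GOVERNANCE = 2

# Roles on one axis, AI systems on the other, required level at each cell.
# All role and system names here are hypothetical.
needs_matrix: dict[str, dict[str, LiteracyLevel]] = {
    "hr_recruiter": {
        "resume_screening_ats": LiteracyLevel.PROFESSIONAL,
        "email_ai_features": LiteracyLevel.AWARENESS,
    },
    "financial_analyst": {
        "cashflow_forecaster": LiteracyLevel.PROFESSIONAL,
    },
}

def required_level(role: str, system: str) -> LiteracyLevel:
    """Default to the Article 4 floor when no specific entry exists."""
    return needs_matrix.get(role, {}).get(system, LiteracyLevel.AWARENESS)
```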

Training Programmes. These must be calibrated to the needs assessment, providing role-appropriate content at each literacy level. The Commission's Q&A confirms that the content of training is not fixed and will vary based on the experience of the staff. What matters is that the training is genuine, structured, and documented -- not perfunctory.

The Commission has been clear that internal records of training activities are sufficient documentation. External certification is not required. But the records must demonstrate that the organization made a genuine effort to provide literacy appropriate to the context. Training records, attendance lists, content summaries, and competence verification results create the evidentiary foundation.
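
A minimal sketch of what one such internal record might hold -- the field names are my own; the Commission prescribes no particular format:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TrainingRecord:
    """Internal evidence of one literacy measure. Per the Commission's Q&A
    as described above, internal records suffice, so no external
    certification is modelled here."""
    programme: str             # e.g. "AI verification skills for finance"
    audience_roles: list[str]
    delivered_on: date
    content_summary: str
    attendees: list[str]
    verification_result: str | None = None  # link to a competence check, if any
```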

Competence Verification. While the Commission has clarified that formal measurement of employee knowledge is not mandated, organizations that want to demonstrate "to their best extent" compliance will be in a stronger position if they can show that their training actually achieved its objective. Scenario-based assessments, practical exercises, and role-specific competence checks provide stronger evidence than completion certificates alone.

Ongoing Review and Update. AI literacy is an ongoing obligation, not a one-time event. Organizations must update their training programmes as AI capabilities evolve, as new AI tools are adopted, and as regulatory guidance develops. An annual review cycle, at minimum, is prudent. Organizations operating in high-consequence domains -- financial services, healthcare, legal, human resources -- should consider more frequent reviews.

Documentation of Compliance Efforts. Everything must be documented. The needs assessment, the training design, the training delivery, the competence verification, the review cycles, the updates. "To their best extent" is a standard that can only be demonstrated with evidence. When a regulator asks -- and regulators will ask -- the organization's response must be a documented programme, not a verbal assurance.

The living repository that the Commission launched provides useful benchmarks. The first batch of practices was gathered from AI Pact pledgers between December 2024 and February 2025, and a second survey expanded the repository through mid-2025. Organizations can benchmark their own programmes against the more than forty initiatives now documented -- from e-learning platforms and in-person training to bootcamps and industry-academia collaborations.


VIII. Beyond Compliance -- The Competence Mission

Alex Blumentals

Compliance is the floor. Competence is the mission.

I have used that phrase so often in the past year that it has become something of a signature. But I keep using it because it captures something that I believe most organizations -- and most compliance frameworks -- systematically miss.

Article 4 asks organizations to ensure that their people have "sufficient AI literacy." That is its legal requirement. An organization that provides adequate training, documents its efforts, and maintains its programme over time will satisfy the regulation. It will be compliant. It will avoid penalties. And it will have gained precisely nothing beyond the avoidance of penalties.

The organizations that I admire -- and I have worked with enough of them over two decades to know the pattern -- are the ones that use regulatory obligations as a catalyst for something larger. The companies that treated GDPR not merely as a compliance burden but as an opportunity to rebuild their relationship with customer data. The firms that used Solvency II not merely as a capital requirement but as a reason to fundamentally improve their risk modelling. The organizations that understood that the regulation was pointing at something real -- a genuine deficit, a genuine risk -- and used the compliance investment to address the underlying problem rather than merely satisfying the legal standard.

Article 4 is pointing at something real. The deficit it identifies -- the gap between AI deployment and human competence -- is not a regulatory fiction. It is the most significant organizational risk of this decade. And the organizations that treat Article 4 as a catalyst for building genuine AI competence will emerge from this regulatory cycle with a competitive advantage that their checkbox-compliant peers will not be able to replicate.

What does this look like in practice? I think it looks like three things.

First, it looks like competence as a strategic investment, not a compliance cost. The evidence from early AI adopters is consistent: organizations with structured AI training programmes report higher productivity gains, lower error rates, and better return on AI technology investments than organizations where AI adoption is informal and unguided. McKinsey's research on AI implementation consistently shows that the primary determinant of AI ROI is not the quality of the technology but the readiness of the people who use it. Article 4 compliance, done well, pays for itself.

Second, it looks like closing the competence paradox deliberately. Organizations that understand the paradox -- that AI simultaneously creates the need for competence and destroys the pathways through which competence is built -- can design around it. This means creating structured opportunities for professionals to build judgment, not just use tools. It means preserving the apprenticeship functions that AI automates -- not out of nostalgia, but because those functions are the mechanisms through which the next generation of competent professionals is produced. It means being intentional about what remains human in an increasingly automated workflow.

Third, it looks like building an evidence portfolio -- not for the regulator, but for the organization. A competence framework that maps every role to its AI literacy requirements, verifies competence on an ongoing basis, and adapts as the technology evolves is not just a compliance document. It is an organizational capability map. It tells leadership where the competence gaps are, where the risks concentrate, and where the investment in human development will produce the greatest return.

This is the approach we have built at TwinLadder: Assess the current state. Learn through structured, role-appropriate programmes. Apply through practical, scenario-based exercises. Verify through competence assessment. And then repeat -- because the technology does not stand still, and neither can the people who use it.

The GDPR services market offers a preview: it grew from an initial wave of compliance spending in 2018 to USD 2.75 billion by 2024, and is projected to reach USD 20.82 billion by 2033. The organizations that were prepared to deliver GDPR compliance services in 2018 defined the market for a decade. The AI literacy market will follow the same trajectory -- driven by the same dynamics of mandatory obligation, initial underestimation, enforcement acceleration, and eventual recognition that compliance is not a one-time event but a permanent organizational function.

Article 4 is already in force. The question is not whether to respond. The question is whether you respond as a compliance exercise or a competence mission. Whether you build the minimum programme that satisfies the legal standard, or the organizational capability that the legal standard is pointing toward.

I know which option I would choose. I know which option the organizations that thrive in the next decade will choose. And I know that the difference between the two -- the gap between compliance and competence -- is where the real value lies.


Key Takeaways

  • Article 4 of the EU AI Act has applied since 2 February 2025. It requires every organization that deploys or uses AI systems to ensure that all personnel interacting with those systems have a sufficient level of AI literacy. This obligation is already binding.

  • It applies to everyone. Not just technology companies, not just high-risk AI operators. Every organization using AI in any capacity -- which in 2026 means effectively every organization in the EU -- is within scope.

  • The scope extends beyond employees to contractors, consultants, temporary workers, and anyone else who operates or uses AI systems on the organization's behalf.

  • One-size-fits-all training is legally insufficient. Article 4 explicitly requires that literacy measures account for the individual's role, the AI system's context, and the people affected by its outputs. The European Commission has confirmed that merely directing staff to user manuals is not sufficient.

  • The penalty framework is severe. Administrative fines of up to EUR 15 million or 3% of worldwide annual turnover for general obligation violations, with the potential for compounding penalties when literacy failures contribute to other non-compliance.

  • The GDPR enforcement pattern will repeat. Initial quiet does not mean permanent safety. GDPR fines were negligible in 2018; by 2024, cumulative fines had reached EUR 5.88 billion. Organizations that treat the AI Act's early enforcement period as permission to delay are making the same mistake that GDPR laggards made.

  • The competence paradox makes this harder than it appears. AI is simultaneously eliminating the entry-level tasks through which professionals have traditionally built judgment, and automating the senior tasks that require it. The competence debt accumulates invisibly until a crisis reveals it.

  • Compliance requires documented effort. An AI systems inventory, a literacy needs assessment, role-appropriate training programmes, competence verification, ongoing updates, and comprehensive documentation form the minimum compliance architecture.

  • The European Commission has published practical guidance. The AI Literacy Q&A (May 2025), the living repository of AI literacy practices, and the AI Act Service Desk provide actionable reference points.

  • Compliance is the floor. Competence is the mission. Organizations that treat Article 4 as a box-checking exercise will comply with the law but gain nothing beyond penalty avoidance. Organizations that use it as a catalyst for building genuine AI competence will comply and gain a sustainable advantage in the AI-transformed economy.