AI in HR: Article 4 Compliance for People Teams
Article 4 does not say "IT departments". It does not say "legal teams". It says anyone dealing with the operation and use of AI systems. In most organisations, that description fits HR more than any other function.
When the EU AI Act's Article 4 became enforceable on 2 February 2025, most compliance conversations centred on legal departments and IT governance. That focus missed the obvious: HR departments are among the most prolific deployers of AI systems in any modern organisation. They use AI to screen candidates, analyse employee performance, recommend learning paths, predict attrition, and automate workforce planning. Each of these applications triggers Article 4's literacy mandate -- and several fall squarely within the EU AI Act's high-risk classification.
If your people function has not yet been brought into Article 4 planning, you have a gap that grows wider with every AI-augmented decision your HR team makes.
HR as AI Deployer: The Scope Problem
Article 4 requires that "providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf." The text is technology-neutral and function-neutral. It applies wherever AI systems operate.
Consider the AI systems a typical mid-size European HR department now runs:
Recruitment and screening. Platforms like HireVue use AI to assess video interviews, scoring candidates on verbal and non-verbal cues. Harver (which acquired Pymetrics in 2022) deploys neuroscience-based games with AI-driven matching algorithms. Eightfold AI uses deep learning to match candidate profiles against role requirements. Each of these is an AI system under the Act's definition, and the HR professionals configuring and interpreting them are deployers.
Performance analytics. Tools that aggregate productivity data, flag disengagement patterns, or recommend development actions deploy machine learning models that HR staff interact with daily.
Learning and development. AI-powered platforms that personalise training recommendations, assess skill gaps, or predict career trajectories all qualify. The irony is rich: the AI system recommending your training programme is itself subject to the literacy mandate.
Workforce planning. Predictive models for attrition, headcount forecasting, and internal mobility rely on AI in ways most HR professionals are broadly aware of but few understand at a mechanical level.
The HR function does not merely "use" these tools. It configures them, interprets their outputs, and acts on their recommendations in ways that directly affect people's employment and livelihoods.
Why HR Gets Special Regulatory Attention
The EU AI Act does not treat all AI systems equally. Annex III designates certain categories as high-risk, subjecting them to enhanced obligations including conformity assessments, human oversight requirements, and detailed documentation. Employment and workforce management AI appears explicitly in Annex III, Section 4:
AI systems intended to be used for recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates ... AI systems intended to be used to make decisions affecting terms of work-related relationships, promotion or termination of work-related contractual relationships.
This is not a marginal mention. Employment AI is among the Act's primary regulatory concerns, placed alongside biometric identification, critical infrastructure, and law enforcement. The practical implication: HR teams deploying recruitment AI face not only Article 4 literacy obligations but the full weight of the Act's high-risk framework.
Additionally, GDPR Article 22 already grants individuals the right not to be subject to decisions based solely on automated processing that produce legal effects or similarly significant effects concerning them -- and employment decisions are among the clearest examples. An HR professional who cannot explain how an AI screening tool reached its recommendation is not merely non-compliant with Article 4; they may be enabling a GDPR violation.
What "Sufficient AI Literacy" Means for HR
Article 4's "sufficient level" standard is deliberately contextual. For HR professionals, sufficiency must account for the specific systems they deploy, the sensitivity of employment decisions, and the regulatory overlay of both the AI Act and GDPR. In practice, this translates to several concrete capabilities.
Understanding system mechanics at a functional level. An HR professional using Eightfold AI for candidate matching does not need to understand transformer architectures. They do need to understand that the system learns from historical hiring data, that this data may encode past biases, and that match scores are probabilistic rather than deterministic.
Recognising bias vectors. The European Commission's High-Level Expert Group on AI has consistently flagged recruitment AI as a primary bias concern. HR teams need to understand how training data, proxy variables, and feedback loops can introduce or amplify discrimination -- not as abstract concepts but as practical risks in the tools they use daily.
Evaluating outputs critically. When a screening tool recommends rejecting 200 of 300 applicants, an HR professional with sufficient literacy asks: What criteria drove the filtering? Were any protected characteristics correlated with rejection? Does the rejection rate vary across demographic groups? Without this capability, the tool's output becomes an unchallengeable black box.
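The third question above -- does the rejection rate vary across demographic groups -- can be made concrete with a simple disparity check. The sketch below is illustrative only: the group labels and counts are invented, and the four-fifths rule (a threshold from US adverse-impact analysis) stands in for whatever disparity test your jurisdiction and legal counsel require.

```python
# Hypothetical sketch: a basic adverse-impact check on a screening tool's
# output. Group labels and counts are illustrative, not real data.
from collections import Counter

def selection_rates(outcomes):
    """outcomes: iterable of (group, selected_bool) pairs from the tool."""
    totals, selected = Counter(), Counter()
    for group, picked in outcomes:
        totals[group] += 1
        if picked:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def four_fifths_check(rates):
    """Pass/fail per group: a group fails if its selection rate falls below
    80% of the highest group's rate (the 'four-fifths rule')."""
    best = max(rates.values())
    return {g: (r / best) >= 0.8 for g, r in rates.items()}

# Illustrative run: 300 applicants, 90 advanced by the tool.
outcomes = ([("group_a", True)] * 70 + [("group_a", False)] * 130
            + [("group_b", True)] * 20 + [("group_b", False)] * 80)

rates = selection_rates(outcomes)   # group_a: 0.35, group_b: 0.20
flags = four_fifths_check(rates)    # group_b fails: 0.20 / 0.35 < 0.8
```

A check this simple cannot establish that a tool is fair -- proxy variables and intersectional effects need deeper analysis -- but an HR professional who can run and interpret it has moved the tool's output out of black-box territory.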
Maintaining human oversight. The AI Act's high-risk framework requires meaningful human oversight, not rubber-stamping. For HR, this means the ability to override AI recommendations, the knowledge of when override is appropriate, and the documentation practices to demonstrate that human judgment was genuinely exercised.
Explaining decisions to affected individuals. GDPR's transparency requirements, combined with the AI Act's provisions, mean that candidates and employees have the right to understand how AI-influenced decisions were made. HR staff need to provide these explanations -- which requires understanding the systems well enough to explain them.
The Practical Gap: Where Most HR Teams Stand Today
Most HR departments adopted AI tools through vendor procurement processes that emphasised functionality, integration, and cost. Vendor training focused on how to use the tool's interface, not on understanding the AI system's limitations, bias risks, or regulatory implications.
A 2024 survey by the CIPD (the Chartered Institute of Personnel and Development) found that while 45% of HR functions reported using AI tools, only 18% had any formal training in AI governance. A Mercer study from the same period found that 67% of HR leaders felt "unprepared" to evaluate the AI tools their departments had already deployed.
This gap is not hypothetical. In 2023, the U.S. Equal Employment Opportunity Commission settled its first AI discrimination case against iTutorGroup, which used AI recruitment software that automatically rejected female applicants over 55 and male applicants over 60. The HR team using the tool did not understand its filtering logic -- a textbook case of insufficient AI literacy leading to discriminatory outcomes.
Five Steps for HR Article 4 Compliance
1. Inventory your AI systems. Map every tool in your HR technology stack that uses AI or machine learning. Include recruitment platforms, performance tools, learning systems, and workforce analytics. Many HR teams discover they deploy more AI than they realised.
2. Classify risk levels. Cross-reference your inventory against the AI Act's Annex III categories. Any system involved in recruitment, performance evaluation, or employment decisions is likely high-risk, triggering enhanced obligations beyond basic literacy.
3. Assess current literacy. Evaluate your HR team's understanding of each AI system they use. Can they explain, at a functional level, how the system produces its outputs? Can they identify potential bias vectors? Can they articulate the system's limitations? Gaps in these capabilities are Article 4 compliance gaps.
4. Build role-specific training. Generic AI awareness training does not satisfy Article 4's contextual standard. HR professionals need training specific to the AI systems they deploy, the employment decisions those systems influence, and the GDPR and AI Act obligations that apply.
5. Document everything. Article 4 requires demonstrable effort ("take measures"). Maintain records of your AI inventory, risk assessments, training programmes, attendance, and competency evaluations. When regulators ask how you ensured sufficient literacy, documentation is your evidence.
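The first three steps lend themselves to a structured register rather than a spreadsheet of free text. A minimal sketch, with invented tool names, a simplified use-case taxonomy, and a deliberately coarse high-risk flag -- a real classification needs legal review against Annex III, not a lookup table:

```python
# Hypothetical sketch of steps 1-3: an AI-system inventory with a coarse
# Annex III risk flag and a literacy-gap report. All entries are illustrative.
from dataclasses import dataclass

# Use-case categories we treat as likely high-risk under Annex III, Section 4
# (recruitment/selection and decisions on work-related relationships).
HIGH_RISK_USES = {"recruitment", "candidate_screening", "performance_evaluation",
                  "promotion_decisions", "termination_decisions"}

@dataclass
class AISystem:
    name: str
    vendor: str
    use_case: str          # e.g. "candidate_screening" or "learning_paths"
    staff_trained: bool    # role-specific Article 4 training delivered?

    @property
    def likely_high_risk(self):
        return self.use_case in HIGH_RISK_USES

inventory = [
    AISystem("video-interview-scoring", "ExampleVendor", "candidate_screening", False),
    AISystem("learning-recommender", "ExampleVendor", "learning_paths", True),
]

# The compliance gap report: high-risk systems whose users lack training.
gaps = [s.name for s in inventory if s.likely_high_risk and not s.staff_trained]
```

Even a register this small does double duty for step 5: it is itself the documentary evidence that you inventoried your systems and assessed where literacy fell short.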
The Intersection Nobody Is Talking About
Article 4 compliance is typically framed as a training obligation. For HR, it is something more fundamental: the people responsible for developing and delivering organisational training programmes are themselves subject to a literacy mandate they may not yet meet.
This creates a recursive challenge. HR must build organisation-wide AI literacy to satisfy Article 4 -- but HR itself needs AI literacy to design effective programmes. The function responsible for compliance is simultaneously a compliance subject.
Organisations that recognise this dual role and invest in HR AI literacy early will build a structural advantage. Their compliance programmes will be designed by people who genuinely understand the systems, the risks, and the regulatory requirements -- rather than by people checking boxes they do not fully comprehend.
For a detailed analysis of Article 4's text and enforcement timeline, see our Article 4 analysis for legal teams. For the broader competence framework that maps AI literacy across organisational levels, explore the Twin Ladder Framework.