TwinLadder Weekly
Issue #27 | March 2026
Editor's Note
I spent Tuesday morning at an HR technology showcase in Amsterdam. Three floors of the Beurs van Berlage, two hundred vendors, and a phrase I heard from eleven separate booths: "AI-powered hiring, simplified."
At one stand, a sales representative demonstrated a candidate screening tool to a group of HR managers from a Dutch financial services firm. The system ingested four hundred CVs, produced a ranked shortlist of twenty in under ninety seconds, and displayed confidence scores beside each name. The HR managers nodded. One asked how the scoring worked. The representative smiled and said: "Our algorithm analyses over 200 data points to find the best match." She asked which data points. He pivoted to pricing.
I have attended these events across Europe since 2024. The demonstrations get faster. The dashboards get prettier. The questions from HR teams get more anxious. And the answers from vendors get more evasive. What struck me in Amsterdam was not that HR professionals lack curiosity about the tools they deploy. They are asking. They are just not getting answers they can act on. And that gap -- between deploying an AI system and understanding what it does -- is precisely the gap the EU AI Act was written to close.
HR Is Ground Zero for Article 4
Why the People Function Faces the Toughest AI Competence Challenge in the Organisation
Alex Blumentals, with legal analysis by Liga Paulina
Every corporate function now uses AI. Marketing uses it for content. Finance uses it for forecasting. Legal uses it for research. But only one function has built its entire operational model around the activity the EU AI Act treats as highest-risk: profiling natural persons.
HR departments screen, rank, score, categorise, and predict human behaviour as their core business. They do it at hiring. They do it during employment. They do it at termination. At every stage, they evaluate individuals based on personal characteristics -- the textbook definition of profiling under EU law.
The numbers describe the scale. 82% of companies now use AI to review resumes. [cite:resumebuilder-ai-hiring] Enterprise AI adoption in recruitment hit 78% in 2025, representing 189% growth since 2022. [cite:herohunt-adoption] The global AI recruitment market is projected to reach USD 1.12 billion by 2033. HR is not experimenting with AI. HR is saturated with it.
And here is the figure that should concern every Chief People Officer in Europe: only 30% of HR professionals report having adequate training for the AI tools they deploy. [cite:hr-com-training-gap]
Eighty-two percent adoption. Thirty percent competence. That is not a gap. That is a chasm.
The regulatory architecture that targets HR
Liga Paulina breaks down why HR faces a uniquely dense regulatory burden. The EU AI Act's Annex III lists eight categories of high-risk AI systems. Two of them target HR operations with surgical precision. [cite:annex-iii]
Category 3 -- Education and vocational training covers AI systems used for determining access to training, evaluating learning outcomes, assessing education levels, and monitoring behaviour during assessments. Every corporate learning and development platform that uses AI sits within these provisions. Adaptive learning systems that adjust content based on learner performance. Competence assessments that determine certification levels. Proctoring software that monitors employee tests.
Category 4 -- Employment, workers management and access to self-employment is broader still. It covers the entire employment lifecycle: targeted job advertising, CV screening, candidate evaluation, promotion decisions, termination decisions, task allocation based on individual behaviour, performance monitoring, and behavioural evaluation.
Read those provisions against a typical HR technology stack:
| HR AI Application | Annex III Category | High-Risk? |
|---|---|---|
| CV screening and ranking (HireVue, Harver, Eightfold) | Category 4(a) | Yes |
| Video interview scoring | Category 4(a) | Yes |
| Performance management analytics | Category 4(b) | Yes |
| Attrition prediction models | Category 4(b) | Yes |
| Shift scheduling based on individual metrics | Category 4(b) | Yes |
| AI-powered learning platforms | Category 3(b) and (c) | Yes |
| Employee monitoring and sentiment analysis | Category 4(b) | Yes |
| Workforce planning and headcount forecasting | Category 4(b) | Yes |
There is no common HR AI tool that falls outside these two categories.
The profiling trap
Liga Paulina identifies what she calls the "profiling trap" -- a provision buried in Article 6(3) that closes every escape route the Act otherwise provides. [cite:art-6-3-profiling]
The AI Act offers derogations for Annex III systems that pose limited risk: systems performing narrow procedural tasks, or preparatory tasks for human decisions. Many compliance advisors have latched onto these derogations. The argument runs: our applicant tracking system merely assists recruiters; it performs a preparatory task and escapes high-risk classification.
Then comes Article 6(3)'s final sentence: "An AI system referred to in Annex III shall always be considered to be high-risk where the AI system performs profiling of natural persons."
Profiling, as defined in GDPR Article 4(4), means any automated processing of personal data to evaluate personal aspects -- including analysing or predicting work performance, reliability, behaviour, or interests. Liga is direct: "Every HR AI system I have reviewed profiles by definition. CV screening evaluates candidates based on personal data to predict work performance. That is profiling. Performance analytics processes personal data to evaluate work behaviour. That is profiling. The derogation does not apply. It was never going to apply."
The full enforcement date for high-risk system obligations is 2 August 2026 -- five months from now. Fines reach EUR 15 million or 3% of worldwide annual turnover, whichever is higher.
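The classification logic is mechanical enough to write down. Here is a minimal sketch in Python -- the function names and parameters are illustrative, and this encodes a reading of the provisions described above, not legal advice:

```python
def is_high_risk(in_annex_iii: bool, performs_profiling: bool,
                 narrow_or_preparatory: bool) -> bool:
    """Article 6(3) classification as described above (illustrative)."""
    if not in_annex_iii:
        return False
    if performs_profiling:
        return True  # the profiling trap: the derogation never applies
    return not narrow_or_preparatory  # derogation may apply otherwise

def max_fine_eur(worldwide_annual_turnover: float) -> float:
    """Fine ceiling: EUR 15 million or 3% of turnover, whichever is higher."""
    return max(15_000_000, 0.03 * worldwide_annual_turnover)

# A CV screener pitched as "merely preparatory" still profiles:
print(is_high_risk(in_annex_iii=True, performs_profiling=True,
                   narrow_or_preparatory=True))   # True
print(f"{max_fine_eur(600_000_000):,.0f}")        # 18,000,000 -- 3% exceeds EUR 15m
```

Note the shape of the logic: the profiling check comes before the derogation check, which is exactly why the "preparatory task" argument fails for HR systems.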
The bias evidence HR cannot ignore
The competence question is not theoretical. The research on AI hiring bias is now overwhelming -- and it makes the case for why HR teams must understand, not just operate, their AI tools.
| Study | Sample | Finding |
|---|---|---|
| University of Washington (Oct 2024) [cite:uw-resume-bias] | 554 real resumes, 3 AI models | LLMs favoured white-associated names 85% of the time; never favoured Black male names |
| PNAS Nexus (May 2025) [cite:pnas-intersectional-bias] | ~361,000 fictitious resumes, 4 leading models | All models systematically scored Black male candidates lower than white males with identical credentials |
| HireVue facial analysis (removed Jan 2021) [cite:hirevue-facial] | Company's own data | Nonverbal data contributed only ~0.25% predictive power; removed after FTC complaint |
| iTutorGroup/EEOC (Aug 2023) [cite:itutorgroup-eeoc] | 200+ rejected applicants | AI automatically rejected women 55+ and men 60+; discovered when applicant submitted identical applications with different birth dates |
| Mobley v. Workday (2023-2025) [cite:mobley-workday] | 80+ applications, potential millions in class | Applicant rejected every time; one rejection came 55 minutes after applying at 12:55 AM |
These are not edge cases. They are structural. The University of Washington study tested mainstream AI models on real resumes and found systematic racial bias in 85% of rankings. The PNAS Nexus study tested four leading commercial models -- GPT-4o, Gemini, Claude, Llama -- and found the same pattern across 361,000 fictitious resumes. This is not a single vendor problem. It is a technology-layer problem.
And the legal consequences are materialising. Mobley v. Workday was granted preliminary certification as a nationwide collective action in May 2025, potentially covering millions of job applicants over 40. [cite:mobley-workday] The EEOC filed a brief in the case arguing that AI service providers -- not just employers -- can be directly liable as agents. The iTutorGroup settlement, the EEOC's first involving AI hiring discrimination, cost $365,000 plus five years of mandatory compliance monitoring. [cite:itutorgroup-eeoc]
These cases all share one characteristic: the humans overseeing the AI systems did not understand what the systems were doing. The competence gap was not incidental to the harm. It was the mechanism.
Europe is not waiting
While US enforcement builds through litigation, Europe is constructing something more systematic.
The Netherlands operates the most advanced algorithm transparency regime in Europe. The Dutch Algorithm Register, operational since 2022, requires government agencies to publicly disclose algorithmic decision-making systems, including employment-related algorithms. [cite:dutch-algorithm-register] More than 300 algorithms are now registered. The Dutch Authority for Consumers and Markets has published market studies identifying inadequate AI training as a consumer protection risk in professional services. For HR teams at Dutch companies and multinationals operating in the Netherlands: the expectation of algorithmic transparency is already the norm.
Germany has the strongest co-determination framework for AI in employment. Under the Works Constitution Act (Betriebsverfassungsgesetz), Works Councils (Betriebsräte) hold mandatory co-determination rights over technical devices designed to monitor or evaluate employee behaviour or performance. [cite:german-betriebsrat] Every AI system that scores, ranks, or monitors employees requires Works Council approval. In practice, German Works Councils are already asking the questions Article 4 will require across the EU: what does this system do, how does it make decisions, and can our employees explain its outputs?
France's data protection authority, the CNIL, published guidance on AI in recruitment in 2024, requiring transparency about AI use, human oversight of automated decisions, and data protection impact assessments for all AI-assisted screening tools. [cite:cnil-ai-recruitment] The CNIL framework effectively treats AI literacy as a precondition for lawful recruitment AI deployment -- a position that aligns with Article 4's mandate.
The Nordic equality bodies have begun examining AI hiring tools through anti-discrimination frameworks. The Swedish Equality Ombudsman (DO) opened an inquiry in late 2025 into algorithmic bias in recruitment platforms used by public-sector employers. Finland became the first Member State with full AI Act enforcement powers in December 2025.
| Jurisdiction | Mechanism | Status | Implication for HR |
|---|---|---|---|
| Netherlands | Algorithm Register + ACM market study | Operational | Algorithmic transparency already expected |
| Germany | Works Council co-determination (BetrVG s.87) | Established law | Every HR AI system needs Betriebsrat approval |
| France | CNIL AI recruitment guidance | Published 2024 | DPIA required before deploying AI screening |
| Sweden | Equality Ombudsman inquiry into algorithmic bias | Opened late 2025 | Anti-discrimination lens on recruitment AI |
| Finland | Full AI Act enforcement authority | December 2025 | First Member State ready to enforce |
| EU-wide | AI Act Annex III, Category 4 | Enforceable August 2026 | Full high-risk obligations for all HR AI |
The enforcement net is not coming. It is here. The question for HR teams is whether they will be ready when regulators move from frameworks to inspections.
The competence question HR must answer
Strip away the regulatory architecture and there is a question every HR director in Europe should be able to answer:
Can your recruiter explain why the AI shortlisted candidate A but not candidate B?
Not "the algorithm decided." Not "the confidence score was higher." Can they explain -- to the rejected candidate, to a Works Council, to a data protection authority, to a tribunal -- what data the system weighed, what factors drove the ranking, why the score came out the way it did, and what limitations apply to that output?
If they cannot, the AI system has no meaningful human oversight. The compliance documentation is decorative. And the organisation is exposed -- not just to fines under the AI Act, but to discrimination claims, GDPR violations, and the reputational cost of deploying technology it cannot explain.
The European Commission's Article 4 Q&A frames AI literacy around the ability to "interpret AI system output in suitable ways." [cite:ec-article4-qa] For HR, "suitable" means understanding the system well enough to catch the biased shortlist, to question the attrition prediction, to override the performance score when the data inputs are wrong.
That is not what most AI training programmes teach HR teams. Most teach them where to click.
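What does interpreting output "in suitable ways" look like in practice? One baseline competence is checking a shortlist for adverse impact. Below is a minimal sketch using the four-fifths rule -- a US enforcement heuristic used here purely as an illustrative threshold; the AI Act prescribes no single metric, and the group counts are invented:

```python
# Compare selection rates across groups in an AI-generated shortlist.
# All numbers are hypothetical, for illustration only.
applicants  = {"group_a": 240, "group_b": 160}
shortlisted = {"group_a": 48,  "group_b": 16}

rates = {g: shortlisted[g] / applicants[g] for g in applicants}
best = max(rates.values())

for group, rate in sorted(rates.items()):
    ratio = rate / best
    flag = "REVIEW" if ratio < 0.8 else "ok"  # four-fifths threshold
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

A recruiter who can run -- or even just read -- a check like this is interpreting the system's output, not merely forwarding it.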
The Competence Question
A recruitment manager at a mid-size professional services firm in Frankfurt receives a complaint. A candidate -- male, 52, experienced -- was rejected at the CV screening stage for a senior analyst role. The candidate contacts the firm directly. He applied through the firm's careers portal, which uses an AI-powered screening tool. He wants to know why he was rejected. Under the GDPR's automated decision-making provisions (Article 22, read with the Article 15 right of access), he has a right to meaningful information about the logic involved.
The recruitment manager opens the screening tool's dashboard. The candidate's profile shows a match score of 34 out of 100. The recommended threshold was 60. The system recommended rejection. A junior recruiter had confirmed the recommendation and moved to the next batch.
The recruitment manager looks for an explanation. The dashboard shows weighted factors: "experience relevance," "skills alignment," "cultural fit prediction." No further detail. She contacts the vendor. The vendor's support team explains that the model analyses over 150 features extracted from the CV and produces a composite score. They cannot disclose the specific weighting. Proprietary algorithm.
She now faces a candidate who wants an explanation, a Works Council that has flagged the tool for review, and a data protection officer asking whether a DPIA was conducted before deployment. She attended the firm's Article 4 compliance training in October 2025. It was a two-hour session covering what AI is and how to write prompts for the firm's document tools. Nobody mentioned that the recruitment screening system was an AI system subject to high-risk classification.
The training programme gave her a certificate. It did not give her the competence to answer any of the questions now on her desk.
What To Do
- Map every AI system in your HR technology stack against Annex III. Not just the obvious ones -- the ATS, the video interview platform. Include the workforce analytics dashboard, the learning management system, the scheduling algorithm, the employee engagement survey tool. If it processes personal data to evaluate, rank, score, or predict, it is almost certainly a high-risk AI system under the profiling clause. Most HR teams we speak with have identified two or three systems. The real number is usually eight to twelve. (A minimal inventory sketch follows this list.)
- Test your team's explanation capability. Take your most-used AI screening tool. Select a recent candidate who was rejected. Ask the recruiter who processed the rejection to explain -- in plain language, without the dashboard open -- why the system recommended rejection and what limitations apply to that recommendation. If they cannot, you have measured the competence gap. Document it; the sketch after this list shows one way to record it alongside the inventory. That gap is your Article 4 exposure.
- Demand technical transparency from your HR AI vendors. Request the conformity assessment documentation, the bias audit methodology, the data governance practices, and the human oversight design for each high-risk system you deploy. Under the AI Act's deployer obligations (Articles 26-27), you are required to keep audit logs and conduct fundamental rights impact assessments. You cannot do this if your vendor will not explain how the system works. If a vendor refuses transparency, that tells you something important about the system you are trusting with employment decisions.
- Build AI competence into HR professional development -- not as a one-off, but as ongoing capability. Article 4 requires a "sufficient level" of literacy that is proportionate to the role and the risk. For HR teams deploying high-risk AI systems across the employment lifecycle, "sufficient" is a high bar. It means understanding bias mechanisms, verification practices, and the limits of probabilistic scoring. Quarterly competence assessments -- not annual certificate renewals -- are the appropriate cadence.
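To make the first two actions concrete, here is a minimal sketch of what an inventory with competence flags might look like. The tool names, Annex III mappings, and field names are hypothetical assumptions; this illustrates the record-keeping, not a compliance tool:

```python
# Hypothetical HR AI inventory combining the Annex III mapping (action 1)
# with documented explanation capability (action 2). All entries are
# illustrative, not a real stack.
inventory = [
    {"tool": "ATS CV screening",       "annex_iii": "4(a)", "profiling": True, "team_can_explain": False},
    {"tool": "Video interview scoring","annex_iii": "4(a)", "profiling": True, "team_can_explain": False},
    {"tool": "Learning platform",      "annex_iii": "3(b)", "profiling": True, "team_can_explain": True},
    {"tool": "Engagement survey AI",   "annex_iii": "4(b)", "profiling": True, "team_can_explain": False},
]

# Profiling inside an Annex III category means high-risk (Article 6(3)).
high_risk = [t for t in inventory if t["annex_iii"] and t["profiling"]]
gaps = [t["tool"] for t in high_risk if not t["team_can_explain"]]

print(f"High-risk systems: {len(high_risk)} of {len(inventory)}")
print(f"Article 4 exposure (no one can explain the output): {gaps}")
```

Even a spreadsheet version of this record gives you something most deployers will lack in August 2026: a defensible account of what you run and who understands it.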
Quick Reads
- EU AI Act, Annex III -- High-Risk AI Systems List -- the actual regulatory text that classifies employment and recruitment AI as high-risk. If you work in HR and have not read Category 4, start here. It is shorter than you expect and more specific than most summaries suggest.
- Dutch Algorithm Register -- the most transparent algorithmic accountability system in Europe. Over 300 registered algorithms. Browse it to understand what algorithmic transparency looks like in practice, and to see what your organisation's disclosure might look like when the rest of Europe catches up.
- University of Washington, AI Resume Screening Bias Study (October 2024) -- the study that found LLMs favour white-associated names 85% of the time and never favour Black male names. Essential reading for any HR team that uses AI in candidate screening.
- NYC Local Law 144 Enforcement Audit (December 2025) -- the cautionary tale. NYC required annual bias audits for automated hiring tools in 2023. By 2025, enforcement had identified only 1 non-compliance case out of 32, while independent reviewers found 17. The lesson: compliance regimes without teeth are performative. [cite:nyc-ll144-audit]
- CNIL, AI and Recruitment Guidance -- France's data protection authority on what lawful AI-assisted recruitment looks like. The most practical European regulatory guidance available for HR teams today.
One Question
Your AI screening tool rejected a candidate this morning. A data protection authority asks your recruiter -- not your legal team, not your vendor, but the person who confirmed the rejection -- to explain how the system scored the candidate and why. Can they? And if they cannot, what does that tell you about the difference between deploying AI and understanding it?
TwinLadder Weekly | Issue #27 | March 2026
Helping professionals build AI capability through honest education.
