The Broken Learning Ladder: AI Is Removing the Work That Built Expertise
The Big Four cut 5,000 graduate roles in one year. The financial logic is clear. The competence consequence is not.
The experiment is already running
EY cut graduate intake by 11 percent after deploying 150 AI tax agents. KPMG slashed hiring 29 percent alongside a multi-agent AI platform. Deloitte cut 18 percent. PwC plans to reduce U.S. entry-level hiring by a third over three years, with audit positions down 39 percent by 2028.
Five thousand fewer graduate positions across the Big Four in a single year. The financial logic is straightforward: when AI can match a two-year analyst at a fraction of the cost, the investment case for junior hiring collapses.
But this story is not about jobs. It is about what those jobs produced.
The rung was the training
A graduate who joined an audit firm used to spend her first weeks building bank reconciliations by hand — cross-referencing ledger entries against statements, learning where numbers hide, developing the instinct for where they do not add up. She would make mistakes. A senior would catch them and explain why. Within months, she understood financial statements with the fluency that only comes from having handled every number herself.
Now she reviews AI-generated output. The reconciliation is done before she arrives. She flags nothing, because she does not yet know what an anomaly looks like. She has never built a reconciliation from the ground up.
She is standing on a ladder with the bottom rung removed.
Jean Lave and Etienne Wenger described expertise as something formed through legitimate peripheral participation — gradual immersion into a community of practice. You become competent by doing the work, alongside people who know what the work means. The grunt work was never just labour. It was the mechanism through which judgment formed.
AI is automating the periphery. And the periphery is where learning lived.
The evidence is consistent across domains
This is not limited to professional services.
Medicine: Gastroenterologists who worked with an AI detection system for eighteen months lost 21 percent of their unassisted diagnostic accuracy when the system was removed. Not compared to AI-assisted rates — compared to their own pre-AI baseline.
Aviation: Pilots who rely on autopilot struggle to meet manual instrument-flying standards when it fails. The degradation is worst in emergencies — the exact scenarios where manual skill matters most.
Software: Junior developers using AI coding tools scored 17 points lower on code-comprehension measures. Code churn doubled across 153 million lines after AI adoption — developers accepting code they did not understand, then debugging it later.
The pattern holds everywhere it has been studied: the tool that makes you better at the task makes you worse without it.
The supervision paradox
Organisations need expert humans to oversee AI. But AI is removing the pathway through which experts develop.
You cannot supervise AI-generated audit work if you never learned to audit. You cannot catch errors in AI-generated contracts if you never drafted one. The Big Four need seniors who validate AI output. They are cutting the programmes that produce seniors.
Fifty-five percent of employers who reduced headcount through AI already regret the decision. IBM went the other direction — tripling entry-level hiring because it recognised that removing junior roles destroyed the pipeline for senior expertise.
Article 4 addresses part of the problem
The EU AI Act, Article 4, requires organisations deploying AI to ensure "sufficient AI literacy" for all staff. Enforceable since February 2025. It is the first regulation tying AI deployment to human competence.
But Article 4 assumes competence can be trained. The missing rung reveals that competence is formed through practice — through doing the work that AI is removing. A twelve-week course on evaluating AI output does not replace three years of building reconciliations by hand.
This is the structural flaw in the competence conversation. Training transfers knowledge. But judgment requires the kind of effortful, context-rich practice that AI is designed to eliminate.
Rebuilding the rung
The Twin Ladder framework addresses this at three levels:
Prediction-first workflows. Before AI produces output, the human makes their own assessment. Then they compare. The learning lives in the delta between human judgment and AI output. This is not slower — it is different. The human practises judgment rather than rubber-stamping.
Deliberate rotation. Regularly remove AI assistance for specific tasks. The gap between AI-assisted and unassisted performance is your competence debt. If it grows, your people are losing capability regardless of what their certificates say.
Synthetic apprenticeship. Use AI as a training environment, not just a production tool. Digital twins of real engagements where juniors build reconciliations, draft contracts, and make mistakes in realistic contexts. The Twin Ladder maps this across four levels, beginning with individual mirroring (Level 1) and operational simulation (Level 2). The ladder is climbed, not skipped.
Measure competence, not completion. Most organisations track training module completion. Almost none track whether people can identify errors in AI output. The second metric is the only one that matters — both under Article 4 and for the long-term capability of your workforce.
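The competence-debt idea above is measurable. As a minimal sketch (the function name, data, and scoring scale are all hypothetical, not from the Twin Ladder framework): score the same people on the same task class with and without AI assistance, and track the gap over time.

```python
from statistics import mean

def competence_debt(assisted_scores, unassisted_scores):
    """Competence debt as the gap between mean AI-assisted and mean
    unassisted performance on the same task class (higher = more debt).
    Scores here are accuracy rates in [0, 1]; the metric is illustrative."""
    return mean(assisted_scores) - mean(unassisted_scores)

# Hypothetical quarterly review-accuracy data for one team.
q1_debt = competence_debt(assisted_scores=[0.92, 0.90],
                          unassisted_scores=[0.85, 0.83])
q4_debt = competence_debt(assisted_scores=[0.93, 0.94],
                          unassisted_scores=[0.70, 0.72])

# Assisted output looks stable or improving, yet the unassisted
# baseline is eroding: the growing gap is the compounding debt.
print(f"Q1 debt: {q1_debt:.2f}, Q4 debt: {q4_debt:.2f}")
```

The point of the sketch is the comparison, not the numbers: if the gap widens quarter over quarter, people are losing unassisted capability even while every dashboard shows productivity holding steady.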
The question that matters
Every AI conversation focuses on cost, productivity, and compliance. Nobody asks what happens to people's judgment over time. Not their jobs — their ability to think independently about the work the AI is doing.
The grunt work was the training. Automate it without replacing the learning function, and you build a workforce that is faster, more productive, and progressively less capable of catching the mistakes that matter.
That is competence debt. It compounds silently. And almost nobody is accounting for it.
This article draws on "The Missing Rung: How AI Is Dismantling the Career Ladder" (Twin Ladder, 2026). Take the free AI competence assessment at twinladder.ai/en/assess.
