Assess, Learn, Apply, Certify: How Adults Actually Build AI Competence
Most AI training treats professionals like empty vessels. Adult learning science says this is exactly wrong -- and the EU AI Act agrees.
Picture a senior litigation partner with twenty-three years of experience sitting through a webinar on how neural networks process tokens. She already knows how to evaluate reasoning, assess credibility, and spot a flawed argument. What she needs is to understand how those skills apply when the research memo was written by a machine.
The training industry has largely ignored this. It offers awareness sessions that create familiarity without competence, or technical programmes designed for engineers. Neither works for accomplished professionals who need to integrate AI into existing expertise -- not replace it.
Twin Ladder's four-phase methodology rests on a different premise, grounded in decades of adult learning research: professionals learn by solving problems that matter to them, not by absorbing theory they cannot connect to practice.
Phase 1: Assess
Malcolm Knowles established the foundational insight of andragogy in the 1970s: adults bring experience that is itself a learning resource. Ignoring it does not merely waste time. It undermines motivation.
Article 4 of the EU AI Act encodes this into law, requiring literacy measures that take into account each person's "technical knowledge, experience, education and training." A one-size-fits-all programme is not just pedagogically weak -- it risks non-compliance.
So the methodology begins with assessment, not instruction: a ten-to-fifteen-minute diagnostic in which each practitioner evaluates AI-generated legal research and answers targeted questions about their practice context -- current AI exposure, concerns around confidentiality or citation reliability, workflow patterns.
A privacy lawyer worried about data handling follows a fundamentally different path from a litigator who needs to verify case citations. Assessment makes these distinctions operational. Practitioners skip what they have mastered and focus on actual gaps. The deeper payoff is engagement: adults commit when the programme demonstrates it understands who they are.
Phase 2: Learn
Here is what we do not teach: transformer architectures, attention mechanisms, backpropagation, or anything requiring a computer science background. These are irrelevant to the competence Article 4 demands.
The curriculum covers what AI does in legal practice and how to work with it responsibly, delivered through micro-modules of ten to fifteen minutes each -- a deliberate choice rooted in cognitive load theory. Working memory is limited. Shorter modules with immediate application produce better retention than marathon sessions.
Each module follows a consistent rhythm: conceptual frame, realistic scenario, guided application, takeaways. All anchored in the legal domain practitioners already understand, which dramatically reduces the cognitive overhead of processing unfamiliar material.
Verification frameworks form the backbone. How to check citations. How to evaluate whether AI-generated reasoning holds. How to assess completeness. Beyond verification: risk assessment, ethical decision-making with confidential data, and workflow integration. Every module connects to situations practitioners encounter in their next working week. Adult learning research is emphatic: immediate relevance drives engagement. Training perceived as theoretical loses adults within minutes.
Phase 3: Apply
Knowledge that stays theoretical decays. Research on transfer of learning shows one critical variable: contextualised application with feedback.
A practitioner receives an AI-generated research memo and must verify citations, evaluate reasoning, and identify gaps. Immediate feedback reveals what was caught and what was missed. Subsequent exercises escalate: risk assessment scenarios, ethical decisions involving confidential data, client communications balancing transparency with clarity.
The environment is deliberately mistake-friendly. Adult learning theory distinguishes between performance orientation -- proving competence -- and mastery orientation -- building it. Mistake-friendly environments promote mastery, producing deeper learning and greater willingness to tackle difficult material.
The capstone asks practitioners to design an AI-augmented workflow for their own practice area: what AI handles, where humans intervene, where quality checkpoints sit. Something they can implement the following Monday.
Phase 4: Certify
Most programmes issue certificates of attendance -- documenting presence, not competence. Twin Ladder's certification requires demonstrated capability.
Scenario analysis constitutes sixty per cent: a complex situation involving multiple AI use decisions where the practitioner identifies issues, assesses risks, and proposes responses. Professional standards account for twenty per cent: regulatory requirements, disclosure obligations, verification protocols. Practical competence makes up the final twenty per cent: evaluating actual AI-generated content, identifying problems, recommending corrections.
For regulators, this provides evidence that Article 4's standard has been met through verified assessment. For bar associations, it qualifies for CPD credits. For clients and employers, it distinguishes demonstrated competence from mere claims. And because AI evolves, recertification ensures literacy remains current.
The Science Underneath
Beneath the four phases runs a principle most AI training ignores: professional identity shapes how adults engage with new learning.
Lawyers who perceive AI training as threatening their expertise resist it -- not irrationally. They are protecting something valuable. Training that positions AI as making existing legal competencies more valuable, not less, produces fundamentally different engagement.
This is the competence-confidence loop. Training that builds on existing expertise develops AI competence, which builds confidence, which enables more sophisticated practice, which builds further competence. Training that devalues existing expertise prevents the loop from initiating at all.
The four phases sustain this loop by design. Assessment acknowledges what practitioners know. Learning extends it. Application demonstrates growing capability. Certification validates the result.
Article 4 does not ask whether training happened. It asks whether literacy is sufficient. Sufficiency is measured by capability demonstrated, not hours logged.
Ready to find out where you stand? Take our free AI readiness assessment.