The world's first comprehensive AI regulation. Navigate articles, track implementation deadlines, and understand what matters for legal practice.
11+ Articles
3 Annexes
7 Key for Lawyers
Full Title: Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence
CELEX: 32024R1689
OJ Reference: OJ L, 2024/1689, 12.7.2024
Entry into Force: August 1, 2024
Full Applicability: August 2, 2027
When the EU AI Act's Article 4 mandates "AI literacy" for legal professionals, it is not calling for lawyers to become data scientists or software engineers. Rather, the regulation recognizes a fundamental truth: as artificial intelligence becomes embedded in legal workflows, practitioners must develop sufficient understanding to use these tools competently, ethically, and in compliance with professional duties.
AI literacy in the legal profession means acquiring the skills, knowledge, and understanding necessary to make informed decisions about deploying AI systems in practice. This includes gaining awareness of both the opportunities AI presents and the risks it poses — from efficiency gains to potential ethical violations.
AI tools in litigation now assist with case law research, document review, and predictive analytics. Lawyers need to understand that generative AI can hallucinate non-existent case citations, and that every citation and legal proposition generated by AI must be verified.
Contract drafting, due diligence, and regulatory compliance increasingly involve AI assistance. AI literacy means understanding how contract analysis AI identifies clauses and recognizing that AI-generated contract language requires human review for appropriateness.
IP practitioners using AI for trademark searches, patent analysis, or copyright assessments need specialized literacy including understanding how AI search tools differ from traditional Boolean searches and recognizing limitations of AI in assessing novelty.
Lawyers advising on regulatory matters require literacy that includes understanding the classification systems for AI risk levels, knowledge of sector-specific AI regulations, and competence in advising on AI-specific contractual provisions.
The Darmstadt Regional Court ruling in Germany set a powerful precedent: when a court-appointed medical expert used AI extensively without disclosure, the court set the expert's fee at zero euros and declared the entire report inadmissible. This case underscores that AI literacy includes understanding when and how to disclose AI use.
Article 4 of the EU AI Act establishes the foundational requirement for all providers and deployers of AI systems. Understanding this regulatory obligation is essential for legal professionals navigating compliance.
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
The phrase "to their best extent" signals that the obligation is not absolute but reasonable. Regulators recognize that perfect AI literacy is neither achievable nor necessary. For legal professionals, this means that a solo practitioner using basic AI research tools has different literacy requirements than a large law firm deploying AI for high-stakes litigation.
The regulation deliberately avoids prescribing specific training hours, curricula, or certification standards. "Sufficient" is inherently contextual — sufficient for what purpose, in what context, facing what risks? For lawyers, sufficiency means literacy adequate to use AI tools competently, identify when outputs require verification, and recognize professional ethics implications.
The regulation explicitly recognizes that different professionals bring different backgrounds and require different training approaches. Training should build on legal expertise, not assume technical backgrounds, and should address specifically how AI affects legal workflows.
While the EU AI Act establishes a harmonized regulatory framework across member states, its implementation reveals significant variation in how individual countries approach AI regulation for legal professionals.
Italy distinguished itself as the first EU member state to adopt comprehensive national AI legislation. Law 132/2025, effective October 10, 2025, requires Italian lawyers to inform clients whenever AI systems are used in the course of representation, regardless of how minor that use may be.
The November 10, 2025 ruling by the Regional Court of Darmstadt established that a court-appointed expert's fee should be set at zero euros when the expert relied extensively on AI without disclosure. The entire report was declared inadmissible.
Latvia, Lithuania, and Estonia are coordinating their approach to AI regulation implementation, recognizing the cross-border nature of legal services in the Baltic region. This coordination ensures consistency for legal professionals operating across these jurisdictions.
The disconnect between how AI tools are developed and how legal professionals must use them creates a fundamental challenge. AI systems are built by engineers who think in terms of algorithms, training data, and model architectures. Yet these tools must be used by lawyers who think in terms of legal precedent, client interests, and professional ethics.
TwinLadder's approach begins with a fundamental recognition: legal professionals are not technical users and should not be trained as if they were. Non-technical users typically lack a computer science background, think in domain-specific terms, and learn best through application to familiar problems rather than theoretical foundations.
Article 4 focuses on "informed deployment" and "awareness of opportunities and risks" — not technical comprehension. The regulation explicitly considers users' "technical knowledge, experience, education and training," recognizing that non-technical professionals require different literacy than technical professionals. TwinLadder's training is designed precisely for this user profile: experienced legal professionals with strong domain expertise but limited technical background.
Explore specific aspects of AI regulation for legal professionals
Comprehensive analysis of the AI literacy obligation including who is affected, enforcement timelines, penalties, and a 6-step compliance checklist.
Track implementation status across all 27 EU member states. See which countries have implemented national legislation and designated AI authorities.
The EU AI Act classifies AI systems by risk level. Legal AI tools may fall into high-risk or limited-risk categories depending on their use.
Unacceptable risk: AI practices banned outright*
High risk: strict requirements apply (see Annex III for the full list)
Limited risk: transparency obligations; AI use must be disclosed
Minimal risk: voluntary codes apply; no mandatory requirements
* With law enforcement exceptions.
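For readers who think in code, the four tiers can be summarized as a simple lookup table. This is an illustrative sketch only: the tier names follow the Act's commonly used labels, but the RiskTier enum and OBLIGATIONS mapping below are hypothetical, not an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers (names as commonly used)."""
    UNACCEPTABLE = "unacceptable"  # Article 5: prohibited practices
    HIGH = "high"                  # Chapter III: strict requirements
    LIMITED = "limited"            # Article 50: transparency duties
    MINIMAL = "minimal"            # voluntary codes only

# Hypothetical headline-obligation lookup (not exhaustive, not official)
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "Banned outright (narrow law enforcement exceptions)",
    RiskTier.HIGH: "Risk management, conformity assessment, human oversight",
    RiskTier.LIMITED: "Disclose AI use to affected persons",
    RiskTier.MINIMAL: "No mandatory requirements; voluntary codes apply",
}

print(OBLIGATIONS[RiskTier.LIMITED])  # -> "Disclose AI use to affected persons"
```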
These articles have direct implications for law firms, in-house counsel, and legal AI vendors.
Article 5 (Chapter II: Prohibited AI Practices)
Bans AI systems for: subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing (individuals), untargeted facial recognition scraping, emotion recognition at work/education, and real-time biometric identification (with law enforcement exceptions).
Relevance to Legal Practice
Legal AI tools are unlikely to fall into prohibited categories, but lawyers should verify tools don't use banned techniques for influence or assessment.
Article 6 (Chapter III: High-Risk AI Systems)
Defines what makes an AI system 'high-risk': either (1) it is a safety component of a product covered by EU harmonisation legislation in Annex I, or (2) it falls under the use cases in Annex III. Exceptions apply for systems performing only narrow procedural tasks.
Relevance to Legal Practice
AI systems used for 'administration of justice and democratic processes' are HIGH-RISK under Annex III(8). Legal research and case outcome prediction tools may qualify.
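As a rough sketch of how the Article 6 test composes, the decision logic might look like the function below. The is_high_risk helper and its boolean inputs are hypothetical simplifications: the real Article 6(3) derogation has several limbs and a documentation duty, none of which this sketch captures.

```python
def is_high_risk(annex_i_safety_component: bool,
                 annex_iii_use_case: bool,
                 narrow_procedural_task_only: bool) -> bool:
    """Simplified sketch of the Article 6 high-risk test.

    Art. 6(1): a safety component of a product covered by Annex I
    harmonisation legislation is high-risk.
    Art. 6(2)-(3): an Annex III use case is high-risk unless the system
    only performs a narrow procedural task (one of several derogations).
    """
    if annex_i_safety_component:
        return True
    return annex_iii_use_case and not narrow_procedural_task_only

# A case-outcome prediction tool used in the administration of justice
# (Annex III(8)) that goes beyond a narrow procedural task:
assert is_high_risk(False, True, False)
```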
Article 9 (Chapter III: High-Risk AI Systems)
Mandates continuous risk management for high-risk AI: identify risks, implement mitigation, test systems, monitor post-deployment. Must consider reasonably foreseeable misuse.
Relevance to Legal Practice
Lawyers deploying high-risk AI must understand the vendor's risk management. Due diligence should verify compliance.
Article 14 (Chapter III: High-Risk AI Systems)
High-risk AI systems must be designed for effective human oversight. Humans must be able to understand outputs, intervene, and override the system. 'Human-in-the-loop' or 'human-on-the-loop' required.
Relevance to Legal Practice
Lawyers MUST maintain oversight of AI outputs. Blind reliance on AI without review violates professional duty and likely breaches this article as well.
The EU AI Act phases in over three years. Track key milestones and prepare your compliance strategy.
August 1, 2024
The EU AI Act officially enters into force, starting the implementation timeline.
February 2, 2025
Ban on AI systems with unacceptable risk: social scoring, manipulation, real-time biometric identification (with exceptions). The Article 4 AI literacy obligation also applies from this date.
August 2, 2025
EU AI Office fully operational. Rules for general-purpose AI models apply. Penalties framework active.
August 2, 2026
Full compliance required for high-risk AI systems. Conformity assessments, technical documentation, human oversight mandatory.
August 2, 2027
All provisions fully applicable. High-risk AI systems in Annex I must comply.
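To make the phase-in concrete, a few lines of Python can track how many days remain until each milestone. The MILESTONES mapping and days_until helper are illustrative, not part of any official tooling; the dates are those listed above.

```python
from datetime import date

# Milestones from the timeline above
MILESTONES = {
    date(2024, 8, 1): "Entry into force",
    date(2025, 2, 2): "Prohibitions (and Article 4 AI literacy) apply",
    date(2025, 8, 2): "GPAI rules and penalties framework apply",
    date(2026, 8, 2): "High-risk obligations apply (Annex III systems)",
    date(2027, 8, 2): "Full applicability, incl. Annex I high-risk systems",
}

def days_until(milestone: date, today: date | None = None) -> int:
    """Days remaining until a milestone (negative once it has passed)."""
    return (milestone - (today or date.today())).days

for when, label in sorted(MILESTONES.items()):
    print(f"{when.isoformat()}  {days_until(when):>5} days  {label}")
```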
Navigate the complete EU AI Act structure with legal practice annotations.
Annexes define high-risk categories and technical requirements.
Annex I: Lists EU product safety legislation that, when combined with AI as a safety component, triggers high-risk classification under Article 6(1).
Annex XIII: Criteria for determining if a general-purpose AI model poses systemic risk: training compute >10^25 FLOPs, high-impact capabilities, number of users, cross-border reach.
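As a back-of-the-envelope illustration of the compute criterion: training FLOPs for a transformer are often estimated as roughly 6 x parameters x training tokens. That heuristic comes from the scaling-law literature, not from the Act, and the helper and example figures below are a hypothetical sketch.

```python
# The 10^25 FLOP figure is the Act's presumption threshold (Art. 51(2));
# the 6 * params * tokens estimate is a common scaling-law heuristic,
# NOT something the Act prescribes.
SYSTEMIC_RISK_FLOPS = 1e25

def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Rough transformer training compute: ~6 FLOPs per parameter per token."""
    return 6 * n_params * n_tokens

flops = estimate_training_flops(n_params=70e9, n_tokens=15e12)  # 70B params, 15T tokens
print(f"{flops:.2e} FLOPs -> presumed systemic risk: {flops > SYSTEMIC_RISK_FLOPS}")
```

On these assumed figures the estimate lands at about 6.3e24 FLOPs, just under the threshold, which is exactly the kind of borderline case where counsel would want the provider's own compute documentation rather than a heuristic.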
Last updated: February 6, 2026
Official EUR-Lex Source