The world's first comprehensive AI regulation. Navigate articles, track implementation deadlines, and understand what matters for legal practice.
11+
Articles
3
Annexes
7
Key for Lawyers
Full Title
Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence
CELEX
32024R1689
OJ Reference
OJ L, 2024/1689, 12.7.2024
Entry into Force
August 1, 2024
Full Applicability
August 2, 2027
When Article 4 of the EU AI Act establishes "AI literacy" for legal professionals, it is not asking lawyers to become data scientists or software engineers. The regulation recognises a fundamental truth: as artificial intelligence becomes part of legal workflows, practitioners must develop sufficient understanding to use these tools competently, ethically, and in line with their professional obligations.
AI literacy in the legal profession means acquiring the skills, knowledge, and understanding needed to make informed decisions about deploying AI systems in practice. It covers both the opportunities AI offers and the risks it creates, from efficiency gains to potential ethical breaches.
AI tools in litigation now assist with case-law research, document review, and predictive analytics. Lawyers must understand that generative AI can fabricate citations to non-existent cases, and why every AI-generated citation and legal assertion must be verified.
Contract drafting, due diligence, and regulatory compliance increasingly involve AI assistance. AI literacy means understanding how contract-analysis AI identifies clauses, and recognising that AI-generated contract language requires human review to assess its suitability.
Intellectual property practitioners who use AI for trademark searches, patent analysis, or copyright assessment need specialised literacy, including an understanding of how AI search tools differ from traditional Boolean searches.
Lawyers advising on regulatory matters need literacy that covers the AI risk-level classification system, knowledge of sector-specific AI regulation, and the competence to advise on AI-specific contractual terms.
A judgment of the Regional Court of Darmstadt in Germany set a striking precedent: when a court-appointed medical expert made extensive use of AI without disclosure, the court set the expert's fee at zero euros and declared the entire report inadmissible. The case underlines that AI literacy includes knowing when and how to disclose AI use.
Article 4 of the EU AI Act establishes a baseline requirement for all providers and deployers of AI systems. Understanding this regulatory obligation is essential for legal professionals navigating compliance questions.
Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.
This phrasing signals that the obligation is reasonable rather than absolute. Regulators recognise that perfect AI literacy is neither achievable nor necessary. For legal professionals this means that a solo practitioner using basic AI research tools faces different literacy requirements than a large law firm.
The regulation deliberately avoids prescribing specific training hours, curricula, or certification standards. "Sufficient" is contextual: sufficient for what purpose, in what setting, facing what risks? For lawyers, sufficiency means literacy adequate for the competent use of AI tools.
The regulation explicitly recognises that different professionals bring different backgrounds and need different training approaches. Training should build on legal expertise rather than assume a technical background.
While the EU AI Act establishes a harmonised regulatory framework across the Member States, its implementation reveals significant differences in how individual countries approach AI regulation for legal professionals.
Italy stands out as the first EU Member State to adopt comprehensive national AI legislation. Law 132/2025, in force since 10 October 2025, requires Italian lawyers to inform clients whenever AI systems are used in the course of representation.
The Regional Court of Darmstadt's judgment of 10 November 2025 held that a court-appointed expert's fee must be set at zero euros where the expert relied extensively on AI without disclosure.
Latvia, Lithuania, and Estonia are coordinating their approach to implementing the AI regulation, recognising the cross-border nature of legal services in the Baltic region.
Track implementation status across all 27 Member States
The gap between how AI tools are built and how legal professionals must use them creates a fundamental challenge. AI systems are designed by engineers who think in terms of algorithms, training data, and model architectures. Yet the tools must be used by lawyers who think in terms of legal precedent, client interests, and professional ethics.
The TwinLadder approach starts from a fundamental recognition: legal professionals are not technical users and should not be trained as if they were. Non-technical users have no computer science background, think in the terms of their own field, and learn best by applying knowledge to familiar problems.
Article 4 focuses on 'informed deployment' and 'understanding of opportunities and risks', not on technical understanding. The regulation explicitly takes into account users' 'technical knowledge, experience, education and training'. TwinLadder training is designed precisely for this user profile.
Explore specific aspects of AI regulation for legal professionals
A comprehensive analysis of the AI literacy obligation, including who is affected, enforcement timelines, penalties, and a 6-step compliance checklist.
Track implementation status across all 27 EU Member States. See which countries have adopted national legislation and designated AI authorities.
The EU AI Act classifies AI systems by risk level. Legal AI tools may fall into high-risk or limited-risk categories depending on their use.
Unacceptable Risk
AI practices banned outright
* With law enforcement exceptions
High Risk
Strict requirements apply
See Annex III for full list
Limited Risk
Transparency obligations
Must disclose AI use
Minimal Risk
Voluntary codes apply
No mandatory requirements
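For readers who prefer to see the scheme as a structure, the Act's four risk tiers (unacceptable, high, limited, minimal) can be sketched as a simple lookup. This is an illustrative teaching aid only: the `RiskTier` enum and the one-line obligation summaries are our own paraphrase of the Act, not part of any official API or legal instrument.

```python
from enum import Enum

class RiskTier(Enum):
    """The EU AI Act's four risk tiers, paraphrased (illustrative only)."""
    UNACCEPTABLE = "banned outright (Art. 5), with narrow law-enforcement exceptions"
    HIGH = "strict requirements apply; see Annex III for the full list of use cases"
    LIMITED = "transparency obligations: AI use must be disclosed"
    MINIMAL = "voluntary codes of conduct; no mandatory requirements"

def obligations(tier: RiskTier) -> str:
    """Return the paraphrased obligation summary for a given tier."""
    return tier.value

# Print the whole scheme, one tier per line.
for tier in RiskTier:
    print(f"{tier.name}: {obligations(tier)}")
```

A real classification exercise is a legal judgment about a system's intended purpose, not a dictionary lookup; the sketch only fixes the vocabulary.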
These articles have direct implications for law firms, in-house counsel, and legal AI vendors.
Chapter II: Prohibited AI Practices
Bans AI systems for: subliminal manipulation, exploitation of vulnerabilities, social scoring, predictive policing (individuals), untargeted facial recognition scraping, emotion recognition at work/education, and real-time biometric identification (with law enforcement exceptions).
Relevance to Legal Practice
Legal AI tools are unlikely to fall into prohibited categories, but lawyers should verify tools don't use banned techniques for influence or assessment.
Chapter III: High-Risk AI Systems
Defines what makes an AI system 'high-risk': either (1) a safety component/product under EU harmonisation legislation in Annex I, or (2) falls under use cases in Annex III. Exceptions for narrow procedural tasks.
Relevance to Legal Practice
AI systems used for 'administration of justice and democratic processes' are HIGH-RISK under Annex III(8). Legal research and case outcome prediction tools may qualify.
Chapter III: High-Risk AI Systems
Mandates continuous risk management for high-risk AI: identify risks, implement mitigation, test systems, monitor post-deployment. Must consider reasonably foreseeable misuse.
Relevance to Legal Practice
Lawyers deploying high-risk AI must understand the vendor's risk management. Due diligence should verify compliance.
Chapter III: High-Risk AI Systems
High-risk AI systems must be designed for effective human oversight. Humans must be able to understand outputs, intervene, and override the system. 'Human-in-the-loop' or 'human-on-the-loop' required.
Relevance to Legal Practice
Lawyers MUST maintain oversight of AI outputs. Blind reliance on AI without review violates professional duty and likely this article.
The EU AI Act phases in over three years. Track key milestones and prepare your compliance strategy.
August 1, 2024
The EU AI Act officially enters into force, starting the implementation timeline.
February 2, 2025
Ban on AI systems posing unacceptable risk: social scoring, manipulation, real-time biometric identification (with exceptions). The Article 4 AI literacy obligation also applies from this date.
August 2, 2025
EU AI Office fully operational. Rules for general-purpose AI models apply. Penalties framework active.
August 2, 2026
Full compliance required for high-risk AI systems. Conformity assessments, technical documentation, human oversight mandatory.
August 2, 2027
All provisions fully applicable. High-risk AI systems in Annex I must comply.
Navigate the complete EU AI Act structure with legal practice annotations.
Chapter I
Chapter II
Chapter III
Chapter IV
Chapter V
Chapter XII
Annexes define high-risk categories and technical requirements.
Lists EU product safety legislation that, when combined with AI as a safety component, triggers high-risk classification under Article 6(1).
Criteria for determining if a general-purpose AI model poses systemic risk: training compute >10^25 FLOPs, high-impact capabilities, number of users, cross-border reach.
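Of the criteria listed above, the training-compute figure is the one purely numeric test, so it can be expressed directly. A minimal sketch, assuming only the Act's stated 10^25 FLOP threshold; the function name and argument are hypothetical, and the qualitative criteria (capabilities, user numbers, cross-border reach) are not modelled.

```python
# Presumption of systemic risk for a general-purpose AI model based on
# training compute. The 10^25 FLOP figure comes from the Act; everything
# else here is an illustrative sketch, not a compliance tool.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25

def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """True when cumulative training compute exceeds 10^25 FLOPs."""
    return training_compute_flops > SYSTEMIC_RISK_FLOP_THRESHOLD

print(presumed_systemic_risk(3e25))  # a 3x10^25 FLOP training run crosses the threshold
print(presumed_systemic_risk(5e24))  # a smaller run does not, on this criterion alone
```

Note that the threshold creates a presumption, not a final classification: a model below it can still be designated as posing systemic risk on the other grounds.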
Last updated: 2/6/2026
Official EUR-Lex Source