The Six-Year Countdown: Every EU AI Act Deadline Your Organisation Needs to Know
By Alex Blumentals — Twin Ladder
Most compliance teams are preparing for the EU AI Act. They should be complying with it. Article 4 and Article 5 became enforceable on 2 February 2025. If you are reading this in March 2026, you are already thirteen months late on your first obligation.
The EU AI Act is not a single regulation that lands on a single date. It is a six-year implementation programme with nine distinct deadlines, each activating different obligations for different actors. The timeline runs from August 2024 to December 2030 — longer than most organisations' strategic planning horizons.
This matters because the compliance conversation in most boardrooms is still framed around "when the AI Act arrives." It arrived. The question now is which obligations have already been missed, which are imminent, and which require multi-year preparation that should have started yesterday.
Here is every deadline, what it requires, and what it means in practice.
What Has Already Happened
12 July 2024 — Publication
The AI Act was published in the Official Journal of the European Union as Regulation (EU) 2024/1689. From this point, it became binding law.
1 August 2024 — Entry into Force
The regulation formally entered into force twenty days after publication. No substantive obligations applied yet, but the clock started ticking on every phased deadline. The AI Office within the European Commission began its establishment. Standards development bodies accelerated their work.
2 February 2025 — The First Obligations (Already Live)
This is the deadline most organisations have missed.
Chapters I and II became applicable. Two articles now carry legal force:
Article 4 — AI Literacy. Every provider and deployer of AI systems must ensure that their staff and other persons dealing with the operation and use of AI systems have a sufficient level of AI literacy. The text is deliberately broad: it covers technical understanding, awareness of risks and limitations, and knowledge of the context in which the system operates. This is not limited to IT departments. It applies to every function that touches AI — legal, HR, finance, operations, procurement.
"Sufficient" is not defined by a certification or a number of training hours. It is calibrated to the context: the role, the technology, and the risks involved. For a lawyer using an AI research tool, sufficient literacy is different from a call centre operator using a chatbot. But both require it, and both are subject to enforcement.
Article 5 — Prohibited AI Practices. Seven categories of AI systems are banned outright, with no grace period and no transitional relief:
- Subliminal, manipulative, or deceptive techniques causing significant harm
- Exploitation of vulnerabilities due to age, disability, or socioeconomic situation
- Social scoring by public or private actors leading to detrimental treatment
- Predictive policing based solely on profiling
- Untargeted scraping of facial images for recognition databases
- Emotion recognition in workplaces and educational institutions (with narrow exceptions)
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
If your organisation operates any system in these categories, it is already in violation. There is no upcoming compliance date to wait for.
What Is Happening Now
2 May 2025 — GPAI Codes of Practice
The AI Office was expected to finalise Codes of Practice for general-purpose AI (GPAI) models by this date, giving providers of foundation models and large language models practical guidance on their obligations. The final version of the GPAI Code of Practice was published on 10 July 2025.
Member States must also identify and publicly list the authorities and bodies responsible for fundamental rights protection, and notify the Commission and other Member States.
2 August 2025 — GPAI Obligations and Governance
This is the twelve-month mark. Several major frameworks activate:
General-Purpose AI Model Obligations (Chapter V, Articles 51-56). All GPAI model providers must maintain technical documentation, provide information to downstream providers, comply with copyright law including opt-out mechanisms, and publish a sufficiently detailed summary of their training data.
Providers of GPAI models with systemic risk — currently presumed where the cumulative compute used for training exceeds 10^25 floating-point operations (FLOPs) — face additional obligations: model evaluations, systemic risk assessment and mitigation, adversarial testing (red-teaming), cybersecurity measures, serious incident reporting, and energy consumption disclosure.
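The 10^25 FLOP presumption can be sanity-checked with a back-of-envelope calculation. The sketch below uses the widely cited ~6 × parameters × training-tokens estimate for transformer training compute — a community heuristic, not anything defined in the Act — so treat the result as a first-pass screen, not a legal determination:

```python
# Rough screen against the AI Act's 10^25 FLOP systemic-risk presumption.
# The 6 * params * tokens compute estimate is a common heuristic for
# transformer training runs, NOT part of the regulation itself.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # Article 51(2) presumption


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens lands around
# 6.3e24 FLOPs -- under the presumption threshold on this estimate.
below = presumed_systemic_risk(70e9, 15e12)
```

Because the threshold is a rebuttable presumption and the Commission can designate models on other criteria, a result below 10^25 does not by itself settle the question.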
Governance Structure (Chapter VII, Articles 64-70). The full governance architecture must be operational: the AI Office, the AI Board (Member State representatives), the Scientific Panel of independent experts, and the Advisory Forum (stakeholder body).
Penalty Frameworks (Chapter XII, Articles 99-100). Member States must adopt national rules on penalties. However, fines specifically for GPAI model providers (Article 101) do not yet apply — that comes later.
National Authorities. Member States must designate and notify their national competent authorities to the Commission.
Transition rule: GPAI models placed on the market before 2 August 2025 receive a grace period — providers must comply by 2 August 2027.
What Is Coming
2 February 2026 — Commission Guidelines (Missed)
The Commission was required to provide guidelines specifying the practical implementation of Article 6 — the rules for classifying high-risk AI systems — including a comprehensive list of practical examples of high-risk and non-high-risk use cases.
The Commission missed this deadline. As IAPP reported, the final draft guidelines were expected by end of February 2026 but had not been published at the time of writing.
This matters enormously. Organisations trying to determine whether their AI systems qualify as high-risk under Annex III are operating without the classification guidance that was supposed to help them prepare for the August 2026 enforcement date. The window for preparation is shrinking while the guidance remains absent.
2 August 2026 — The General Application Date
This is the cliff.
The bulk of the AI Act's obligations take effect on this date. Every organisation that develops, deploys, or distributes AI systems in the EU must be compliant. Here is what activates:
High-Risk AI Systems (Annex III). AI systems in eight designated areas become subject to the full high-risk compliance regime. The areas are: biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential services, law enforcement, migration and border control, and administration of justice and democratic processes. For each, providers must implement:
- Risk management (Article 9) — a continuous, documented process throughout the system lifecycle
- Data governance (Article 10) — training and testing data must meet quality criteria
- Technical documentation (Article 11) — drawn up before the system is placed on the market
- Record-keeping (Article 12) — automatic logging of events relevant to risk identification
- Transparency (Article 13) — clear information provided to deployers
- Human oversight (Article 14) — designed into the system, not bolted on
- Accuracy, robustness, and cybersecurity (Article 15)
Deployers face their own obligations under Article 26: follow the provider's instructions for use, assign competent human overseers, monitor operations, report incidents, and — for public bodies and certain private entities — conduct fundamental rights impact assessments (Article 27) before deployment.
Transparency Obligations (Article 50). AI systems that interact with people must make it clear the person is interacting with AI. Providers of synthetic content (audio, image, video, text) must ensure outputs are machine-readable as artificially generated. Deployers of emotion recognition and deepfake systems must inform affected persons.
Regulatory Sandboxes. Every Member State must have at least one operational AI regulatory sandbox by this date.
Full Enforcement and Penalties. The penalty structure becomes enforceable:
| Violation | Maximum Fine |
|---|---|
| Prohibited practices (Article 5) | €35 million or 7% of global annual turnover |
| High-risk non-compliance | €15 million or 3% of global annual turnover |
| Misleading information to authorities | €7.5 million or 1% of global annual turnover |
SMEs and startups receive proportionate reductions: for them, the applicable cap is the lower of the fixed amount and the turnover percentage (Article 99(6)). Article 101 — fines for GPAI model providers — also becomes enforceable at this point.
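The ceiling arithmetic in the table is simple but easy to get backwards: for most operators the maximum is the higher of the fixed amount and the turnover percentage. A minimal sketch, mirroring the figures above (illustrative only, not legal advice):

```python
# Fine ceilings under Article 99: the maximum is whichever is HIGHER of a
# fixed amount and a share of worldwide annual turnover. Figures mirror the
# table in the text; this is an illustration, not legal advice.

FINE_CAPS = {
    "prohibited_practice": (35_000_000, 0.07),      # Article 5 violations
    "high_risk_noncompliance": (15_000_000, 0.03),
    "misleading_information": (7_500_000, 0.01),
}


def max_fine(violation: str, global_turnover_eur: float) -> float:
    """Ceiling for a given violation type and worldwide annual turnover."""
    fixed_cap, turnover_share = FINE_CAPS[violation]
    return max(fixed_cap, turnover_share * global_turnover_eur)


# A company with EUR 2bn turnover facing a prohibited-practice fine:
# max(35M, 7% of 2bn = 140M) -> the ceiling is EUR 140M, not 35M.
ceiling = max_fine("prohibited_practice", 2_000_000_000)
```

Note the asymmetry the Act creates for SMEs, where the applicable cap flips to the lower of the two values, so the same function would use `min` instead of `max` for those operators.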
2 August 2027 — Safety Components in Regulated Products
The thirty-six-month mark extends high-risk classification to AI systems used as safety components of products covered by EU harmonisation legislation listed in Annex I. This covers AI embedded in:
- Medical devices and in-vitro diagnostics
- Machinery and equipment
- Toys
- Lifts and pressure equipment
- Radio equipment
- Motor vehicles and trailers
- Civil aviation systems
- Marine equipment
- Railway interoperability systems
- Personal protective equipment
This is also the deadline for GPAI models that were on the market before August 2025 — they must be fully compliant by this date.
After 2 August 2027, the AI Act is fully applicable across all categories and risk levels.
31 December 2030 — The Final Deadline
Two categories of legacy systems receive extended transitional periods:
Large-scale IT systems listed in Annex X — Union legislative systems in the area of freedom, security, and justice (the Schengen Information System, Visa Information System, Eurodac, Entry/Exit System, ETIAS, ECRIS-TCN) — that were placed on the market before August 2027 must achieve full compliance by 31 December 2030.
High-risk AI systems intended to be used by public authorities that were placed on the market or put into service before August 2026 must be brought into compliance by 2 August 2030 (Article 111(2)).
The Timeline at a Glance
| Date | What Happens |
|---|---|
| 12 Jul 2024 | Published in Official Journal |
| 1 Aug 2024 | Entry into force |
| 2 Feb 2025 | Article 4 (AI literacy) + Article 5 (prohibited AI) — enforceable now |
| 2 May 2025 | GPAI Codes of Practice due |
| 2 Aug 2025 | GPAI obligations, governance structure, penalty frameworks |
| 2 Feb 2026 | Commission high-risk classification guidelines (deadline missed) |
| 2 Aug 2026 | High-risk obligations, transparency, deployer duties, full enforcement |
| 2 Aug 2027 | Safety components in regulated products, GPAI transition ends |
| 31 Dec 2030 | Large-scale IT systems (Annex X); public authority legacy systems due 2 Aug 2030 |
The Competence Gap No One Is Talking About
Here is the uncomfortable arithmetic. Article 4 requires AI literacy now. High-risk compliance requires risk management, human oversight, and fundamental rights impact assessments by August 2026. You cannot conduct a meaningful risk assessment of a system you do not understand. You cannot provide human oversight without the competence to evaluate what the system is doing. You cannot assess fundamental rights implications without understanding how the AI processes data, where bias enters, and what the system's failure modes look like.
AI literacy is not a separate compliance workstream. It is the prerequisite for every other obligation in the Act. Organisations that treat Article 4 as a training checkbox and Article 9-15 as a separate project will discover, painfully, that the two are inseparable.
The Commission missed its own deadline for publishing classification guidelines. That delay does not give anyone more time. It gives less certainty about what high-risk means while the clock to comply with high-risk requirements continues to tick.
What to Do Right Now
If you have done nothing: Start with an AI systems inventory. List every AI system your organisation uses — not just the ones IT procured, but the ones individual teams subscribed to. Many organisations discover they have three to five times more AI deployments than their IT department knows about.
If you have an inventory: Classify each system against the Annex III categories. Recruitment AI, credit scoring, educational assessment tools — if it touches any of the eight high-risk areas, it is probably high-risk. Remember: under Article 6(3), an Annex III system that performs profiling of natural persons cannot benefit from the classification derogations and is always treated as high-risk.
If you have classified your systems: Build your Article 4 literacy programme now, calibrated to roles. The people who operate high-risk systems need deeper competence than those who use low-risk tools. But everyone needs a baseline.
If you are already building literacy: Start your risk management documentation (Article 9), vendor due diligence for technical documentation (Article 11), and human oversight protocols (Article 14). As of this writing, August 2026 is roughly five months away. For organisations with complex AI deployments across multiple functions, that is not a lot of time.
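The inventory-then-classify workflow above can be sketched as a simple data structure. Everything here is illustrative — the field names, the area labels, and the screening rule are assumptions, not an official taxonomy — and a positive flag means "route to legal review", not "legally high-risk":

```python
# Minimal sketch of an AI-system inventory with a first-pass Annex III
# screen. Labels and fields are illustrative assumptions; a real
# classification requires legal review of the actual Annex III wording.
from dataclasses import dataclass, field

# Shorthand labels for the eight Annex III areas (our naming, not the Act's).
ANNEX_III_AREAS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border", "justice",
}


@dataclass
class AISystem:
    name: str
    vendor: str
    business_owner: str
    areas_touched: set = field(default_factory=set)
    performs_profiling: bool = False

    def needs_high_risk_review(self) -> bool:
        """Flag for legal review if the system touches any Annex III area."""
        return bool(self.areas_touched & ANNEX_III_AREAS)


inventory = [
    AISystem("CV screening tool", "VendorX", "HR",
             areas_touched={"employment"}, performs_profiling=True),
    AISystem("Marketing copy assistant", "VendorY", "Marketing"),
]
flagged = [s.name for s in inventory if s.needs_high_risk_review()]
# flagged -> ["CV screening tool"]
```

Even a spreadsheet version of this record — name, vendor, owner, areas touched, profiling yes/no — covers the first two steps above and gives the literacy programme a concrete scope to calibrate against.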
The six-year countdown started in August 2024. Almost two of those years are already behind us. The regulation will not wait for your readiness programme to catch up.
Sources
- EU AI Act Official Text — Regulation (EU) 2024/1689, published in the Official Journal of the European Union, 12 July 2024. eur-lex.europa.eu
- Article 113 — Entry into Force and Application — Timeline provisions of the EU AI Act. artificialintelligenceact.eu
- Implementation Timeline — EU Artificial Intelligence Act resource with visual timeline. artificialintelligenceact.eu
- IAPP — Commission Misses Deadline for AI Act Guidance on High-Risk Systems — Report on the missed February 2026 guidelines deadline. iapp.org
- AI Act Service Desk — Timeline — European Commission's official implementation timeline resource. ai-act-service-desk.ec.europa.eu
- Annex III — High-Risk AI Systems — Full list of high-risk AI system areas referred to in Article 6(2). artificialintelligenceact.eu

