EU AI Act Compliance Checklist 2026 — The Complete Guide
The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. If your organization develops, deploys, or distributes AI systems that touch the European market, compliance is no longer optional — it’s the law.
With the high-risk obligations taking full effect in August 2026, mid-market companies face a narrow window to get their house in order. This guide provides a practical, step-by-step compliance checklist so you know exactly what to do, when to do it, and what’s at stake if you don’t.
What Is the EU AI Act?
The EU AI Act is a regulation — not a directive — meaning it applies directly across all 27 EU member states without requiring national transposition. It regulates AI systems based on the risk they pose to health, safety, and fundamental rights.
The Act applies to:
- Providers (developers) of AI systems placed on the EU market, regardless of where they are established
- Deployers (users) of AI systems within the EU
- Importers and distributors who bring AI systems into the EU market
- Product manufacturers who integrate AI into products covered by EU harmonised legislation
If your AI system’s output is used in the EU — even if your company is headquartered in San Francisco, Singapore, or São Paulo — the Act applies to you.
Not sure if the EU AI Act applies to you? — Take the free EU AI Act assessment and find out in under 5 minutes.
The Enforcement Timeline
The EU AI Act entered into force on 1 August 2024, but obligations phase in over a staggered timeline:
| Date | What Takes Effect |
|---|---|
| 2 February 2025 | Prohibitions on unacceptable-risk AI practices (Article 5) |
| 2 August 2025 | Obligations for general-purpose AI (GPAI) models (Chapter V) |
| 2 August 2026 | Full obligations for high-risk AI systems (Chapter III, Articles 6–51) |
| 2 August 2027 | Obligations for high-risk AI systems embedded in products covered by Annex I EU harmonised legislation |
The February 2025 prohibitions are already in force. If you haven’t reviewed your AI portfolio against the banned practices list, that’s your first priority.
The Four Risk Tiers
The EU AI Act classifies AI systems into four tiers. Your compliance obligations depend entirely on which tier your system falls into.
1. Unacceptable Risk (Prohibited)
These AI practices are banned outright under Article 5:
- Social scoring leading to detrimental or unfavourable treatment (by public or private actors)
- Real-time remote biometric identification in public spaces (with narrow law enforcement exceptions)
- Exploitation of vulnerabilities of specific groups (age, disability, socio-economic situation)
- Subliminal manipulation causing harm
- Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
- Untargeted scraping of facial images for facial recognition databases
- Biometric categorisation inferring sensitive attributes (race, political opinions, sexual orientation)
Checklist: Review every AI system in your portfolio. If any system falls into these categories, decommission it immediately.
2. High Risk
High-risk AI systems carry the heaviest compliance burden. A system is high-risk if it falls under one of two paths:
- Path 1 (Article 6(1)): The AI system is a safety component of a product, or is itself a product, covered by the EU harmonised legislation listed in Annex I, AND requires a third-party conformity assessment.
- Path 2 (Article 6(2)): The AI system falls into one of the use-case categories listed in Annex III (e.g., biometrics, critical infrastructure, employment, credit scoring, law enforcement, migration, justice).
We cover the high-risk checklist in detail below.
3. Limited Risk (Transparency Obligations)
These systems must meet specific transparency requirements under Article 50:
- Chatbots and conversational AI: Must inform users they are interacting with an AI system
- Emotion recognition systems: Must inform subjects that the system is in operation
- Deep fakes / synthetic content: Must label content as artificially generated or manipulated
- AI-generated text published to inform the public: Must disclose AI generation
Checklist: Implement clear disclosure mechanisms. Audit user-facing interfaces for transparency compliance.
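For a chatbot, the disclosure obligation can be as simple as a guaranteed first-turn notice. A minimal sketch, assuming a turn-based interface (the message wording and function name are illustrative, not prescribed by the Act):

```python
# Article 50(1)-style disclosure: users must be informed they are
# interacting with an AI system, unless this is obvious from context.
AI_DISCLOSURE = "You are chatting with an AI system, not a human."

def with_disclosure(reply: str, is_first_turn: bool) -> str:
    """Prepend the AI disclosure to the first reply in a conversation."""
    if is_first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply
```

In practice you would also surface the notice persistently in the UI rather than relying on a single message.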
4. Minimal Risk
Most AI systems fall here (spam filters, AI-enabled video games, inventory management). No mandatory obligations, though voluntary codes of conduct are encouraged.
Classify your AI system in 2 minutes — Use the AISight classifier wizard to determine your risk tier instantly.
The High-Risk Compliance Checklist
This is where the real work lives. If your AI system is classified as high-risk, you must comply with a comprehensive set of requirements spanning Articles 9 through 27.
Requirements for Providers (Articles 9–15)
Risk Management System (Article 9)
- [ ] Establish a continuous, iterative risk management system throughout the AI system’s lifecycle
- [ ] Identify and analyse known and reasonably foreseeable risks to health, safety, and fundamental rights
- [ ] Implement risk mitigation measures and evaluate residual risk
- [ ] Test the system to identify the most appropriate risk management measures
- [ ] Document all risk assessments and mitigation decisions
Data and Data Governance (Article 10)
- [ ] Use training, validation, and testing datasets that are relevant, sufficiently representative, and as free of errors as possible
- [ ] Implement data governance practices covering data collection, preparation, labelling, and quality controls
- [ ] Address potential biases in datasets, especially where special categories of personal data are processed
- [ ] Document data provenance, characteristics, and any gaps or shortcomings
Technical Documentation (Article 11)
- [ ] Prepare technical documentation before the system is placed on the market
- [ ] Include a general description of the AI system, its intended purpose, and how it interacts with hardware and software
- [ ] Document the development process, design choices, and system architecture
- [ ] Describe monitoring, functioning, and control mechanisms
- [ ] Keep documentation up to date throughout the system’s lifecycle
Record-Keeping / Logging (Article 12)
- [ ] Build automatic logging capabilities into the AI system
- [ ] Ensure logs capture events relevant to identifying risks and substantial modifications
- [ ] Retain logs for an appropriate period commensurate with the system’s intended purpose
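A structured, machine-readable event log makes the Article 12 obligations auditable. The sketch below is one possible shape, assuming JSON-lines output; the field names are our assumptions, since the Act specifies outcomes (traceability of risks and modifications), not a schema:

```python
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")

def log_event(event_type: str, model_version: str, details: dict) -> str:
    """Emit a timestamped, machine-readable audit record and return it."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,        # e.g. "inference", "override", "anomaly"
        "model_version": model_version,  # ties events to substantial modifications
        "details": details,
    }
    line = json.dumps(record, sort_keys=True)
    logger.info(line)  # route to durable storage via a handler in production
    return line
```

Versioning every record against the deployed model makes it possible to reconstruct which modification was live when an incident occurred.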
Transparency and Information to Deployers (Article 13)
- [ ] Design the system to be sufficiently transparent for deployers to interpret and use outputs appropriately
- [ ] Provide clear instructions for use, including the system’s capabilities, limitations, and intended purpose
- [ ] Communicate the level of accuracy, robustness, and cybersecurity the system was designed for
Human Oversight (Article 14)
- [ ] Design the system so it can be effectively overseen by natural persons during use
- [ ] Enable the human overseer to fully understand the system’s capabilities and limitations
- [ ] Provide the ability to override, interrupt, or stop the system
- [ ] Implement mechanisms to flag anomalies, dysfunctions, or unexpected performance
Accuracy, Robustness, and Cybersecurity (Article 15)
- [ ] Achieve and maintain appropriate levels of accuracy for the system’s intended purpose
- [ ] Build resilience against errors, faults, and inconsistencies
- [ ] Implement cybersecurity measures to protect against unauthorised access and adversarial attacks
- [ ] Document accuracy metrics and known limitations
Additional Provider Obligations
| Obligation | Article | Description |
|---|---|---|
| Quality management system | Art. 17 | Implement a QMS covering all aspects of compliance |
| Documentation keeping | Art. 18 | Keep technical documentation at the disposal of authorities for 10 years after market placement |
| Conformity assessment | Art. 43 | Complete the appropriate conformity assessment before market placement |
| EU Declaration of Conformity | Art. 47 | Draw up a declaration of conformity for each high-risk AI system |
| CE marking | Art. 48 | Affix the CE marking to the AI system or its documentation |
| EU database registration | Art. 49 | Register the system in the EU database before market placement |
| Post-market monitoring | Art. 72 | Establish a post-market monitoring system proportionate to the AI system |
| Serious incident reporting | Art. 73 | Report serious incidents to market surveillance authorities without undue delay |
| Corrective actions | Art. 20 | Take corrective action if the system is not in conformity |
Requirements for Deployers (Articles 26–27)
If you deploy (use) a high-risk AI system rather than develop it, your obligations are lighter but still significant:
- [ ] Use the system in accordance with the provider’s instructions for use
- [ ] Assign human oversight to competent, trained, and authorised individuals
- [ ] Ensure input data is relevant and sufficiently representative for the system’s intended purpose
- [ ] Monitor the system’s operation and report malfunctions or serious incidents to the provider and/or authorities
- [ ] Conduct a fundamental rights impact assessment (FRIA) if you are a body governed by public law, a private entity providing public services, or deploying systems for creditworthiness or risk assessment in life/health insurance
- [ ] Keep logs automatically generated by the system for at least six months
- [ ] Inform workers’ representatives and affected employees when deploying high-risk AI in the workplace
Check your readiness in 5 minutes — Take the free EU AI Act assessment to see where your organization stands.
Conformity Assessment: What You Need to Know
Before placing a high-risk AI system on the EU market, providers must complete a conformity assessment (Article 43). There are two routes:
- Internal conformity assessment (Annex VI): The provider self-assesses compliance. Available for most Annex III high-risk systems.
- Third-party conformity assessment (Annex VII): A notified body assesses compliance. Required for Annex III biometric systems where harmonised standards have not been fully applied, and for systems covered by Annex I harmonised legislation that already requires third-party assessment.
After completing the assessment, you must draw up an EU Declaration of Conformity (Article 47), affix the CE marking (Article 48), and register the system in the EU database (Article 49).
EU Database Registration and Post-Market Monitoring
Article 49 requires providers of high-risk AI systems (and deployers that are public authorities or bodies) to register in the EU database before the system is placed on the market or put into service. The database is publicly accessible. This is not a one-time task: you must update the registration whenever there are substantial modifications.
Article 72 requires providers to establish a post-market monitoring system that actively and systematically collects data on performance throughout the system’s lifecycle, evaluates continuous compliance, and feeds into the risk management system for ongoing updates.
Key Deadlines Summary
| Deadline | Action Required |
|---|---|
| Now (since Feb 2025) | Ensure no prohibited AI practices are in use |
| 2 August 2025 | GPAI model providers must comply with Chapter V obligations |
| 2 August 2026 | Full compliance for high-risk AI systems under Annex III |
| 2 August 2027 | Full compliance for high-risk AI embedded in Annex I products |
| Ongoing | Post-market monitoring, incident reporting, database registration updates |
Penalties for Non-Compliance
The EU AI Act imposes significant fines:
- €35 million or 7% of global annual turnover (whichever is higher) for prohibited AI practices
- €15 million or 3% of global annual turnover for violations of high-risk requirements
- €7.5 million or 1.5% of global annual turnover for supplying incorrect information to authorities
For SMEs and startups, fines are capped at the lower of the two amounts (percentage or fixed sum), providing some proportionality — but the financial exposure is still substantial.
Your Action Plan
- Inventory all AI systems in your organization — including third-party tools and embedded AI components
- Classify each system by risk tier using the Article 6 framework
- Prioritise high-risk systems and begin implementing Articles 9–15 requirements
- Establish a quality management system (Article 17) if you don’t already have one
- Prepare technical documentation and logging infrastructure
- Plan your conformity assessment — determine whether you need internal or third-party assessment
- Register in the EU database before placing systems on the market
- Set up post-market monitoring and incident reporting processes
- Train your teams — human oversight requires competent, informed personnel
- Document everything — the Act rewards organisations that can demonstrate systematic compliance efforts
Start your compliance journey today — The AISight assessment tool walks you through every requirement and gives you a personalised compliance roadmap. It takes less than 5 minutes.
Conclusion
The EU AI Act is not a distant regulatory threat — it’s here, and the clock is ticking. The organisations that start now will have a competitive advantage: they’ll be able to demonstrate trustworthiness to customers, partners, and regulators while their competitors scramble to catch up.
Compliance doesn’t have to be overwhelming. Break it into manageable steps, classify your systems accurately, and focus your resources on the highest-risk areas first. The checklist above gives you a clear path forward.
The question isn’t whether you need to comply. It’s whether you’ll be ready as each enforcement deadline lands.