
EU AI Act Compliance Checklist 2026 — The Complete Guide

9 min read


The EU AI Act (Regulation 2024/1689) is the world’s first comprehensive legal framework for artificial intelligence. If your organization develops, deploys, or distributes AI systems that touch the European market, compliance is no longer optional — it’s the law.

With the high-risk obligations taking full effect in August 2026, mid-market companies face a narrow window to get their house in order. This guide provides a practical, step-by-step compliance checklist so you know exactly what to do, when to do it, and what’s at stake if you don’t.

What Is the EU AI Act?

The EU AI Act is a regulation — not a directive — meaning it applies directly across all 27 EU member states without requiring national transposition. It regulates AI systems based on the risk they pose to health, safety, and fundamental rights.

The Act applies to:

  - Providers placing AI systems or general-purpose AI models on the EU market, regardless of where they are established
  - Deployers of AI systems that are located in the EU
  - Importers and distributors of AI systems
  - Providers and deployers located outside the EU, where the output produced by the AI system is used in the EU

If your AI system’s output is used in the EU — even if your company is headquartered in San Francisco, Singapore, or São Paulo — the Act applies to you.

Not sure if the EU AI Act applies to you? — Take the free EU AI Act assessment and find out in under 5 minutes.

The Enforcement Timeline

The EU AI Act entered into force on 1 August 2024, but obligations phase in over a staggered timeline:

| Date | What Takes Effect |
| --- | --- |
| 2 February 2025 | Prohibitions on unacceptable-risk AI practices (Article 5) |
| 2 August 2025 | Obligations for general-purpose AI (GPAI) models (Chapter V) |
| 2 August 2026 | Full obligations for high-risk AI systems (Chapter III, Articles 6–49) |
| 2 August 2027 | Obligations for high-risk AI systems embedded in products covered by Annex I EU harmonised legislation |

The February 2025 prohibitions are already in force. If you haven’t reviewed your AI portfolio against the banned practices list, that’s your first priority.

The Four Risk Tiers

The EU AI Act classifies AI systems into four tiers. Your compliance obligations depend entirely on which tier your system falls into.

1. Unacceptable Risk (Prohibited)

These AI practices are banned outright under Article 5:

  - Subliminal or purposefully manipulative techniques that materially distort behaviour and cause significant harm
  - Exploitation of vulnerabilities due to age, disability, or social or economic situation
  - Social scoring that leads to detrimental or unjustified treatment
  - Predicting the risk of a person committing a criminal offence based solely on profiling or personality traits
  - Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
  - Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
  - Biometric categorisation to infer sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation
  - Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions)

Checklist: Review every AI system in your portfolio. If any system falls into these categories, decommission it immediately.

2. High Risk

High-risk AI systems carry the heaviest compliance burden. A system is high-risk if it falls under one of two paths:

  1. Annex I path: the AI system is a safety component of a product (or is itself a product) covered by EU harmonised legislation listed in Annex I and subject to third-party conformity assessment
  2. Annex III path: the AI system falls into one of the use cases listed in Annex III, such as biometrics, critical infrastructure, education, employment, access to essential services, law enforcement, migration, or the administration of justice

We cover the high-risk checklist in detail below.

3. Limited Risk (Transparency Obligations)

These systems must meet specific transparency requirements under Article 50:

  - AI systems that interact directly with people (e.g. chatbots) must disclose that the user is interacting with AI
  - Providers of systems generating synthetic audio, image, video, or text content must mark outputs as artificially generated
  - Deployers of emotion recognition or biometric categorisation systems must inform the people exposed to them
  - Deployers of deepfakes must disclose that the content has been artificially generated or manipulated

Checklist: Implement clear disclosure mechanisms. Audit user-facing interfaces for transparency compliance.

4. Minimal Risk

Most AI systems fall here (spam filters, AI-enabled video games, inventory management). No mandatory obligations, though voluntary codes of conduct are encouraged.
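As a rough illustration (not a legal determination), the four-tier logic above can be sketched as a decision function. The boolean flags are hypothetical simplifications of the Article 5, Article 6, and Article 50 tests:

```python
def classify_risk_tier(prohibited_practice: bool,
                       annex_iii_use_case: bool,
                       annex_i_safety_component: bool,
                       interacts_with_people: bool) -> str:
    """Return a simplified EU AI Act risk tier for a system.

    Illustrative only: real classification requires legal analysis of
    Article 5, Article 6, and Annexes I/III.
    """
    if prohibited_practice:
        return "unacceptable"   # Article 5: banned outright
    if annex_iii_use_case or annex_i_safety_component:
        return "high"           # Article 6: heaviest obligations
    if interacts_with_people:
        return "limited"        # Article 50: transparency duties
    return "minimal"            # voluntary codes of conduct only
```

For example, a CV-screening tool (an Annex III employment use case) would come back as "high", while a spam filter with no user-facing AI interaction would come back as "minimal".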

Classify your AI system in 2 minutes — Use the AISight classifier wizard to determine your risk tier instantly.

The High-Risk Compliance Checklist

This is where the real work lives. If your AI system is classified as high-risk, you must comply with a comprehensive set of requirements spanning Articles 9 through 27.

Requirements for Providers (Articles 9–15)

Risk Management System (Article 9)

  - Establish, implement, document, and maintain a risk management system as a continuous, iterative process across the system's entire lifecycle
  - Identify and analyse known and reasonably foreseeable risks to health, safety, and fundamental rights
  - Adopt targeted risk mitigation measures and test the system against them

Data and Data Governance (Article 10)

  - Apply data governance and management practices to training, validation, and testing datasets
  - Ensure datasets are relevant, sufficiently representative, and, to the best extent possible, free of errors and complete for the intended purpose
  - Examine datasets for possible biases and take measures to detect, prevent, and mitigate them

Technical Documentation (Article 11)

  - Draw up technical documentation in line with Annex IV before the system is placed on the market or put into service
  - Keep the documentation up to date and available for national competent authorities

Record-Keeping / Logging (Article 12)

  - Design the system to automatically record events (logs) over its lifetime
  - Ensure logs provide traceability appropriate to the system's intended purpose, including periods of use and references to the input data
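A minimal sketch of what Article 12-style automatic event logging might look like in practice. The record fields below are illustrative, not mandated by the Act:

```python
import datetime
import json

def log_event(log_file: str, system_id: str, event_type: str,
              input_ref: str, outcome: str) -> dict:
    """Append one traceability record to an append-only JSONL log.

    Field names are an illustrative sketch of Article 12-style logging,
    not a format prescribed by the Act.
    """
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "system_id": system_id,
        "event": event_type,     # e.g. "inference", "override", "error"
        "input_ref": input_ref,  # reference to the input data used
        "outcome": outcome,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append-only, timestamped format like this makes it straightforward to hand traceability records to a deployer or a market surveillance authority on request.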

Transparency and Information to Deployers (Article 13)

  - Design the system so that deployers can interpret its output and use it appropriately
  - Provide clear, complete instructions for use, covering the system's capabilities, limitations, and intended purpose

Human Oversight (Article 14)

  - Design the system so it can be effectively overseen by natural persons while in use
  - Enable the people overseeing it to understand its capacities and limitations, correctly interpret its output, and intervene in or stop its operation

Accuracy, Robustness, and Cybersecurity (Article 15)

  - Achieve appropriate levels of accuracy, robustness, and cybersecurity, maintained consistently throughout the lifecycle
  - Declare the accuracy levels and metrics in the instructions for use
  - Build resilience against errors, faults, and attempts to exploit vulnerabilities (e.g. data poisoning or adversarial inputs)

Additional Provider Obligations (Articles 16–22)

| Obligation | Article | Description |
| --- | --- | --- |
| Quality management system | Art. 17 | Implement a QMS covering all aspects of compliance |
| Technical documentation | Art. 18 | Maintain and update documentation per Article 11 |
| Corrective actions | Art. 20 | Take corrective action if the system is not in conformity |
| Conformity assessment | Art. 43 | Complete the appropriate conformity assessment before market placement |
| EU Declaration of Conformity | Art. 47 | Draw up a declaration of conformity for each high-risk AI system |
| CE marking | Art. 48 | Affix the CE marking to the AI system or its documentation |
| EU database registration | Art. 49 | Register the system in the EU database before market placement |
| Post-market monitoring | Art. 72 | Establish a post-market monitoring system proportionate to the AI system |
| Serious incident reporting | Art. 73 | Report serious incidents to market surveillance authorities without undue delay |

Requirements for Deployers (Articles 26–27)

If you deploy (use) a high-risk AI system rather than develop it, your obligations are lighter but still significant:

  - Use the system in accordance with the provider's instructions for use
  - Assign human oversight to people with the necessary competence, training, and authority
  - Ensure input data under your control is relevant and sufficiently representative for the intended purpose
  - Monitor the system's operation, and suspend use and inform the provider if you suspect a risk
  - Keep the automatically generated logs for an appropriate period (at least six months)
  - Inform workers and their representatives before putting a high-risk system into use in the workplace
  - Where required (e.g. public bodies and certain private deployers), conduct a fundamental rights impact assessment (Article 27)

Check your readiness in 5 minutes — Take the free EU AI Act assessment to see where your organization stands.

Conformity Assessment: What You Need to Know

Before placing a high-risk AI system on the EU market, providers must complete a conformity assessment (Article 43). There are two routes:

  1. Internal conformity assessment (Annex VI): The provider self-assesses compliance. Available for most Annex III high-risk systems.
  2. Third-party conformity assessment (Annex VII): A notified body assesses compliance. Required for biometric identification and categorisation systems, and for systems covered by Annex I harmonised legislation that already requires third-party assessment.
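The route selection can be expressed as a small helper mirroring the two options above. It is a simplification: it ignores carve-outs such as the harmonised-standards provisions in Article 43, so treat it as a first-pass triage, not legal advice:

```python
def conformity_route(biometric_system: bool,
                     annex_i_requires_third_party: bool) -> str:
    """Return which conformity-assessment route likely applies.

    Simplified triage based on the two routes in Article 43; real
    determinations depend on further conditions not modelled here.
    """
    if biometric_system or annex_i_requires_third_party:
        return "Annex VII (notified body)"   # third-party assessment
    return "Annex VI (internal control)"     # provider self-assessment
```

For most Annex III systems outside biometrics, the internal-control route applies, which keeps the assessment in-house but still requires full documentation of how each requirement is met.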

After completing the assessment, you must draw up an EU Declaration of Conformity (Article 47), affix the CE marking (Article 48), and register the system in the EU database (Article 49).

EU Database Registration and Post-Market Monitoring

Article 49 requires providers and deployers of high-risk AI systems to register in the EU database before the system is placed on the market. The database is publicly accessible. This is not a one-time task — you must update the registration whenever there are substantial modifications.

Article 72 requires providers to establish a post-market monitoring system that actively and systematically collects data on performance throughout the system’s lifecycle, evaluates continuous compliance, and feeds into the risk management system for ongoing updates.

Key Deadlines Summary

| Deadline | Action Required |
| --- | --- |
| Now (since Feb 2025) | Ensure no prohibited AI practices are in use |
| 2 August 2025 | GPAI model providers must comply with Chapter V obligations |
| 2 August 2026 | Full compliance for high-risk AI systems under Annex III |
| 2 August 2027 | Full compliance for high-risk AI embedded in Annex I products |
| Ongoing | Post-market monitoring, incident reporting, database registration updates |
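For planning purposes, a trivial helper can track how many days remain before each deadline (dates taken from the table above; the helper itself is just illustrative project-planning arithmetic):

```python
from datetime import date

# Key dates from the enforcement timeline
HIGH_RISK_DEADLINE = date(2026, 8, 2)   # Annex III high-risk obligations
ANNEX_I_DEADLINE = date(2027, 8, 2)     # high-risk AI in Annex I products

def days_remaining(deadline: date, today: date) -> int:
    """Return calendar days left before a deadline (negative if passed)."""
    return (deadline - today).days
```

Feeding `date.today()` into `days_remaining` gives a live countdown you can surface on a compliance dashboard.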

Penalties for Non-Compliance

The EU AI Act imposes significant fines:

  - Up to €35 million or 7% of total worldwide annual turnover (whichever is higher) for violations of the Article 5 prohibitions
  - Up to €15 million or 3% of total worldwide annual turnover for non-compliance with most other obligations, including the high-risk requirements
  - Up to €7.5 million or 1% of total worldwide annual turnover for supplying incorrect, incomplete, or misleading information to authorities

For SMEs and startups, fines are capped at the lower of the two amounts (percentage or fixed sum), providing some proportionality — but the financial exposure is still substantial.
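A quick arithmetic sketch of how the caps interact, assuming the percentage-or-fixed-sum structure described above. `max_fine` is a hypothetical helper for back-of-the-envelope exposure estimates, not an official calculation method:

```python
def max_fine(worldwide_turnover: float, pct: float, fixed_cap: float,
             sme: bool = False) -> float:
    """Return the applicable maximum fine in euros.

    Standard operators face the HIGHER of the percentage and fixed
    amounts; SMEs and startups face the LOWER of the two.
    """
    pct_amount = worldwide_turnover * pct
    return min(pct_amount, fixed_cap) if sme else max(pct_amount, fixed_cap)
```

For a prohibited-practice breach (7% / €35M tier): a company with €1bn turnover faces up to €70M, while an SME with €10M turnover faces up to €700,000, illustrating the proportionality mentioned above.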

Your Action Plan

  1. Inventory all AI systems in your organization — including third-party tools and embedded AI components
  2. Classify each system by risk tier using the Article 6 framework
  3. Prioritise high-risk systems and begin implementing Articles 9–15 requirements
  4. Establish a quality management system (Article 17) if you don’t already have one
  5. Prepare technical documentation and logging infrastructure
  6. Plan your conformity assessment — determine whether you need internal or third-party assessment
  7. Register in the EU database before placing systems on the market
  8. Set up post-market monitoring and incident reporting processes
  9. Train your teams — human oversight requires competent, informed personnel
  10. Document everything — the Act rewards organisations that can demonstrate systematic compliance efforts
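Steps 1 through 3 above can be kicked off with something as simple as a structured inventory. The record fields below are illustrative, not prescribed by the Act:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One entry in a hypothetical AI-system inventory (illustrative fields)."""
    name: str
    vendor: str       # "internal" or the third-party supplier
    risk_tier: str    # "unacceptable" | "high" | "limited" | "minimal"
    role: str         # your role for this system: "provider" or "deployer"

inventory = [
    AISystemRecord("resume-screener", "internal", "high", "provider"),
    AISystemRecord("support-chatbot", "vendor-x", "limited", "deployer"),
    AISystemRecord("spam-filter", "vendor-y", "minimal", "deployer"),
]

# Step 3: prioritise the high-risk systems first
high_risk = [s.name for s in inventory if s.risk_tier == "high"]
```

Even a spreadsheet with these four columns gives you the classification baseline everything else in the checklist builds on.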

Start your compliance journey today — The AISight assessment tool walks you through every requirement and gives you a personalised compliance roadmap. It takes less than 5 minutes.

Conclusion

The EU AI Act is not a distant regulatory threat — it’s here, and the clock is ticking. The organisations that start now will have a competitive advantage: they’ll be able to demonstrate trustworthiness to customers, partners, and regulators while their competitors scramble to catch up.

Compliance doesn’t have to be overwhelming. Break it into manageable steps, classify your systems accurately, and focus your resources on the highest-risk areas first. The checklist above gives you a clear path forward.

The question isn’t whether you need to comply. It’s whether you’ll be ready when enforcement begins.