EU AI Act Penalties — What Happens If You’re Not Compliant
The EU AI Act doesn’t just set rules — it backs them with some of the largest regulatory fines in the world. At up to €35 million or 7% of global annual turnover, the maximum penalties are nearly double the GDPR’s. For mid-market companies, a single enforcement action could be existential.
Yet many compliance professionals still treat the penalty framework as an abstract threat. That’s a mistake. Enforcement infrastructure is being built right now, national authorities are being designated, and the first prohibition deadlines have already passed. This guide breaks down exactly how penalties work, what triggers them, how they’re calculated, and what you should be doing today to avoid them.
The Three-Tier Penalty Structure
The EU AI Act establishes three tiers of administrative fines under Article 99, each tied to the severity of the violation. The structure is designed to be proportionate but punitive — the more fundamental the violation, the higher the fine.
Tier 1: Prohibited AI Practices — €35 Million or 7% of Global Turnover
The highest penalties are reserved for violations of Article 5 — the outright bans on certain AI practices. If your organisation develops, deploys, or makes available an AI system that engages in a prohibited practice, you face fines of up to:
- €35 million, or
- 7% of total worldwide annual turnover for the preceding financial year
Whichever amount is higher applies, a rule that holds across all three tiers for large enterprises (SMEs get the lower of the two; see the SME provisions below). In practice, for any company with global annual turnover above €500 million, the 7% cap is the binding figure at this tier.
What triggers Tier 1 penalties:
- Operating a social scoring system
- Deploying real-time remote biometric identification in publicly accessible spaces (outside the narrow law enforcement exceptions)
- Using AI to exploit vulnerabilities of specific groups based on age, disability, or social or economic situation
- Deploying subliminal or manipulative techniques that materially distort behaviour and cause, or are likely to cause, significant harm
- Using emotion recognition in workplaces or educational institutions (outside narrow exceptions)
- Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
- Using biometric categorisation to infer sensitive attributes such as race, political opinions, trade union membership, religious beliefs, or sexual orientation
These prohibitions have been in force since 2 February 2025. If you haven’t audited your AI portfolio against this list, you’re already at risk.
Tier 2: Other Violations — €15 Million or 3% of Global Turnover
The second tier covers non-compliance with most other obligations under the Act, including the high-risk requirements that take effect on 2 August 2026. Fines reach up to:
- €15 million, or
- 3% of total worldwide annual turnover for the preceding financial year
What triggers Tier 2 penalties:
- Failing to comply with high-risk AI system requirements (Articles 9–15): risk management, data governance, technical documentation, logging, transparency, human oversight, accuracy/robustness/cybersecurity
- Failing to meet provider obligations (Articles 16–22): quality management systems, conformity assessments, CE marking, EU database registration
- Failing to meet deployer obligations (Article 26): improper use, inadequate human oversight, failure to monitor or report incidents
- Non-compliance with GPAI model obligations (Articles 51–56), where fines on providers are imposed directly by the Commission under Article 101 at the same level
- Failing to meet transparency obligations (Article 50)
- Non-compliance with obligations for notified bodies
This is the tier that will affect the most mid-market companies. If you deploy AI-powered HR tools, credit scoring systems, or any Annex III use case without meeting the high-risk requirements, Tier 2 penalties apply.
Tier 3: Incorrect Information — €7.5 Million or 1% of Global Turnover
The third tier targets a specific but important violation: providing incorrect, incomplete, or misleading information to national competent authorities or notified bodies. Fines reach up to:
- €7.5 million, or
- 1% of total worldwide annual turnover for the preceding financial year
What triggers Tier 3 penalties:
- Supplying false or misleading information in response to a request from a market surveillance authority
- Providing incorrect information during a conformity assessment
- Misrepresenting the risk classification of an AI system
- Failing to disclose known deficiencies or incidents when required
This tier is particularly relevant during audits and investigations. If an authority asks about your AI system and you provide inaccurate information — whether intentionally or through negligence — you’re exposed.
Check your readiness in 5 minutes — Take the free EU AI Act assessment to see where your organization stands.
Who Enforces the EU AI Act?
Enforcement is split between national authorities and the EU-level AI Office.
National Market Surveillance Authorities
Each EU member state must designate at least one national competent authority to serve as the market surveillance authority for AI. These authorities are responsible for:
- Monitoring compliance within their jurisdiction
- Conducting investigations and audits
- Ordering corrective actions (including withdrawal of AI systems from the market)
- Imposing administrative fines
The specific authority varies by member state. Some countries are designating existing data protection authorities (leveraging GDPR enforcement experience), while others are creating new bodies or assigning the role to sector-specific regulators.
The EU AI Office
The AI Office, established within the European Commission, has a coordinating and direct enforcement role:
- Direct enforcement of GPAI model obligations — for general-purpose AI models, enforcement sits with the Commission, acting through the AI Office, which can impose fines directly under Article 101
- Coordination across national authorities to ensure consistent enforcement
- Guidance and standards — developing codes of practice, harmonised standards, and implementation guidance
- Monitoring of systemic risks from GPAI models
For mid-market companies, your primary enforcement contact will be the national authority in the member state(s) where you operate or place AI systems on the market.
What Triggers Penalties — Real-World Scenarios
Abstract fine amounts are hard to internalise. Here are concrete scenarios that illustrate how penalties could materialise for mid-market companies.
Scenario 1: The Unaudited HR Tool
A mid-market SaaS company uses an AI-powered resume screening tool to filter job applicants. The tool falls squarely into Annex III, Category 4 (Employment). The company never classified the system, never implemented Articles 9–15 requirements, and has no conformity assessment or EU database registration.
A rejected job applicant files a complaint with the national authority. The authority investigates and finds systemic non-compliance. Potential penalty: up to €15 million or 3% of global turnover, plus an order to suspend use of the system until compliance is achieved.
Scenario 2: The Emotion Recognition System
A retail company deploys an AI system in its stores that analyses customer facial expressions to gauge satisfaction. The system performs emotion recognition — which, when used in certain contexts, may fall under the prohibited practices of Article 5 or at minimum triggers high-risk classification under Annex III, Category 1 (Biometrics).
If the system is found to violate Article 5 prohibitions, the company faces Tier 1 penalties: up to €35 million or 7% of global turnover. If it’s classified as high-risk but the company hasn’t met the requirements, Tier 2 penalties apply: up to €15 million or 3%.
Scenario 3: The Misleading Audit Response
A fintech company uses an AI credit scoring model. During a market surveillance audit, the company claims the model only performs a “narrow procedural task” (Article 6(3) exception) to avoid high-risk classification. The authority determines this characterisation is misleading — the model materially influences credit decisions.
The company faces Tier 3 penalties for incorrect information (up to €7.5 million or 1%) on top of Tier 2 penalties for non-compliance with high-risk requirements (up to €15 million or 3%).
Scenario 4: The Third-Party Tool Defence
A company deploys a third-party AI system for workforce scheduling that assigns shifts based on productivity metrics. The company assumes compliance is the vendor’s responsibility. The national authority disagrees — deployers have independent obligations under Article 26, including human oversight, monitoring, and incident reporting.
Deployers cannot outsource their compliance obligations. The company faces Tier 2 penalties for its own failures, regardless of the provider’s compliance status.
Don’t wait for an audit — Use the AISight classifier wizard to classify your AI systems and identify compliance gaps before a regulator does.
How Penalties Are Calculated — Proportionality Factors
The EU AI Act doesn’t mandate automatic maximum fines. Article 99(7) requires national authorities to consider several proportionality factors when determining the actual fine amount:
- Nature, gravity, and duration of the infringement
- Intentional or negligent character of the infringement
- Actions taken to mitigate the damage suffered by affected persons
- Degree of responsibility, taking into account technical and organisational measures implemented
- Previous infringements by the same operator
- Degree of cooperation with the supervisory authority
- Manner in which the infringement became known to the authority (self-reporting vs. complaint vs. investigation)
- Size and market share of the operator
- Any other aggravating or mitigating factors applicable to the circumstances
This means that demonstrating good-faith compliance efforts — even if imperfect — can significantly reduce your fine. Conversely, ignoring the regulation entirely and then being uncooperative during an investigation will push penalties toward the maximum.
SME and Startup Provisions
The EU AI Act includes specific provisions to reduce the burden on small and medium-sized enterprises:
- Proportionate fines: For SMEs (including startups), the fine is capped at the lower of the fixed amount or the percentage of turnover. For large enterprises, it’s the higher amount. This is a meaningful difference — a company with €50 million in turnover faces a maximum Tier 1 fine of €3.5 million (7% of turnover) rather than €35 million, as the sketch after this list illustrates.
- Regulatory sandboxes: Member states must establish AI regulatory sandboxes that provide a controlled environment for SMEs to develop and test AI systems before market placement.
- Reduced fees: Conformity assessment fees and other regulatory costs should be reduced proportionately for SMEs.
- Priority access to guidance: The AI Office and national authorities are directed to provide guidance and support tailored to SMEs.
However, SME status does not exempt you from compliance. The obligations are the same — only the penalty calculation and support mechanisms differ.
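To make the higher-of/lower-of logic concrete, here is a minimal Python sketch of the fine ceilings described above. The tier caps come straight from Article 99; the function and its structure are our own illustration, not anything the Act prescribes, and actual fines are set below these ceilings using the Article 99(7) proportionality factors discussed earlier.

```python
# Illustrative sketch of the Article 99 fine ceilings (not legal advice).
# Tier caps are from the Act; names and structure are our own simplification.

TIERS = {
    1: (35_000_000, 7),  # Article 5 prohibited practices
    2: (15_000_000, 3),  # most other obligations
    3: (7_500_000, 1),   # incorrect information to authorities
}

def max_fine(tier: int, annual_turnover_eur: int, is_sme: bool) -> float:
    """Return the maximum fine ceiling for a given tier.

    Large enterprises face the HIGHER of the fixed amount and the turnover
    percentage; SMEs and startups face the LOWER (Article 99(6)).
    """
    fixed_cap, percent = TIERS[tier]
    turnover_cap = annual_turnover_eur * percent / 100
    return min(fixed_cap, turnover_cap) if is_sme else max(fixed_cap, turnover_cap)

# The worked example from the text: an SME with EUR 50M turnover faces a
# Tier 1 ceiling of EUR 3.5M (7% of turnover) rather than EUR 35M.
assert max_fine(1, 50_000_000, is_sme=True) == 3_500_000
# A large enterprise with EUR 2B turnover: 7% = EUR 140M, exceeding EUR 35M.
assert max_fine(1, 2_000_000_000, is_sme=False) == 140_000_000
```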
Comparison to GDPR Fines
The EU AI Act’s penalty structure is deliberately modelled on — and exceeds — the GDPR framework. Here’s how they compare:
| Factor | GDPR | EU AI Act |
|---|---|---|
| Maximum fine (highest tier) | €20M or 4% of global turnover | €35M or 7% of global turnover |
| Maximum fine (second tier) | €10M or 2% of global turnover | €15M or 3% of global turnover |
| Enforcement bodies | National DPAs | National market surveillance authorities + AI Office |
| Extraterritorial reach | Yes | Yes |
| Private right of action | Yes (varies by member state) | Limited: affected persons can lodge complaints with market surveillance authorities (Article 85); damages claims run through national law |
| Track record | Established enforcement since 2018 | Enforcement beginning 2025 |
The GDPR’s enforcement history is instructive. Early enforcement was slow, but fines have escalated significantly over time. The EU AI Act’s higher ceilings and broader scope suggest that enforcement will follow a similar pattern of increasing rigour as authorities build capacity and precedent.
The key lesson from GDPR: regulators prioritise cases that send a message. High-profile non-compliance, consumer complaints, and cross-border violations attract the most attention. Mid-market companies that assume they’re “too small to notice” are often the ones that get made into examples.
What to Do Now
The penalty framework is clear, the deadlines are set, and enforcement infrastructure is being built. Here’s your action plan:
1. Audit your AI portfolio immediately. Identify every AI system your organisation provides, deploys, or distributes. Include third-party tools — deployer obligations apply regardless of who built the system.
2. Classify every system. Use the Article 6 framework to determine which risk tier each system falls into. Document your reasoning. Classification errors are themselves a compliance risk (a first-pass triage sketch follows this list).
3. Prioritise prohibited practices. Article 5 prohibitions are already enforceable. If any system in your portfolio could be construed as a prohibited practice, address it now.
4. Build your compliance programme for high-risk systems. If you have high-risk systems, begin implementing Articles 9–15 requirements: risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy/robustness/cybersecurity.
5. Establish incident reporting processes. Serious incident reporting (Article 73) requires prompt notification to authorities. Don’t wait for an incident to figure out your reporting process.
6. Document everything. The proportionality factors in Article 99(7) reward organisations that can demonstrate systematic compliance efforts. Documentation is your best defence in an enforcement action.
7. Engage with your vendors. If you deploy third-party AI systems, understand what compliance obligations the provider has met and what falls to you as the deployer. Get this in writing.
8. Monitor regulatory developments. National authorities are still being designated, harmonised standards are being developed, and the Commission can amend the Annex III categories via delegated acts. Stay current.
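Step 2 is where most triage mistakes happen, so here is a hedged first-pass sketch in Python. The questions mirror the Article 6 and Annex III structure; the dataclass, its field names, and the output strings are our own illustration, not an official checklist.

```python
# Hypothetical triage helper for step 2: a first-pass Article 6 classification.
# The fields mirror the Article 5 / Article 6 / Annex III questions; this is a
# starting point for documented legal review, not a determination.

from dataclasses import dataclass

@dataclass
class AISystemFacts:
    prohibited_practice: bool       # any Article 5 practice (social scoring, etc.)
    annex_i_safety_component: bool  # safety component of a regulated product (Art. 6(1))
    annex_iii_use_case: bool        # listed use case (employment, credit, biometrics...)
    profiling_of_persons: bool      # profiling disables the Art. 6(3) derogation
    narrow_procedural_task: bool    # an Art. 6(3)-style carve-out may apply

def classify(facts: AISystemFacts) -> str:
    if facts.prohibited_practice:
        return "PROHIBITED (Article 5) - Tier 1 exposure"
    if facts.annex_i_safety_component:
        return "HIGH-RISK (Article 6(1))"
    if facts.annex_iii_use_case:
        if facts.narrow_procedural_task and not facts.profiling_of_persons:
            return "Possible Article 6(3) derogation - document the assessment"
        return "HIGH-RISK (Article 6(2) / Annex III)"
    return "Not high-risk - check transparency duties (Article 50)"

# Scenario 1 from above: an unclassified resume-screening tool that profiles applicants.
print(classify(AISystemFacts(False, False, True, True, False)))
# -> HIGH-RISK (Article 6(2) / Annex III)
```

Treat the output as a prompt for documented review rather than a conclusion; as Scenario 3 shows, a wrong “narrow procedural task” claim can stack Tier 3 penalties on top of Tier 2.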
Start your compliance journey today — The AISight assessment tool gives you a personalised compliance roadmap in under 5 minutes. Know your risk before a regulator tells you.
Conclusion
The EU AI Act’s penalty framework is not designed to be punitive for its own sake — it’s designed to ensure compliance. The proportionality factors, SME provisions, and phased timeline all reflect a regulatory approach that rewards good-faith effort and punishes wilful neglect.
But make no mistake: the fines are real, the enforcement mechanisms are being built, and the first deadlines have already passed. The organisations that invest in compliance now will be the ones that avoid enforcement actions later — and they’ll have a competitive advantage in a market that increasingly values trustworthy AI.
The cost of compliance is predictable and manageable. The cost of non-compliance is not. Choose wisely.