Is Your AI System High-Risk? How to Classify Under the EU AI Act
Risk classification is the single most consequential decision you’ll make under the EU AI Act. Get it right, and you have a clear compliance roadmap. Get it wrong, and you face penalties of up to €15 million or 3% of your global annual turnover — plus the operational chaos of retroactive compliance.
For compliance professionals at mid-market companies, the classification process can feel opaque. The regulation’s language is dense, the categories are broad, and the stakes are high. This guide breaks down exactly how classification works, walks you through every Annex III category with concrete examples, explains the exceptions that could save you significant compliance costs, and gives you a step-by-step process to classify your systems with confidence.
How Article 6 Classification Works
Article 6 of the EU AI Act defines two distinct paths through which an AI system can be classified as high-risk. Understanding both paths is essential because they operate independently — your system only needs to meet the criteria of one path to trigger the full set of high-risk obligations.
Path 1: Safety Components Under Annex I (Article 6(1))
An AI system is high-risk if both of the following conditions are met:
- The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the EU harmonised legislation listed in Annex I
- The product or safety component requires a third-party conformity assessment under that harmonised legislation before being placed on the market
Annex I covers a wide range of EU product safety legislation, including the Machinery Regulation, Toy Safety Directive, Lifts Directive, Radio Equipment Directive, Medical Devices Regulation, In Vitro Diagnostic Medical Devices Regulation, Civil Aviation Regulation, Motor Vehicle Type-Approval Regulation, and several others.
Practical example: You develop an AI system that controls the braking logic in an autonomous vehicle component. The vehicle falls under the Motor Vehicle Type-Approval Regulation (Annex I), and the braking component requires third-party conformity assessment. Your AI system is high-risk under Path 1.
Important note: The full obligations for Path 1 high-risk systems don’t take effect until 2 August 2027, giving product manufacturers an additional year compared to Path 2 systems.
Path 2: Annex III Use Cases (Article 6(2))
An AI system is high-risk if it falls into one of the use-case categories listed in Annex III. This is the path that catches most mid-market companies off guard, because the categories are broader than many expect.
Full obligations for Path 2 systems take effect on 2 August 2026.
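The two-path logic can be sketched as a small decision function. This is a simplified illustration of Article 6(1) and 6(2), not legal advice; the field names are our own shorthand, not terms from the Act.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystem:
    """Illustrative record of one AI system's classification facts."""
    is_annex_i_safety_component: bool       # safety component of (or itself) an Annex I product
    needs_third_party_conformity: bool      # Annex I legislation requires third-party assessment
    annex_iii_category: Optional[str]       # e.g. "employment", or None if no category matches

def high_risk_path(system: AISystem) -> Optional[str]:
    """Return which path makes the system high-risk, or None if neither applies."""
    # Path 1 (Article 6(1)): BOTH Annex I conditions must hold.
    if system.is_annex_i_safety_component and system.needs_third_party_conformity:
        return "path_1_annex_i"
    # Path 2 (Article 6(2)): ANY Annex III use case is enough on its own.
    if system.annex_iii_category is not None:
        return "path_2_annex_iii"
    return None
```

Note how the paths differ: Path 1 is a conjunction (both conditions required), while Path 2 needs only a single category match, which is why it catches so many companies off guard.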
Not sure which path applies to you? — Use the AISight classifier wizard to walk through the classification logic step by step.
The 8 Annex III Categories — With Examples
Annex III defines eight categories of high-risk AI use cases. Here’s each category, what it covers, and real-world examples that mid-market companies commonly encounter.
1. Biometrics
What it covers: Remote biometric identification systems (excluding one-to-one biometric verification, whose sole purpose is to confirm that a person is who they claim to be; note also that real-time remote biometric identification in publicly accessible spaces for law enforcement is largely prohibited under Article 5), biometric categorisation systems that infer sensitive or protected attributes, and emotion recognition systems.
Examples:
- A security system that remotely identifies individuals by matching faces from camera feeds against a reference database
- A customer analytics tool that infers age, gender, or ethnicity from camera feeds
- A call centre AI that analyses voice patterns to detect customer emotions
Watch out: Many off-the-shelf security and analytics tools include biometric capabilities that trigger this category, even if biometrics isn’t the primary function.
2. Critical Infrastructure
What it covers: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.
Examples:
- An AI system that manages load balancing in an electrical grid
- A predictive maintenance system for water treatment facilities
- An AI traffic management system that controls signal timing at intersections
- A network monitoring AI that manages routing in critical telecommunications infrastructure
3. Education and Vocational Training
What it covers: AI systems used to determine access or admission to educational institutions, evaluate learning outcomes (including systems that steer the learning process), assess the appropriate level of education an individual will receive, and monitor and detect prohibited behaviour during exams.
Examples:
- An AI-powered admissions screening tool that ranks university applicants
- An adaptive learning platform that determines which courses or content a student can access
- An automated essay grading system used for official assessments
- An AI proctoring system that monitors students during online exams
4. Employment, Workers Management, and Access to Self-Employment
What it covers: AI systems used for recruitment and selection (including placing targeted job advertisements and filtering applications), decisions affecting terms of work relationships (promotion, termination), task allocation based on individual behaviour or personal traits, and monitoring and evaluating the performance and behaviour of workers.
Examples:
- An AI resume screening tool that filters job applicants
- A performance management system that uses AI to recommend promotions or flag underperformers
- A workforce scheduling AI that assigns shifts based on productivity metrics
- An AI tool that monitors employee keystrokes, screen activity, or communication patterns
This is the category that catches the most mid-market companies. If you use any AI-powered HR tech — recruiting platforms, performance tools, workforce analytics — there’s a strong chance it falls here.
Check if your HR tools are high-risk — Run the AISight classifier against your HR and workforce AI systems.
5. Access to Essential Private and Public Services
What it covers: AI systems used to evaluate creditworthiness or establish credit scores (except fraud detection), risk assessment and pricing in life and health insurance, evaluation and classification of emergency calls, and assessment of eligibility for public assistance benefits.
Examples:
- A credit scoring model used by a fintech lender to approve or deny loan applications
- An AI underwriting system that prices health insurance policies based on individual risk profiles
- A triage AI in a 112 emergency call centre that prioritises emergency responses
- A government benefits platform that uses AI to determine eligibility for housing assistance
6. Law Enforcement
What it covers: AI systems used as polygraphs or to detect emotional states, assess the reliability of evidence, assess the risk of offending or reoffending, profiling in the course of detection or investigation of criminal offences, and crime analytics regarding natural persons.
Examples:
- A predictive policing system that identifies areas or individuals at higher risk of criminal activity
- An AI tool that assesses the credibility of witness statements
- A recidivism risk assessment tool used in sentencing or parole decisions
Most mid-market companies won’t encounter this category unless they sell to law enforcement agencies — but if you do, the compliance burden is significant.
7. Migration, Asylum, and Border Control Management
What it covers: AI systems used as polygraphs or to detect emotional states during border interviews, assess risks posed by individuals entering or applying for visas, assist in the examination of applications for asylum or residence permits, and identification of individuals in the context of migration.
Examples:
- An AI document verification system used by immigration authorities to assess visa applications
- A risk scoring system that flags travellers for additional screening at border crossings
8. Administration of Justice and Democratic Processes
What it covers: AI systems used to assist judicial authorities in researching and interpreting facts and the law, and AI systems used to influence the outcome of elections or referendums or the voting behaviour of natural persons.
Examples:
- An AI legal research tool used by courts to assist judges in case analysis
- An AI system that generates personalised political messaging to influence voter behaviour
The Article 6(3) Exceptions — When High-Risk Doesn’t Apply
This is where many compliance professionals breathe a sigh of relief. Article 6(3) lists four conditions, any one of which can remove an Annex III system from high-risk classification. An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making.
Specifically, the derogation applies if the AI system is intended to perform one of the following:
Exception 1: Narrow Procedural Task
The AI system performs a narrow procedural task, such as transforming unstructured data into structured data, classifying incoming documents into categories, or detecting duplicates among documents.
Example: An AI tool that automatically tags and categorises incoming support tickets by topic. It doesn’t make decisions about the tickets; it just organises them for human review.
Exception 2: Improving the Result of a Previously Completed Human Activity
The AI system is intended to improve the result of a previously completed human activity, essentially a quality-check or polish step.
Example: An AI grammar and style checker that reviews a human-written performance evaluation before it’s sent. The human made the substantive decisions; the AI just improves the writing.
Exception 3: Detecting Decision-Making Patterns
The AI system detects decision-making patterns or deviations from prior patterns, and is not meant to replace or influence a previously completed human assessment without proper human review.
Example: An AI analytics dashboard that flags anomalies in loan application patterns for a human credit analyst to review. The AI surfaces patterns; the human makes the credit decision.
Exception 4: Preparatory Task for an Assessment
The AI system performs a task that is merely preparatory to an assessment relevant to one of the Annex III use cases.
Example: An AI tool that indexes and translates case files before a human caseworker assesses an application. The AI prepares the material; the assessment itself remains human.
Critical caveat: Even if one of these exceptions applies, the provider must document why the system is not high-risk. And the exception does not apply if the AI system performs profiling of natural persons as defined in Article 4(4) of the GDPR. If your system profiles individuals, the exceptions cannot save you.
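The derogation logic reduces to a simple gate: profiling of natural persons blocks the exception outright, and otherwise any single qualifying condition under Article 6(3) suffices. A minimal sketch (the condition labels are our shorthand for the Article 6(3) sub-points, not terms from the Act):

```python
# Shorthand labels for the conditions listed in Article 6(3)(a)-(d).
ARTICLE_6_3_CONDITIONS = {
    "narrow_procedural_task",
    "improves_human_activity",
    "detects_patterns",
    "preparatory_task",
}

def exception_applies(performs_profiling: bool, conditions_met: set) -> bool:
    """Simplified Article 6(3) derogation check for an Annex III system."""
    # Profiling of natural persons (GDPR Article 4(4)) always blocks the exception.
    if performs_profiling:
        return False
    # Any one recognised condition is enough, but the analysis must be documented.
    return bool(conditions_met & ARTICLE_6_3_CONDITIONS)
```

Remember that a `True` result here is the start of your paperwork, not the end: the provider must still document the assessment before placing the system on the market.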
Unsure if an exception applies to your system? — The AISight classifier wizard walks you through the exception analysis with guided questions.
What Happens If You Classify Wrong
Misclassification is not a grey area under the EU AI Act. Article 99 sets out the penalty framework, and incorrect classification can trigger enforcement in two directions.
Classifying Too Low (Under-Classification)
If you classify a high-risk system as limited or minimal risk, you skip the mandatory requirements of Articles 9–15 — risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy/robustness/cybersecurity. When a national market surveillance authority audits your system and determines it should have been classified as high-risk, you face:
- Fines of up to €15 million or 3% of global annual turnover for non-compliance with high-risk requirements
- Mandatory corrective action, which may include withdrawing the system from the market
- Reputational damage from public enforcement actions
Classifying Too High (Over-Classification)
While there’s no direct penalty for over-classifying, the cost is operational. Complying with high-risk requirements when you don’t need to means unnecessary investment in documentation, conformity assessments, quality management systems, and ongoing monitoring. For a mid-market company, this can mean hundreds of thousands of euros in avoidable compliance costs.
Supplying Incorrect Information
If you provide incorrect, incomplete, or misleading information to national authorities or notified bodies — including information about your system’s risk classification — you face fines of up to €7.5 million or 1.5% of global annual turnover.
Step-by-Step Classification Process
Here’s a practical process to classify your AI systems:
Step 1: Inventory your AI systems. List every AI system your organisation provides, deploys, imports, or distributes. Include third-party tools and embedded AI components.
Step 2: Check against prohibited practices (Article 5). If any system falls into the unacceptable-risk category, stop. Decommission it.
Step 3: Check Path 1 — Annex I safety components. Is the AI system a safety component of a product covered by Annex I harmonised legislation? Does that product require third-party conformity assessment? If yes to both, it’s high-risk under Path 1.
Step 4: Check Path 2 — Annex III use cases. Does the AI system fall into any of the eight Annex III categories? Review each category carefully against your system’s intended purpose and actual use.
Step 5: Evaluate Article 6(3) exceptions. If the system falls under Annex III, does it perform only a narrow procedural task, improve a prior human activity, or serve as a preparatory step for human assessment? Does it profile natural persons? Document your analysis.
Step 6: Check transparency obligations (Article 50). Even if the system isn’t high-risk, does it interact directly with natural persons, generate synthetic content, or perform emotion recognition? If so, transparency obligations apply.
Step 7: Document your classification decision. Regardless of the outcome, document the reasoning behind your classification. This is your first line of defence in an audit.
Step 8: Reassess regularly. Classification isn’t a one-time exercise. Substantial modifications to the system, changes in intended purpose, or updates to the Annex III categories (which the Commission can amend via delegated acts) can change your classification.
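Steps 2 through 7 above can be sketched as one classification routine per inventoried system. This is a simplified decision model with illustrative field names (not terms from the Act), and the returned record stands in for the documentation Step 7 requires:

```python
def classify(system: dict) -> dict:
    """Walk Steps 2-7 for one inventoried system; return a documented record.

    `system` is a plain dict of answers gathered during the Step 1 inventory.
    All keys here are illustrative shorthand, not language from the Act.
    """
    record = {"name": system["name"], "classification": None, "notes": []}

    # Step 2: prohibited practices (Article 5) stop the process entirely.
    if system.get("prohibited_practice"):
        record["classification"] = "prohibited"
        record["notes"].append("Falls under Article 5; decommission.")
        return record

    # Step 3: Path 1 - Annex I safety component plus third-party conformity.
    if system.get("annex_i_safety_component") and system.get("third_party_conformity"):
        record["classification"] = "high-risk (Path 1)"
    # Step 4: Path 2 - any Annex III category.
    elif system.get("annex_iii_category"):
        # Step 5: Article 6(3) exceptions, always blocked by profiling.
        if system.get("exception_condition") and not system.get("profiles_persons"):
            record["classification"] = "not high-risk (Article 6(3) exception)"
            record["notes"].append("Document the exception analysis.")
        else:
            record["classification"] = "high-risk (Path 2)"
    else:
        record["classification"] = "not high-risk"

    # Step 6: Article 50 transparency applies regardless of risk tier.
    if system.get("interacts_with_persons") or system.get("generates_synthetic_content"):
        record["notes"].append("Article 50 transparency obligations apply.")

    # Step 7: the returned record is the written basis for the decision.
    return record
```

Step 8 is the reason this belongs in a repeatable routine rather than a one-off spreadsheet: rerun it whenever the system, its intended purpose, or Annex III changes.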
Classify your systems now — The AISight classifier wizard automates Steps 2–6 and generates a documented classification report you can use for audit readiness.
Why Classification Matters for Mid-Market Companies
Large enterprises have dedicated regulatory affairs teams and outside counsel to navigate classification. Mid-market companies typically don’t. That’s precisely why getting classification right early is so important — it determines the scope and cost of your entire compliance programme.
Classify accurately, and you invest your compliance budget where it matters. Classify carelessly, and you either overspend on unnecessary requirements or expose your organisation to enforcement risk.
The EU AI Act gives you the framework. The classification decision is yours to make — but it needs to be informed, documented, and defensible.
Start classifying today — Take the free EU AI Act assessment to understand your full compliance picture, or jump straight to the classifier wizard to classify a specific system.
Conclusion
Classification under the EU AI Act isn’t a checkbox exercise — it’s a strategic decision that shapes your compliance obligations, your budget, and your risk exposure. The two-path structure of Article 6, the breadth of Annex III, and the nuance of the Article 6(3) exceptions mean that every AI system deserves careful, documented analysis.
Don’t wait for a regulator to tell you how your system should be classified. Do the work now, document your reasoning, and build your compliance programme on a solid foundation. The August 2026 deadline for high-risk obligations is closer than it appears.