
Is Your AI System High-Risk? How to Classify Under the EU AI Act


Risk classification is the single most consequential decision you’ll make under the EU AI Act. Get it right, and you have a clear compliance roadmap. Get it wrong, and you face penalties of up to €15 million or 3% of your global annual turnover, whichever is higher, plus the operational chaos of retroactive compliance.

For compliance professionals at mid-market companies, the classification process can feel opaque. The regulation’s language is dense, the categories are broad, and the stakes are high. This guide breaks down exactly how classification works, walks you through every Annex III category with concrete examples, explains the exceptions that could save you significant compliance costs, and gives you a step-by-step process to classify your systems with confidence.

How Article 6 Classification Works

Article 6 of the EU AI Act defines two distinct paths through which an AI system can be classified as high-risk. Understanding both paths is essential because they operate independently — your system only needs to meet the criteria of one path to trigger the full set of high-risk obligations.

Path 1: Safety Components Under Annex I (Article 6(1))

An AI system is high-risk if both of the following conditions are met:

  1. The AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the EU harmonised legislation listed in Annex I
  2. The product or safety component requires a third-party conformity assessment under that harmonised legislation before being placed on the market

Annex I covers a wide range of EU product safety legislation, including the Machinery Regulation, Toy Safety Directive, Lifts Directive, Radio Equipment Directive, Medical Devices Regulation, In Vitro Diagnostic Medical Devices Regulation, Civil Aviation Regulation, Motor Vehicle Type-Approval Regulation, and several others.

Practical example: You develop an AI system that controls the braking logic in an autonomous vehicle component. The vehicle falls under the Motor Vehicle Type-Approval Regulation (Annex I), and the braking component requires third-party conformity assessment. Your AI system is high-risk under Path 1.
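For teams that track classification in code, the two-condition test can be expressed as a short function. The sketch below is illustrative only: the function signature and the abridged Annex I list are our own, and the legal test is the one quoted above.

```python
# Sketch of the Article 6(1) two-condition test (Path 1).
# The data model and names here are hypothetical; the Act's text and your
# product-safety counsel provide the authoritative analysis.

ANNEX_I_EXAMPLES = {
    "machinery_regulation",
    "toy_safety_directive",
    "lifts_directive",
    "radio_equipment_directive",
    "medical_devices_regulation",
    "ivd_medical_devices_regulation",
    "civil_aviation_regulation",
    "motor_vehicle_type_approval_regulation",
}  # abridged; Annex I lists further instruments


def is_high_risk_path_1(
    is_safety_component_or_product: bool,
    applicable_annex_i_legislation: set[str],
    requires_third_party_conformity_assessment: bool,
) -> bool:
    """Both Article 6(1) conditions must hold for Path 1 to apply."""
    condition_1 = is_safety_component_or_product and bool(
        applicable_annex_i_legislation & ANNEX_I_EXAMPLES
    )
    condition_2 = requires_third_party_conformity_assessment
    return condition_1 and condition_2


# The braking-logic example above: covered product, third-party assessment.
assert is_high_risk_path_1(
    True, {"motor_vehicle_type_approval_regulation"}, True
)
```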

Important note: The full obligations for Path 1 high-risk systems don’t take effect until 2 August 2027, giving product manufacturers an additional year compared to Path 2 systems.

Path 2: Annex III Use Cases (Article 6(2))

An AI system is high-risk if it falls into one of the use-case categories listed in Annex III. This is the path that catches most mid-market companies off guard, because the categories are broader than many expect.

Full obligations for Path 2 systems take effect on 2 August 2026.

Not sure which path applies to you? — Use the AISight classifier wizard to walk through the classification logic step by step.

The 8 Annex III Categories — With Examples

Annex III defines eight categories of high-risk AI use cases. Here’s each category, what it covers, and real-world examples that mid-market companies commonly encounter.

1. Biometrics

What it covers: Remote biometric identification systems (real-time remote biometric identification in publicly accessible spaces for law enforcement is prohibited outright under Article 5, subject to narrow exceptions), biometric categorisation systems that infer sensitive or protected attributes, and emotion recognition systems (themselves prohibited in workplaces and educational institutions).

Examples:

  - A security platform that identifies individuals in recorded CCTV footage by matching faces against a reference database
  - A contact-centre analytics tool that infers customer emotions from voice or facial expressions during support interactions
  - A video analytics feature that categorises individuals according to inferred sensitive or protected attributes

Watch out: Many off-the-shelf security and analytics tools include biometric capabilities that trigger this category, even if biometrics isn’t the primary function.

2. Critical Infrastructure

What it covers: AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, and electricity.

Examples:

  - A load-balancing system for an electricity distribution grid whose failure could disrupt supply
  - An AI-based traffic management system that controls signal timing on road networks
  - An anomaly-detection system acting as a safety component in a water treatment plant

3. Education and Vocational Training

What it covers: AI systems used to determine access to or admission to educational institutions, evaluate learning outcomes (including systems that steer the learning process), and monitor prohibited behaviour during exams.

Examples:

  - An admissions tool that scores and ranks applicants to a university or training programme
  - An adaptive learning platform that evaluates learning outcomes and steers each student’s learning path
  - Remote proctoring software that monitors students for prohibited behaviour during exams

4. Employment, Workers Management, and Access to Self-Employment

What it covers: AI systems used for recruitment and selection, decisions affecting terms of work relationships (promotion, termination), task allocation based on individual behaviour or personal traits, and monitoring and evaluating performance and behaviour of workers.

Examples:

  - A recruiting platform that places targeted job advertisements or screens, filters, and ranks CVs
  - Performance-management software whose scores feed into promotion or termination decisions
  - Workforce management tools that allocate tasks or shifts based on individual behaviour or personal traits

This is the category that catches the most mid-market companies. If you use any AI-powered HR tech — recruiting platforms, performance tools, workforce analytics — there’s a strong chance it falls here.

Check if your HR tools are high-risk — Run the AISight classifier against your HR and workforce AI systems.

5. Access to Essential Private and Public Services

What it covers: AI systems used to evaluate creditworthiness or establish credit scores (except fraud detection), risk assessment and pricing in life and health insurance, evaluation and classification of emergency calls, and assessment of eligibility for public assistance benefits.

Examples:

  - A credit-scoring model used to approve or decline consumer loans
  - A pricing engine that performs individual risk assessment for life or health insurance
  - A triage system that evaluates and classifies emergency calls
  - A tool that assesses citizens’ eligibility for public assistance benefits

6. Law Enforcement

What it covers: AI systems used as polygraphs or to detect emotional states, assess the reliability of evidence, assess the risk of offending or reoffending, profiling in the course of detection or investigation of criminal offences, and crime analytics regarding natural persons.

Examples:

  - A risk-assessment tool that estimates the likelihood that a person will offend or reoffend
  - Software that assesses the reliability of evidence in the course of an investigation
  - Crime-analytics systems that profile natural persons during the detection or investigation of offences

Most mid-market companies won’t encounter this category unless they sell to law enforcement agencies — but if you do, the compliance burden is significant.

7. Migration, Asylum, and Border Control Management

What it covers: AI systems used as polygraphs or to detect emotional states during border interviews, assess risks posed by individuals entering or applying for visas, assist in the examination of applications for asylum or residence permits, and identification of individuals in the context of migration.

Examples:

  - A risk-scoring system applied to visa applicants by border authorities
  - Software that assists caseworkers in examining asylum or residence-permit applications
  - Identification systems used to verify or identify individuals in the migration context

8. Administration of Justice and Democratic Processes

What it covers: AI systems used to assist judicial authorities in researching and interpreting facts and the law, and AI systems used to influence the outcome of elections or referendums or the voting behaviour of natural persons.

Examples:

  - A legal-research assistant that helps judicial authorities research and interpret facts and law, and apply them to a concrete case
  - A political-campaign tool whose output is designed to influence how specific voters behave in an election or referendum
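If you maintain your AI inventory programmatically, the eight categories can be modelled as a simple enumeration for first-pass screening. A minimal sketch follows; the labels paraphrase the Annex III headings, and the tag vocabulary is invented for illustration.

```python
from enum import Enum


class AnnexIIICategory(Enum):
    """Paraphrased Annex III headings; the Act's wording is authoritative."""
    BIOMETRICS = 1
    CRITICAL_INFRASTRUCTURE = 2
    EDUCATION_AND_VOCATIONAL_TRAINING = 3
    EMPLOYMENT_AND_WORKERS_MANAGEMENT = 4
    ESSENTIAL_SERVICES = 5
    LAW_ENFORCEMENT = 6
    MIGRATION_ASYLUM_BORDER_CONTROL = 7
    JUSTICE_AND_DEMOCRATIC_PROCESSES = 8


def annex_iii_categories(system_tags: set[str]) -> set[AnnexIIICategory]:
    """Map hypothetical inventory tags to candidate Annex III categories.

    Real screening needs a human review of the system's intended purpose;
    this lookup only flags systems for that review.
    """
    tag_map = {
        "recruiting": AnnexIIICategory.EMPLOYMENT_AND_WORKERS_MANAGEMENT,
        "credit_scoring": AnnexIIICategory.ESSENTIAL_SERVICES,
        "exam_proctoring": AnnexIIICategory.EDUCATION_AND_VOCATIONAL_TRAINING,
        "emotion_recognition": AnnexIIICategory.BIOMETRICS,
    }
    return {tag_map[t] for t in system_tags if t in tag_map}


print(annex_iii_categories({"recruiting", "chatbot"}))
# {<AnnexIIICategory.EMPLOYMENT_AND_WORKERS_MANAGEMENT: 4>}
```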

The Article 6(3) Exceptions — When High-Risk Doesn’t Apply

This is where many compliance professionals breathe a sigh of relief. Article 6(3) provides four exceptions that can remove an Annex III system from high-risk classification. An AI system listed in Annex III is not considered high-risk if it does not pose a significant risk of harm to health, safety, or fundamental rights, including by not materially influencing the outcome of decision-making.

Specifically, the exception applies if the AI system is intended to perform one of the following:

Exception 1: Narrow Procedural Task

The AI system performs a narrow procedural task — for example, transforming unstructured data into structured data, classifying incoming documents into categories, or detecting duplicates among documents.

Example: An AI tool that automatically tags and categorises incoming support tickets by topic. It doesn’t make decisions about the tickets — it just organises them for human review.

Exception 2: Improving the Result of a Previously Completed Human Activity

The AI system is intended to improve the result of a previously completed human activity — essentially a quality-check or polish step.

Example: An AI grammar and style checker that reviews a human-written performance evaluation before it’s sent. The human made the substantive decisions; the AI just improves the writing.

Exception 3: Detecting Decision-Making Patterns

The AI system detects decision-making patterns or deviations from prior patterns, and is not meant to replace or influence a previously completed human assessment without proper human review.

Example: An AI analytics dashboard that flags anomalies in loan application patterns for a human credit analyst to review. The AI surfaces patterns; the human makes the credit decision.

Exception 4: Preparatory Task for an Assessment

The AI system performs a task that is merely preparatory to an assessment relevant to one of the Annex III use cases.

Example: A document-processing tool that extracts and organises the contents of an application file before a human assessor evaluates it. The assessment itself remains a human activity.

Critical caveat: Even if one of these exceptions applies, the provider must document its assessment of why the system is not high-risk before placing it on the market, and register the system in the EU database. And the exceptions do not apply if the AI system performs profiling of natural persons as defined in Article 4(4) of the GDPR: an Annex III system that profiles individuals is always considered high-risk.
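The exception analysis reduces to a small decision function: an Annex III system stays high-risk unless one of the four conditions applies, and profiling always overrides them. Here is a minimal sketch of that structure, with condition names of our own choosing.

```python
# Sketch of the Article 6(3) exception structure. Condition names are
# illustrative shorthand, not the Act's wording.
ARTICLE_6_3_EXCEPTIONS = (
    "narrow_procedural_task",
    "improves_prior_human_activity",
    "detects_decision_patterns_with_human_review",
    "preparatory_task_for_assessment",
)


def annex_iii_system_is_high_risk(
    exception_claimed: str | None,
    performs_profiling_of_natural_persons: bool,
) -> bool:
    """An Annex III system stays high-risk unless a listed exception
    applies, and profiling of natural persons defeats every exception."""
    if performs_profiling_of_natural_persons:
        return True  # Article 6(3): profiling systems are always high-risk
    return exception_claimed not in ARTICLE_6_3_EXCEPTIONS


# The support-ticket tagger: narrow procedural task, no profiling.
assert not annex_iii_system_is_high_risk("narrow_procedural_task", False)
# The same tool profiling individual agents would remain high-risk.
assert annex_iii_system_is_high_risk("narrow_procedural_task", True)
```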

Unsure if an exception applies to your system? — The AISight classifier wizard walks you through the exception analysis with guided questions.

What Happens If You Classify Wrong

Misclassification is not a grey area under the EU AI Act. Article 99 sets out the penalty framework, and incorrect classification can trigger enforcement in two directions.

Classifying Too Low (Under-Classification)

If you classify a high-risk system as limited or minimal risk, you skip the mandatory requirements of Articles 9–15: risk management, data governance, technical documentation, logging, transparency, human oversight, and accuracy/robustness/cybersecurity. When a national market surveillance authority audits your system and determines it should have been classified as high-risk, you face:

  - Orders to bring the system into compliance within a set deadline, or to withdraw or recall it
  - Administrative fines of up to €15 million or 3% of global annual turnover, whichever is higher
  - The disruption of retrofitting the full Articles 9–15 requirements under enforcement pressure

Classifying Too High (Over-Classification)

While there’s no direct penalty for over-classifying, the cost is operational. Complying with high-risk requirements when you don’t need to means unnecessary investment in documentation, conformity assessments, quality management systems, and ongoing monitoring. For a mid-market company, this can mean hundreds of thousands of euros in avoidable compliance costs.

Supplying Incorrect Information

If you provide incorrect, incomplete, or misleading information to national authorities or notified bodies — including information about your system’s risk classification — you face fines of up to €7.5 million or 1% of global annual turnover, whichever is higher.
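All three fine tiers follow the same pattern, a fixed cap or a percentage of worldwide turnover, so maximum exposure is easy to estimate. A minimal sketch using the Article 99 figures discussed above (note that for SMEs and startups, Article 99(6) applies whichever amount is lower):

```python
# Article 99 fine tiers: fixed cap in euros and percentage of worldwide
# annual turnover. For most undertakings the higher amount applies;
# Article 99(6) flips this to the lower amount for SMEs and startups.
FINE_TIERS = {
    "prohibited_practices": (35_000_000, 0.07),   # Article 5 violations
    "high_risk_obligations": (15_000_000, 0.03),  # e.g. Articles 9-15
    "incorrect_information": (7_500_000, 0.01),   # misleading authorities
}


def max_exposure(tier: str, annual_turnover_eur: float, sme: bool = False) -> float:
    cap, pct = FINE_TIERS[tier]
    amounts = (cap, pct * annual_turnover_eur)
    return min(amounts) if sme else max(amounts)


# A company with EUR 600M turnover: 3% = EUR 18M, which exceeds the EUR 15M cap.
print(f"{max_exposure('high_risk_obligations', 600_000_000):,.0f}")  # 18,000,000
```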

Step-by-Step Classification Process

Here’s a practical process to classify your AI systems:

Step 1: Inventory your AI systems. List every AI system your organisation provides, deploys, imports, or distributes. Include third-party tools and embedded AI components.

Step 2: Check against prohibited practices (Article 5). If any system falls into the unacceptable-risk category, stop. Decommission it.

Step 3: Check Path 1 — Annex I safety components. Is the AI system a safety component of a product covered by Annex I harmonised legislation? Does that product require third-party conformity assessment? If yes to both, it’s high-risk under Path 1.

Step 4: Check Path 2 — Annex III use cases. Does the AI system fall into any of the eight Annex III categories? Review each category carefully against your system’s intended purpose and actual use.

Step 5: Evaluate Article 6(3) exceptions. If the system falls under Annex III, does it perform only a narrow procedural task, improve a prior human activity, or serve as a preparatory step for human assessment? Does it profile natural persons? Document your analysis.

Step 6: Check transparency obligations (Article 50). Even if the system isn’t high-risk, does it interact directly with natural persons, generate synthetic content, or perform emotion recognition? If so, transparency obligations apply.

Step 7: Document your classification decision. Regardless of the outcome, document the reasoning behind your classification. This is your first line of defence in an audit.

Step 8: Reassess regularly. Classification isn’t a one-time exercise. Substantial modifications to the system, changes in intended purpose, or updates to the Annex III categories (which the Commission can amend via delegated acts) can change your classification.
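Taken together, Steps 2 through 6 form a decision cascade, and Step 7 asks you to record the reasoning. The sketch below strings those checks into one function that returns a label plus a reasoning log; every input flag stands in for a human legal judgment, so treat it as documentation scaffolding rather than a substitute for analysis.

```python
def classify(
    prohibited: bool,              # Step 2: Article 5 screen
    path_1_high_risk: bool,        # Step 3: Annex I safety-component test
    annex_iii_match: bool,         # Step 4: any Annex III category applies
    exception_applies: bool,       # Step 5: an Article 6(3) exception holds
    performs_profiling: bool,      # Step 5: GDPR Article 4(4) profiling
    transparency_triggered: bool,  # Step 6: Article 50 duties
) -> tuple[str, list[str]]:
    """Return (classification, reasoning log) for the audit file (Step 7)."""
    log: list[str] = []
    if prohibited:
        log.append("Falls under an Article 5 prohibited practice.")
        return "prohibited", log
    if path_1_high_risk:
        log.append("Annex I safety component requiring third-party assessment.")
        return "high-risk (Path 1)", log
    if annex_iii_match:
        if performs_profiling or not exception_applies:
            log.append("Annex III use case; no valid Article 6(3) exception.")
            return "high-risk (Path 2)", log
        log.append("Annex III use case, but an Article 6(3) exception applies; "
                   "assessment documented per Article 6(4).")
    if transparency_triggered:
        log.append("Article 50 transparency obligations apply.")
        return "limited risk (transparency)", log
    return "minimal risk", log


label, reasons = classify(False, False, True, True, False, True)
print(label, reasons)  # limited risk (transparency), with the documented trail
```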

Classify your systems now — The AISight classifier wizard automates Steps 2–6 and generates a documented classification report you can use for audit readiness.

Why Classification Matters for Mid-Market Companies

Large enterprises have dedicated regulatory affairs teams and outside counsel to navigate classification. Mid-market companies typically don’t. That’s precisely why getting classification right early is so important — it determines the scope and cost of your entire compliance programme.

Classify accurately, and you invest your compliance budget where it matters. Classify carelessly, and you either overspend on unnecessary requirements or expose your organisation to enforcement risk.

The EU AI Act gives you the framework. The classification decision is yours to make — but it needs to be informed, documented, and defensible.

Start classifying today — Take the free EU AI Act assessment to understand your full compliance picture, or jump straight to the classifier wizard to classify a specific system.

Conclusion

Classification under the EU AI Act isn’t a checkbox exercise — it’s a strategic decision that shapes your compliance obligations, your budget, and your risk exposure. The two-path structure of Article 6, the breadth of Annex III, and the nuance of the Article 6(3) exceptions mean that every AI system deserves careful, documented analysis.

Don’t wait for a regulator to tell you how your system should be classified. Do the work now, document your reasoning, and build your compliance programme on a solid foundation. The August 2026 deadline for high-risk obligations is closer than it appears.