High-Risk AI Systems Under the EU AI Act: Full Guide to Definitions & Requirements

11 mins
September 4, 2025

The EU AI Act is the EU’s landmark law regulating artificial intelligence. It uses a risk-based approach: some systems are banned (unacceptable risk), some are tightly regulated (high-risk), and most face minimal or no rules. For businesses developing or deploying AI, understanding what makes an AI system high-risk is vital, because high-risk systems trigger extensive compliance requirements and heavy penalties for non-compliance. In this guide, we explain the EU AI Act high risk classification in detail: what it means, high-risk AI system examples, the obligations on providers and deployers, conformity assessment steps, exemptions, and how DPO Consulting can help you achieve compliance.

What Does “High-Risk AI” Mean Under the EU AI Act?

The EU AI Act defines “high-risk” AI systems in two complementary ways: the Article 6 definition and the Annex III criteria. Let’s look at each:

Legal Definition (Article 6)

Article 6 of the AI Act spells out when an AI system is high-risk. Paraphrasing Article 6: an AI system is high-risk if it is a safety component of a product that already falls under EU sector rules (like the Machinery Directive, Medical Devices Regulation, etc.) and thus must undergo third-party conformity assessment. Likewise, if the AI system itself is a product subject to EU harmonisation (e.g., an AI-powered medical device) and that product requires third-party checks, then the AI system is high-risk.

Crucially, Article 6 also states that any system listed in Annex III is considered high-risk.

The Act says all Annex III AI systems “shall be considered high-risk” unless they meet narrow exemption criteria (Article 6(3)). If a provider concludes that an Annex III system is not actually high-risk, they must document that assessment and register the system in the EU database before placing it on the market.

Criteria for High-Risk Classification (Annex III Categories)

Annex III is essentially an AI Act high-risk list by category. In practice, an AI system will be classified as high-risk if its intended use falls into any Annex III sector, unless it demonstrably poses no significant risk of harm (the Article 6(3) exemption). The listed sectors (contexts) in Annex III are:

  • Biometrics: AI for remote biometric ID or emotion recognition in public or sensitive contexts. (E.g., facial recognition by police.)

  • Critical Infrastructure: AI managing energy, transport, and utilities where failure could endanger lives.

  • Education/Vocational Training: AI deciding school admissions, exam scoring, or job training outcomes.

  • Employment/Worker Management: AI used for recruiting, CV screening, performance evaluation, etc.

  • Access to Essential Services: AI for social benefits, credit scoring, insurance, healthcare triage, etc.

  • Law Enforcement: AI for crime prediction, evidence evaluation, polygraphs and similar lie-detection tools, etc.

  • Migration and Border Control: AI for asylum, visa evaluation, and predicting migration risks.

  • Justice & Democracy: AI for legal decisions, jury assignment, or tools affecting elections.

These categories come directly from Annex III and are echoed in regulators’ official summaries. In short, if your AI is used in any of these sensitive areas (e.g., AI-powered hiring software, a credit risk engine, security-camera face ID, robot-assisted surgery), it’s presumptively high-risk.

Because the Annex III categories cut across many sectors, businesses should carefully map their AI use cases against the EU AI Act risk categories. (The Act itself defines four EU AI Act risk levels – unacceptable, high, limited, minimal – each carrying different obligations. High-risk is the second-highest tier.)

Even if an AI use isn’t explicitly listed, the Commission can amend Annex III over time to capture comparable use cases that pose similar risks, so borderline systems deserve close scrutiny. Conversely, if an AI use falls outside these categories (e.g., a chatbot on your website), it will likely fall into limited or minimal risk. A rough first-pass triage of this mapping is sketched below.
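As a purely illustrative aid – not legal advice – the following Python sketch shows how a compliance team might run a first-pass triage of use-case descriptions against the Annex III categories above. The category keywords, `RiskTier` enum, and `triage_use_case` helper are hypothetical names invented for this example; a real assessment turns on intended purpose and context, including the Article 6(3) exemption test.

```python
# Hypothetical first-pass triage of AI use cases against Annex III categories.
# Keyword lists are illustrative, not an official taxonomy.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


ANNEX_III_KEYWORDS = {
    "biometrics": ["facial recognition", "emotion recognition", "biometric id"],
    "critical_infrastructure": ["power grid", "water supply", "traffic control"],
    "education": ["exam scoring", "admissions"],
    "employment": ["cv screening", "recruiting", "performance evaluation"],
    "essential_services": ["credit scoring", "social benefits", "insurance pricing"],
    "law_enforcement": ["crime prediction", "evidence evaluation"],
    "migration_border": ["visa evaluation", "asylum"],
    "justice_democracy": ["sentencing support", "election targeting"],
}


def triage_use_case(description: str) -> tuple[RiskTier, str | None]:
    """Flag a use case as presumptively high-risk if it matches an
    Annex III category; everything else defaults to minimal pending review."""
    text = description.lower()
    for category, keywords in ANNEX_III_KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return RiskTier.HIGH, category
    return RiskTier.MINIMAL, None


print(triage_use_case("CV screening tool for recruiting engineers"))
# (<RiskTier.HIGH: 'high'>, 'employment')
```

Keyword matching like this only surfaces candidates for human and legal review; it cannot substitute for a proper classification assessment.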

Examples of High-Risk AI Systems

To make this concrete, here are varied high-risk AI system examples across industries:

  • Automotive (Transportation): An AI module for autonomous driving or vehicle safety (e.g., collision avoidance) is high-risk. It’s a safety component under vehicle regulations.

  • Healthcare: AI that diagnoses diseases from medical images, recommends treatments, or controls medical devices (e.g., ventilators or surgical robots) is high-risk. Such AI qualifies as a medical device or a safety component of one.

  • Finance/Credit: Credit scoring AI and algorithmic lending decision tools are high-risk because they determine access to loans and essential services. Similarly, AI underwriting insurance that affects pricing could be high-risk under “access to essential services.”

  • Human Resources: Recruitment software that filters candidates, scores resumes, or automates hiring is EU AI Act high risk. It may lead to discrimination or unfair bias in employment decisions.

  • Education: AI tools that grade exams or decide student admissions are high-risk, since they directly affect educational opportunities and career paths.

  • Public Services: AI deciding eligibility for social benefits, immigration permits, or welfare programs is high-risk. For example, an AI that denies someone unemployment benefits would fall under Annex III.

  • Security & Law Enforcement: Facial recognition systems used for identifying people in public or predictive policing algorithms are high-risk. Even emotion recognition AI in airports or workplaces is restricted.

  • Infrastructure: AI controlling power grids, water systems, or traffic networks is high-risk (critical infra), because malfunctions could threaten public safety.

  • Justice/Democracy: AI that assists judges, recommends sentences, or targets political ads can be high-risk, as these affect fundamental rights and democratic processes.

These are just samples. Any AI with a significant public impact or safety role should be evaluated. When in doubt, consult the Annex III high-risk list and consider the real-world impact of your AI.

Obligations for High-Risk AI Providers & Deployers

If an AI system is classified as EU AI Act high risk, a range of strict rules kicks in. Providers (developers or brand owners of the AI), deployers (organizations using the AI in their operations), importers, and distributors each have duties. Below, we break down the key requirements.

Pre-Market Requirements

Before a high-risk AI system can be launched in the EU, providers must satisfy stringent pre-market obligations. This means:

  • Risk Management: Set up a comprehensive risk-management system (Article 9). Continuously identify, assess, and mitigate the risks of the AI (discrimination, safety, bias, etc.). Follow standards or best practices for AI risk management.

  • Data Quality and Governance: Ensure the training data is relevant, representative, and as free of errors and biases as possible (Article 10). Maintain records of data sources and preprocessing. DPO Consulting advises applying Privacy by Design principles here.

  • Technical Documentation: Create and keep detailed technical documentation (Annex IV/Article 11). This includes system architecture, algorithms used, risk assessments, test results, intended purpose, and performance metrics: essentially, all the information needed to show compliance. The provider must be ready to hand this over to the authorities on request.

  • Declaration of Conformity and CE Marking: Compile the EU Declaration of Conformity and affix the CE marking to the AI system/product. This formalizes that you have met all the Act’s requirements. (Notably, if you market the AI yourself or put your trademark on it, you become the “provider” and bear full obligations.)

  • Record-Keeping and DPIA: Conduct a Data Protection Impact Assessment (DPIA) if your AI processes personal data (commonly the case). Keep logs of the AI’s operation (Article 19) and maintain records of testing and incidents. Providers should maintain a quality management system and keep a point of contact available. Where personal data is involved, GDPR and AI best practices should be applied together.

  • Independent Conformity Assessment: Follow the mandated conformity assessment procedure (Article 43, Annexes VI/VII; see next section). This might mean a self-assessment or a notified-body review, depending on the system. Pass this step before market entry.

Put simply, the requirements for high-risk AI systems demand rigorous design, documentation, and verification steps. Providers must prove before marketing that they have built safe, fair, and transparent AI – for example, by providing clear user instructions and building in human oversight measures. Failing to complete these pre-market steps means you cannot legally sell the AI in the EU. The sketch below illustrates how these obligations can be tracked as a simple pre-market gate.
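As a minimal sketch (assuming a plain Python workflow, with field names invented to mirror the bullets above rather than taken from the Act), the obligations can be modeled as a checklist that must be fully satisfied before market entry:

```python
# Hypothetical pre-market compliance gate; field names mirror the bullets above.
from dataclasses import dataclass, fields


@dataclass
class PreMarketChecklist:
    risk_management_system: bool = False        # Article 9
    data_governance_reviewed: bool = False      # Article 10
    technical_documentation: bool = False       # Article 11 / Annex IV
    declaration_and_ce_marking: bool = False    # EU Declaration of Conformity + CE mark
    dpia_completed: bool = False                # where personal data is processed
    conformity_assessment_passed: bool = False  # Article 43


def outstanding_items(checklist: PreMarketChecklist) -> list[str]:
    """Return the names of any pre-market obligations still unmet."""
    return [f.name for f in fields(checklist) if not getattr(checklist, f.name)]


gate = PreMarketChecklist(risk_management_system=True, dpia_completed=True)
print(outstanding_items(gate))
# ['data_governance_reviewed', 'technical_documentation',
#  'declaration_and_ce_marking', 'conformity_assessment_passed']
```

In practice, each flag would be backed by evidence (documents, test reports, sign-offs) rather than a boolean, but the gate logic is the same: nothing ships until the list is empty.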

Ongoing Requirements

EU AI Act high risk obligations continue after launch. Providers and deployers must actively monitor and maintain the AI system:

  • Post-Market Monitoring: Providers must establish a post-market monitoring system (Article 72). They collect and analyze system performance data, logs, and any incidents or malfunctions. If new risks emerge or performance degrades, providers must take corrective actions. For example, if a bias issue is discovered in the AI’s output, you must update the model or training process.

  • Updates and Changes: Significant updates or modifications to a high-risk AI require a new conformity assessment. The Act defines “substantial modification” broadly (changes in functionality or intended purpose). Continuous learning that changes the system’s behavior beyond what was pre-determined and declared at the initial assessment can also trigger reassessment.

  • Incident Reporting: Both providers and deployers must report serious incidents and malfunctions to the authorities (Article 73). For instance, if a high-risk AI system causes a safety issue or rights violation, authorities must be notified so they can coordinate a remedy.

  • Continual Transparency: Providers must keep all documentation up-to-date and available. If a deployer raises safety or non-compliance concerns, the provider must address them. Importantly, providers must “immediately inform competent authorities of any new risks or breaches”.

Deployers (organizations using high-risk AI) also have duties:

  • Use As Intended: Deployers must use the AI according to the provider’s instructions and ensure it is operated safely. This means following user manuals, not altering the system in unauthorized ways, and providing adequate human oversight. For example, if the AI is an exam grader, the school must have qualified humans review outcomes.

  • Notify Subjects: When an AI system (listed in Annex III) makes decisions affecting individuals (e.g., credit denial), deployers must inform those people that an AI is involved. Transparency obligations include disclosing how the AI works in understandable terms.

  • Logging and Review: Deployers must keep track of the AI’s outputs and their own actions (e.g., logs of decisions made and overrides); a minimal logging sketch follows this section. They should perform regular audits to check the AI’s performance and fairness over time.

  • Cooperate with Authorities: Both providers and deployers must cooperate with regulators and provide information on request. If a distributor or importer raises concerns, deployers should verify that the systems they use are in fact compliant.

Overall, ongoing requirements emphasize vigilance. The EU AI Act expects a lifecycle approach: manage risks before, during, and after deployment. If a deployer introduces a new high-risk use or modifies the AI’s function, they become a “provider” and take on full obligations (so changes in use can also “move” a system to high-risk status).
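To make the record-keeping duties concrete, here is a minimal, hypothetical Python sketch of a structured decision log of the kind that supports Article 19 logging, deployer audits, and post-market monitoring. The event fields (`system_id`, `subject_ref`, `human_override`) are illustrative assumptions, not terms defined by the Act:

```python
# Hypothetical structured audit log for AI-assisted decisions.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("high_risk_ai_audit")


def log_decision(system_id: str, subject_ref: str, output: str,
                 human_override: bool, reviewer: str | None = None) -> None:
    """Append one auditable record per AI-assisted decision, including
    whether a human reviewer overrode the system's output."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "subject_ref": subject_ref,  # pseudonymous reference, not raw personal data
        "output": output,
        "human_override": human_override,
        "reviewer": reviewer,
    }
    logger.info(json.dumps(event))


log_decision("credit-scoring-v2", "applicant-8842", "declined",
             human_override=True, reviewer="analyst-17")
```

Logs like these give deployers the raw material for periodic fairness audits and give providers the performance data their post-market monitoring plan requires.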

Role of Importers, Distributors, and Users

Importers and distributors of high-risk AI also play a part in compliance:

  • Importers (bringing AI products into the EU) must verify that the AI has undergone the required conformity assessment and bears the CE mark. They must keep documentation (declaration of conformity, technical file) for 10 years and ensure storage/transport conditions maintain compliance. If they suspect non-compliance, they cannot place the AI on the EU market until issues are fixed.

  • Distributors (resellers) have similar duties: before selling, they must check that the AI has a CE mark and EU declaration. If a distributor finds an AI system that might not meet AI Act standards, they must halt the sale and inform the provider/importer. They must also ensure that storage and transport do not impair safety. If a problem is found post-sale, distributors must help withdraw or recall the system and inform authorities as needed.

Users (end-users under deployers) should be trained on the AI’s proper operation and be aware of its status as high-risk. While the law focuses on providers/deployers, in practice, it’s wise for all users to understand key compliance cues (e.g., CE mark, safety instructions).

Conformity Assessment Procedures for High-Risk AI

A central pillar of EU AI Act compliance is the conformity assessment process that providers must follow. In practice, there are two main routes, depending on how the AI is classified:

  • Internal Control (Self-Assessment, Annex VI): For most high-risk AI systems listed in Annex III (e.g., an AI algorithm for credit scoring, recruitment, or education), providers will carry out an internal conformity assessment. They follow the procedure in Annex VI, which involves verifying that they have met all requirements (risk management, documentation, etc.) but does not involve a notified body. Essentially, the provider checks itself against the rules in a quality-management-based process.

  • Third-Party Assessment (Annex I/EU Law): If existing EU laws cover the AI (Annex I cases), for example, an AI-controlled medical device or an AI safety component of a car, then the conformity assessment procedure under that law applies. Often, this means engaging a notified body (an accredited reviewer) to examine the product. The notified body ensures that all AI Act requirements (Section 2 of Chapter III) are also met as part of that assessment. Notified bodies already approved under, say, the Medical Device Regulation can be used to certify compliance.

After completing the appropriate procedure, providers must draw up the EU Declaration of Conformity and affix the CE mark before marketing the system. This CE mark signals to EU users and authorities that the AI system meets all high-risk standards. 
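The routing decision between the two paths can be illustrated with the hypothetical sketch below. It is a deliberate simplification of the two routes described above (some cases, such as certain biometric systems, can still require notified-body involvement, so treat edge cases as a question for legal counsel):

```python
# Hypothetical routing of a high-risk AI system to its conformity assessment path.
def conformity_route(covered_by_annex_i_law: bool) -> str:
    if covered_by_annex_i_law:
        # e.g., an AI medical device or vehicle safety component: the sectoral
        # conformity assessment applies, usually involving a notified body.
        return "third-party assessment via notified body (sectoral law + AI Act)"
    # Most standalone Annex III systems: provider self-assessment.
    return "internal control per Annex VI (self-assessment)"


print(conformity_route(covered_by_annex_i_law=False))
# internal control per Annex VI (self-assessment)
```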

Exemptions and Reclassifications

Not all AI use is covered by the high-risk regime. The Act carves out some exemptions and flexible rules:

  • R&D and Testing: AI systems solely used for scientific research, development, or as part of compliance/testing programs may be exempt from many obligations. For example, AI in a lab setting (not placed on the market or used commercially) generally falls outside the Act. Similarly, personal and non-professional AI use is exempt. If you plan to test a high-risk AI in a pilot or sandbox, you may use the regulatory sandbox framework (Article 57) or Article 60 provisions to limit certain obligations. Always document any such exemption claim carefully.

  • Article 6(3) Narrow Cases: As noted, even Annex III systems might not count as high-risk if they do very limited tasks with minimal effect. Examples include an AI that only suggests improvements to a completed human decision without altering it. In these cases, providers must document and justify that the system meets all exemption criteria.

  • Risk Level Changes: Over time, an AI’s risk status can change. If you redesign a system so that it no longer qualifies under Annex III or the safety-component test, you might legitimately drop it from the high-risk category. However, you must update your compliance documentation to reflect the new classification. Conversely, adding a high-risk feature or deploying the AI in a new high-risk context will make it high-risk; you must then comply from that point forward.

  • Substantial Modifications: As mentioned, any significant update to a high-risk AI (new features, major ML retraining, a changed purpose) triggers a fresh conformity assessment, even when made by the original provider. This ensures that the updated system still meets all standards.

It is important to treat reclassification with caution. If your system moves from high-risk to low-risk, you must still be able to demonstrate that change to the authorities. This may involve performing new risk assessments, updating the Technical Documentation, or removing the CE mark. Likewise, moving from low-risk to high-risk (e.g., a free AI tool being commercialized in healthcare) immediately imposes the full compliance regime. Document all such transitions diligently.
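As a final illustrative aid, a provider might gate releases on a substantial-modification check like the hypothetical sketch below. The change categories are assumptions chosen for this example, not a list taken from the Act:

```python
# Hypothetical pre-release gate for the "substantial modification" rule.
SUBSTANTIAL_CHANGE_TYPES = {
    "new_intended_purpose",
    "new_high_risk_context",
    "retraining_outside_declared_plan",
    "new_safety_relevant_feature",
}


def needs_reassessment(change_types: set[str]) -> bool:
    """True if any planned change falls into a category that should
    send the system back through conformity assessment before release."""
    return bool(change_types & SUBSTANTIAL_CHANGE_TYPES)


print(needs_reassessment({"ui_copy_update"}))                    # False
print(needs_reassessment({"retraining_outside_declared_plan"}))  # True
```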

How DPO Consulting Can Support High-Risk AI Compliance

Navigating the EU AI Act high risk requirements can be complex. DPO Consulting offers specialized services to help organizations comply effectively. We can help you with AI risk assessments, policy and procedure development, and continuous audits and updates as part of our EU AI regulation compliance services.

We evaluate your AI systems to determine if they fall under “EU AI Act high risk.” This includes mapping to Annex III categories, assessing data privacy impacts, and identifying compliance gaps. You’ll gain clarity on which AI is high-risk and what needs fixing. Conformity assessment preparation is a crucial aspect of high-risk AI compliance: we help you from drafting the Technical Documentation (Annex IV) to liaising with notified bodies. Our experience with EU regulatory practice means we can help you efficiently achieve CE marking and prepare the EU Declaration of Conformity for signing.

Our experts also deliver hands-on EU AI Act training for your leadership, developers, and compliance teams. We cover topics like the requirements for high-risk AI systems, DPIAs for AI, and linking AI compliance to broader frameworks (e.g., integrating cybersecurity governance into your overall security strategy). Customized coaching ensures your staff is prepared to implement the new rules.

DPO Consulting can turn high-risk AI compliance from a headache into a managed process. Our holistic approach covers GDPR and AI, Privacy by Design, DPIAs, and AI governance. By partnering with us, organizations can not only meet their legal obligations but also build trust in their AI deployment.

Conclusion

The EU AI Act’s high-risk classification marks a turning point for businesses using advanced AI. Any system that significantly affects health, safety, or fundamental rights must meet rigorous requirements – from risk assessments and documentation to transparency and oversight. Understanding whether your AI is high-risk under the EU AI Act is the first step. If it is, you face a full compliance regime similar to existing product safety regulations. The cost of ignoring these rules can be severe, so proactive adaptation is key.

Staying compliant with the EU AI Act, including all requirements for high-risk AI systems, ensures you minimize legal risk and bolster public trust. As the AI regulatory landscape evolves, organizations should build on best practices (e.g., those in GDPR and AI: Best Practices) and treat compliance as an ongoing program, not a one-time project. With the right approach and expertise, high-risk AI can be developed and used responsibly, unlocking innovation while protecting society.

FAQs

What qualifies an AI system as high‑risk under the EU AI Act?

An AI system is high‑risk if it either serves as a safety component of (or is itself) a product requiring third‑party certification under EU harmonisation law (per Article 6), or its intended use falls under one of the sensitive Annex III categories (e.g., employment, credit scoring, law enforcement).

Are all biometric or facial recognition tools considered high‑risk?

No. Only systems that perform real‑time or remote biometric identification/categorization in public or sensitive contexts (e.g., law enforcement, border control) are high‑risk; private or limited uses (like unlocking your phone) are not.

What documentation is required for high‑risk AI systems?

Providers must maintain a full Technical Documentation dossier (Article 11/Annex IV), including system design, data governance, risk assessments, test results, user instructions, an EU Declaration of Conformity, and operational logs (Article 19).

Can a system move from high‑risk to low‑risk?

Yes. If its functionality or context changes so it no longer meets Article 6 or Annex III criteria, it can be reclassified, provided you document the change; conversely, adding a high‑risk use triggers full compliance again.

What penalties apply for non‑compliance with high‑risk AI obligations?

Penalties are tiered: prohibited AI practices can incur fines up to €35 million or 7 percent of global annual turnover, while non‑compliance with high‑risk obligations can draw fines up to €15 million or 3 percent of turnover, alongside orders to withdraw or disable the system and reputational and liability risks under national enforcement.
