Is Your AI System High-Risk Under the EU AI Act? A Practical Classification Guide
The difference between "minimal risk" and "high-risk" under the EU AI Act is the difference between zero obligations and a 12-month compliance project. Getting this classification wrong in either direction is expensive: over-classify and you waste months on unnecessary documentation; under-classify and you face fines of up to €15 million or 3% of global annual turnover, whichever is higher.
This guide gives you a practical decision tree to classify every AI system in your organization. No legal jargon. Real examples. Concrete next steps.
The Four Risk Categories
| Category | What it means | Your obligations | Example |
|---|---|---|---|
| Unacceptable | Banned outright | Decommission immediately | Social scoring by governments |
| High-risk | Significant impact on people's rights | Full compliance regime | AI screening job applicants |
| Limited risk | Interacts directly with people | Transparency/disclosure | Customer service chatbot |
| Minimal risk | Everything else | None | Spam filter, recommendation engine |
Most of your AI systems are minimal risk. The entire compliance effort concentrates on the handful that aren't.
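If you track systems in a machine-readable inventory, the four tiers map naturally onto a small data type. Here's a minimal Python sketch; the names `RiskCategory` and `OBLIGATIONS` are illustrative, not terms from the Act:

```python
from enum import Enum

class RiskCategory(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (Article 5)
    HIGH = "high"                  # full compliance regime (Article 6, Annexes I/III)
    LIMITED = "limited"            # transparency/disclosure duties (Article 50)
    MINIMAL = "minimal"            # no obligations under the Act

# The obligations column of the table above, keyed by tier
OBLIGATIONS = {
    RiskCategory.UNACCEPTABLE: "Decommission immediately",
    RiskCategory.HIGH: "Full compliance regime",
    RiskCategory.LIMITED: "Transparency/disclosure",
    RiskCategory.MINIMAL: "None",
}
```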
Step 1: Is It Banned? (Article 5)
Before anything else, check whether your AI system falls into the "unacceptable risk" category. These are prohibited as of February 2, 2025 — not August 2026.
Prohibited practices:
- Social scoring — Evaluating people based on social behavior leading to detrimental treatment
- Exploitation of vulnerabilities — AI exploiting a person's age, disability, or social or economic situation to materially distort their behavior
- Real-time remote biometric identification in public spaces by law enforcement (with narrow exceptions)
- Emotion recognition in the workplace or educational institutions (except for medical or safety reasons)
- Biometric categorization to infer race, political opinions, trade union membership, religious beliefs, or sexual orientation
- Untargeted facial image scraping from the internet or CCTV
- Predictive policing based solely on profiling
If any of your AI systems do this → decommission immediately. No exceptions, no phase-in period.
Step 2: Is It High-Risk? (Article 6)
High-risk classification comes from two sources:
Path A: Safety Component of a Regulated Product (Annex I)
If your AI system is a safety component of a product covered by EU harmonized legislation listed in Annex I (medical devices, machinery, vehicles) and that product requires third-party conformity assessment, it's high-risk automatically.
Path B: Standalone High-Risk Use Cases (Annex III)
This is where most organizations need to pay attention. Annex III lists eight areas:
- Biometrics — Remote biometric identification, biometric categorization, emotion recognition
- Critical Infrastructure — Safety components in digital infrastructure, AI controlling road traffic, water, gas, electricity
- Education and Vocational Training — AI determining access to education, evaluating learning outcomes, proctoring
- Employment — AI for recruitment screening, decisions on promotion/termination, performance evaluation
- Access to Essential Services — Credit scoring, insurance risk assessment, emergency call evaluation, patient triage
- Law Enforcement — Risk assessment of criminal behavior, polygraphs, evidence assessment, profiling
- Migration, Asylum, and Border Control — Risk assessment, document authentication, application examination
- Administration of Justice and Democratic Processes — AI assisting courts in researching facts and applying the law, AI intended to influence elections
Step 3: Check the Safety Valve (Article 6(3))
Not every AI system listed in Annex III is automatically high-risk. Article 6(3) provides a derogation. An Annex III system is NOT high-risk if it:
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns without replacing human assessment
- Performs a preparatory task to an assessment
Critical caveat: This exception does NOT apply if the AI system performs profiling of natural persons.
If you rely on this exception, you must document your assessment and register the system in the EU database.
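The exception logic is mechanical enough to encode: any one of the four conditions lifts an Annex III system out of high-risk, but profiling overrides all of them. Here's a minimal sketch of that logic in Python, with hypothetical field names:

```python
from dataclasses import dataclass

@dataclass
class Annex3System:
    """An AI system used in one of the eight Annex III areas."""
    narrow_procedural_task: bool
    improves_prior_human_activity: bool
    detects_patterns_without_replacing_human: bool
    preparatory_task_only: bool
    performs_profiling: bool  # profiling of natural persons

def article_6_3_exception_applies(s: Annex3System) -> bool:
    """True if the Article 6(3) derogation lifts the system out of high-risk."""
    if s.performs_profiling:
        return False  # profiling voids the exception, no matter what else is true
    return (
        s.narrow_procedural_task
        or s.improves_prior_human_activity
        or s.detects_patterns_without_replacing_human
        or s.preparatory_task_only
    )
```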
Step 4: What About ChatGPT, Copilot, and Other GPAI?
If you're a deployer (you use ChatGPT/Copilot but didn't build it): The heavy obligations fall on the provider (OpenAI, Microsoft) — not you. Your obligations: use the system according to its instructions, maintain logs, ensure transparency.
If employees use ChatGPT for general tasks: Typically minimal risk. But you still need AI literacy training (Article 4).
The integration trap: If you integrate GPAI into a system making decisions in an Annex III area (e.g., GPT-4 in a hiring tool), the resulting system is high-risk.
Real-World Classification Examples
| AI System | Classification | Why |
|---|---|---|
| AI resume screening tool | High-risk (Annex III, Cat. 4) | Makes employment decisions |
| Customer service chatbot | Limited risk (Art. 50) | Must disclose it's AI |
| Credit scoring model | High-risk (Annex III, Cat. 5) | Assesses creditworthiness |
| Internal document summarization | Minimal risk | No impact on rights |
| Medical imaging diagnostic AI | High-risk (Annex I) | Safety component of medical device |
| Email spam filter | Minimal risk | No significant impact |
| AI-powered proctoring software | High-risk (Annex III, Cat. 3) | Monitors student behavior |
| Product recommendation engine | Minimal risk | No significant impact on rights |
| AI insurance risk pricing | High-risk (Annex III, Cat. 5) | Affects access to insurance |
| Content moderation AI | Limited risk | Transparency obligations |
The Decision Tree (Summary)
```
START: Is your AI system deployed in the EU?
│
├─ NO → EU AI Act does not apply
│
└─ YES → Does it perform a prohibited practice (Art. 5)?
   │
   ├─ YES → STOP. Decommission immediately.
   │
   └─ NO → Is it a safety component of an Annex I regulated product?
      │
      ├─ YES → HIGH-RISK (Annex I path)
      │
      └─ NO → Is it used in an Annex III area?
         │
         ├─ YES → Does Art. 6(3) exception apply?
         │  │
         │  ├─ YES (and no profiling) → NOT HIGH-RISK
         │  │        (but register your reasoning in EU database)
         │  │
         │  └─ NO → HIGH-RISK (Annex III path)
         │
         └─ NO → Does it interact directly with people?
            │
            ├─ YES → LIMITED RISK (transparency obligations)
            │
            └─ NO → MINIMAL RISK (no obligations)
```
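For anyone encoding this tree in an inventory tool, here is the same logic as a single function. It's a simplified sketch of the legal test, not a substitute for it, and every field name is illustrative:

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    deployed_in_eu: bool
    prohibited_practice: bool       # any Article 5 practice
    annex_i_safety_component: bool  # safety component of a regulated product
    annex_iii_area: bool            # used in one of the eight Annex III areas
    art_6_3_exception: bool         # one of the four derogation conditions holds
    performs_profiling: bool        # profiling of natural persons
    interacts_with_people: bool     # chatbots, AI-generated content, etc.

def classify(s: AISystem) -> str:
    if not s.deployed_in_eu:
        return "OUT OF SCOPE: EU AI Act does not apply"
    if s.prohibited_practice:
        return "UNACCEPTABLE: decommission immediately (Art. 5)"
    if s.annex_i_safety_component:
        return "HIGH-RISK (Annex I path)"
    if s.annex_iii_area:
        if s.art_6_3_exception and not s.performs_profiling:
            return "NOT HIGH-RISK (Art. 6(3)): register reasoning in EU database"
        return "HIGH-RISK (Annex III path)"
    if s.interacts_with_people:
        return "LIMITED RISK (transparency obligations, Art. 50)"
    return "MINIMAL RISK (no obligations)"
```

Run against the examples table above, a resume-screening tool (`annex_iii_area=True`, no exception) comes out high-risk via the Annex III path, while a spam filter falls through every branch to minimal risk.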
What to Do After Classification
If you found high-risk systems (most mid-market companies find 0–3):
- These systems need full compliance: documented risk management, technical documentation, quality management system, EU database registration, and post-market monitoring
- Start with the inventory — you can't register what you haven't cataloged
- Deadline: August 2, 2026 (unless the proposed Digital Omnibus delay is adopted)
If all your systems are minimal or limited risk:
- You still need AI literacy training (Article 4 — mandatory since February 2025)
- Chatbots need transparency disclosure
- Build an inventory anyway — boards, investors, and auditors are asking regardless
For everyone: An AI inventory is the foundation. You need to know what you have before you can classify it, govern it, or register it.
Model Inventory for Jira gives you a compliance-ready AI registry with built-in EU AI Act risk classification, dynamic risk tiering, and Annex VIII field mapping — all inside your existing Jira. Learn more →