
Is Your AI System High-Risk Under the EU AI Act? A Practical Classification Guide

The difference between "minimal risk" and "high-risk" under the EU AI Act is the difference between zero obligations and a 12-month compliance project. Getting this classification wrong in either direction is expensive: over-classify and you waste months on unnecessary documentation; under-classify and you face fines of up to €15 million or 3% of worldwide annual turnover, whichever is higher.

This guide gives you a practical decision tree to classify every AI system in your organization. No legal jargon. Real examples. Concrete next steps.

The Four Risk Categories

| Category | What it means | Your obligations | Example |
| --- | --- | --- | --- |
| Unacceptable | Banned outright | Decommission immediately | Social scoring by governments |
| High-risk | Significant impact on people's rights | Full compliance regime | AI screening job applicants |
| Limited risk | Interacts directly with people | Transparency/disclosure | Customer service chatbot |
| Minimal risk | Everything else | None | Spam filter, recommendation engine |

Most of your AI systems are minimal risk. The entire compliance effort concentrates on the handful that aren't.
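If you track your AI inventory programmatically, the four tiers map naturally onto a small enum. A minimal Python sketch (the names and sample inventory are illustrative, not an official taxonomy):

```python
from enum import Enum

class RiskCategory(Enum):
    """The four EU AI Act risk tiers, from most to least restrictive."""
    UNACCEPTABLE = "unacceptable"  # Art. 5 - banned outright
    HIGH = "high"                  # Art. 6 / Annexes I and III - full compliance regime
    LIMITED = "limited"            # Art. 50 - transparency obligations
    MINIMAL = "minimal"            # everything else - no obligations

# Example: tag each system in your inventory with a tier
inventory = {
    "spam_filter": RiskCategory.MINIMAL,
    "support_chatbot": RiskCategory.LIMITED,
    "resume_screener": RiskCategory.HIGH,
}
```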

Step 1: Is It Banned? (Article 5)

Before anything else, check whether your AI system falls into the "unacceptable risk" category. These are prohibited as of February 2, 2025 — not August 2026.

Prohibited practices:

  - Subliminal or purposefully manipulative techniques that materially distort behavior and cause significant harm
  - Exploiting vulnerabilities related to age, disability, or social or economic situation
  - Social scoring that leads to detrimental or unfavorable treatment
  - Predicting the risk that a person will commit a crime based solely on profiling or personality traits
  - Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
  - Emotion recognition in the workplace or in education (except for medical or safety reasons)
  - Biometric categorization to infer sensitive attributes such as race, political opinions, religion, or sexual orientation
  - Real-time remote biometric identification in publicly accessible spaces for law enforcement (narrow exceptions apply)
If any of your AI systems do this → decommission immediately. No exceptions, no phase-in period.

Step 2: Is It High-Risk? (Article 6)

High-risk classification comes from two sources:

Path A: Safety Component of a Regulated Product (Annex I)

If your AI system is a safety component of a product covered by EU harmonized legislation — medical devices, machinery, vehicles — it's high-risk automatically.

Path B: Standalone High-Risk Use Cases (Annex III)

This is where most organizations need to pay attention. Annex III lists eight areas:

  1. Biometrics — Remote biometric identification, biometric categorization, emotion recognition
  2. Critical Infrastructure — Safety components in digital infrastructure, AI controlling road traffic, water, gas, electricity
  3. Education and Vocational Training — AI determining access to education, evaluating learning outcomes, proctoring
  4. Employment — AI for recruitment screening, decisions on promotion/termination, performance evaluation
  5. Access to Essential Services — Credit scoring, insurance risk assessment, emergency call evaluation, patient triage
  6. Law Enforcement — Risk assessment of criminal behavior, polygraphs, evidence assessment, profiling
  7. Migration, Asylum, and Border Control — Risk assessment, document authentication, application examination
  8. Administration of Justice and Democratic Processes — AI assisting judicial authorities in researching and applying the law to facts, AI intended to influence elections or referenda

Step 3: Check the Safety Valve (Article 6(3))

Not every AI system listed in Annex III is automatically high-risk. Article 6(3) provides a derogation: an Annex III system is NOT high-risk if it does not pose a significant risk to health, safety, or fundamental rights because it:

  - Performs only a narrow procedural task
  - Improves the result of a previously completed human activity
  - Detects decision-making patterns or deviations from them, without replacing or influencing a prior human assessment
  - Performs a purely preparatory task for an assessment relevant to an Annex III use case
Critical caveat: This exception does NOT apply if the AI system performs profiling of natural persons.

If you use this exception, you must register your reasoning in the EU database.
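As a sanity check, the derogation logic can be sketched in a few lines of Python. The flag names are illustrative (one per Article 6(3) condition); the key point the code captures is that profiling voids the exception regardless of the other conditions:

```python
def article_6_3_exception_applies(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_without_replacing_review: bool,
    preparatory_task_only: bool,
    performs_profiling: bool,
) -> bool:
    """Return True if an Annex III system escapes high-risk status under Art. 6(3)."""
    # Profiling of natural persons voids the derogation entirely
    if performs_profiling:
        return False
    # Otherwise, at least one qualifying condition must hold
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_without_replacing_review,
        preparatory_task_only,
    ])
```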

Step 4: What About ChatGPT, Copilot, and Other GPAI?

If you're a deployer (you use ChatGPT/Copilot but didn't build it): The heavy obligations fall on the provider (OpenAI, Microsoft) — not you. Your obligation: use according to instructions, maintain logs, ensure transparency.

If employees use ChatGPT for general tasks: Typically minimal risk. But you still need AI literacy training (Article 4).

The integration trap: If you integrate GPAI into a system making decisions in an Annex III area (e.g., GPT-4 in a hiring tool), the resulting system is high-risk.

Real-World Classification Examples

| AI System | Classification | Why |
| --- | --- | --- |
| AI resume screening tool | High-risk (Annex III, Cat. 4) | Makes employment decisions |
| Customer service chatbot | Limited risk (Art. 50) | Must disclose it's AI |
| Credit scoring model | High-risk (Annex III, Cat. 5) | Assesses creditworthiness |
| Internal document summarization | Minimal risk | No impact on rights |
| Medical imaging diagnostic AI | High-risk (Annex I) | Safety component of a medical device |
| Email spam filter | Minimal risk | No significant impact |
| AI-powered proctoring software | High-risk (Annex III, Cat. 3) | Monitors student behavior |
| Product recommendation engine | Minimal risk | No significant impact on rights |
| AI insurance risk pricing | High-risk (Annex III, Cat. 5) | Affects access to insurance |
| Content moderation AI | Limited risk | Transparency obligations |

The Decision Tree (Summary)

START: Is your AI system deployed in the EU?
β”‚
β”œβ”€ NO β†’ EU AI Act does not apply
β”‚
└─ YES β†’ Does it perform a prohibited practice (Art. 5)?
    β”‚
    β”œβ”€ YES β†’ STOP. Decommission immediately.
    β”‚
    └─ NO β†’ Is it a safety component of an Annex I regulated product?
        β”‚
        β”œβ”€ YES β†’ HIGH-RISK (Annex I path)
        β”‚
        └─ NO β†’ Is it used in an Annex III area?
            β”‚
            β”œβ”€ YES β†’ Does Art. 6(3) exception apply?
            β”‚   β”‚
            β”‚   β”œβ”€ YES (and no profiling) β†’ NOT HIGH-RISK
            β”‚   β”‚   (but register your reasoning in EU database)
            β”‚   β”‚
            β”‚   └─ NO β†’ HIGH-RISK (Annex III path)
            β”‚
            └─ NO β†’ Does it interact directly with people?
                β”‚
                β”œβ”€ YES β†’ LIMITED RISK (transparency obligations)
                β”‚
                └─ NO β†’ MINIMAL RISK (no obligations)

What to Do After Classification

If you found high-risk systems (most mid-market companies find 0–3):

  1. These systems need full compliance: documented risk management, technical documentation, quality management system, EU database registration, and post-market monitoring
  2. Start with the inventory — you can't register what you haven't cataloged
  3. Deadline: August 2, 2026 (unless the Omnibus delay is adopted)

If all your systems are minimal or limited risk:

  1. You still need AI literacy training (Article 4 — mandatory since February 2025)
  2. Chatbots need transparency disclosure
  3. Build an inventory anyway — boards, investors, and auditors are asking regardless

For everyone: An AI inventory is the foundation. You need to know what you have before you can classify it, govern it, or register it.


Classify and register your AI systems before the deadline

Model Inventory for Jira gives you a compliance-ready AI registry with EU AI Act risk classification, dynamic risk tiering, and Annex VIII field mapping — inside your existing Jira.

Try Free for 30 Days