EU AI Act Compliance

The EU AI Act is the world’s first comprehensive AI regulation. If your organization deploys AI in the EU, this is what you need to know.

Overview

What Is the EU AI Act?

The EU Artificial Intelligence Act (Regulation 2024/1689) is a risk-based regulatory framework that classifies AI systems into four categories: unacceptable risk (banned), high-risk (regulated), limited risk (transparency obligations), and minimal risk (no requirements).

If your organization develops, deploys, or imports AI systems used in the EU — regardless of where you are headquartered — you are in scope.

Common Questions

Does the EU AI Act Apply to Me?

Real scenarios from companies like yours.

What if we only use AI assistants for internal productivity?

If you use AI assistants for internal productivity (emails, summaries, code suggestions) and they don’t make decisions that affect people’s rights, you are likely in the minimal risk category — no obligations beyond general transparency.

However, if AI assistants are used for HR decisions (screening CVs, evaluating employees), credit scoring, or customer-facing decisions, those specific use cases may be classified as high-risk and require a risk management system, documentation, and an inventory.

What if we use AI for credit scoring or fraud detection?

Credit scoring and fraud detection models used in financial services are explicitly listed as high-risk AI in Annex III of the EU AI Act. You need:

  • A risk management system (Art. 9)
  • Technical documentation for each model (Annex IV)
  • A quality management system (Art. 17)
  • Registration in the EU database (Art. 49)
  • Human oversight measures (Art. 14)

Step zero: you need an inventory of all these models. You can’t document, register, or manage what you haven’t catalogued.

Are AI agents covered?

It depends on what the agent does, not what it is. An AI agent that automates document processing is likely minimal risk. But an agent that makes or influences decisions about people — hiring, loan approvals, insurance claims, medical triage — falls into the high-risk category.

The key question: does the AI output affect someone’s rights, safety, or access to services? If yes, you need to treat it as high-risk regardless of whether it’s called an “agent”, “model”, or “automation”.
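That key question can be sketched as a first-pass triage helper. This is an illustrative sketch only; the use-case set and labels are assumptions for illustration, not a legal reading of Annex III:

```python
# Illustrative first-pass triage for the key question above.
# The use-case set and return labels are assumptions, not legal advice.

HIGH_RISK_USE_CASES = {
    "hiring", "loan_approval", "insurance_claims", "medical_triage",
}

def triage(use_case: str, affects_rights_safety_or_access: bool) -> str:
    """Rough screening: does the output affect rights, safety, or access?"""
    if use_case in HIGH_RISK_USE_CASES or affects_rights_safety_or_access:
        return "treat as high-risk"
    return "likely minimal risk"

print(triage("hiring", False))              # treat as high-risk
print(triage("document_processing", False)) # likely minimal risk
```

The label, not the function, is the point: the same screening applies whether the system is called an agent, a model, or an automation.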

Does the Act apply to companies headquartered outside the EU?

Yes. The EU AI Act has extraterritorial reach, similar to GDPR. If the output of your AI system is used in the EU — even if your company is headquartered in the US, UK, or elsewhere — you are in scope as a “deployer” or “provider”.

Where should we start?

Start with an AI inventory — a single, structured register of every AI system in your organization. For each system, capture: what it does, who owns it, what data it uses, its risk classification, and its EU AI Act category.
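As a sketch of what one such register entry could hold, assuming hypothetical field names (this is not an official schema):

```python
# Minimal sketch of an AI inventory record capturing the fields listed
# above. Field names are illustrative assumptions, not an official schema.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str                 # system identifier
    purpose: str              # what it does
    owner: str                # who owns it
    data_sources: list[str]   # what data it uses
    risk_classification: str  # internal risk rating
    eu_ai_act_category: str   # unacceptable / high-risk / limited / minimal

inventory = [
    AISystemRecord(
        name="CV screening assistant",
        purpose="Ranks incoming job applications",
        owner="HR Operations",
        data_sources=["applicant CVs", "job descriptions"],
        risk_classification="high",
        eu_ai_act_category="high-risk (Annex III)",
    ),
]
```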

If your team already uses Jira, Model Inventory for Jira gets you from zero to a working registry in an afternoon — including EU AI Act risk classification and compliance checklists.

What if we only buy AI from third-party vendors?

You are a “deployer” under the EU AI Act. Deployers of high-risk AI systems have obligations too — you must ensure proper use, human oversight, and monitor the system in operation (Art. 26). You still need to know what AI you’re using and document it.

The provider (vendor) handles conformity assessment and technical documentation. But you can’t outsource accountability — if the AI makes a harmful decision in your context, you share responsibility.

Timeline

Key Deadlines

February 2, 2025

Prohibited AI Practices

Ban on social scoring, manipulative AI, real-time biometric identification (with exceptions).

August 2, 2025

GPAI Rules

General-Purpose AI model obligations. Transparency, documentation, copyright compliance.

August 2, 2026 — upcoming

High-Risk AI Systems

Full compliance for Annex III high-risk AI: risk management, technical documentation, human oversight, accuracy, cybersecurity.

Requirements

What the Regulation Requires

For high-risk AI systems, organizations must implement and maintain:

  • Art. 9 (risk management system): you can’t manage the risk of systems you don’t know about
  • Art. 11 + Annex IV (technical documentation): you must know what to document
  • Art. 17 (quality management system): the QMS must enumerate the systems it covers
  • Art. 49 + Annex VIII (EU database registration): you must know what to register
  • Art. 72 (post-market monitoring): you need a bounded set of systems to monitor
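One way to see why the inventory is the prerequisite: each obligation is effectively a loop over the inventory, so a system missing from the register silently escapes every check. A conceptual sketch, with assumed field names:

```python
# Conceptual sketch: each obligation iterates over the inventory, so an
# incomplete inventory silently shrinks every compliance activity.
# Field names ("category", "has_tech_docs") are illustrative assumptions.
def systems_missing_documentation(inventory: list[dict]) -> list[str]:
    """Art. 11-style check: flag high-risk systems without technical docs."""
    return [
        s["name"]
        for s in inventory
        if s["category"] == "high-risk" and not s.get("has_tech_docs")
    ]

inventory = [
    {"name": "CV screener", "category": "high-risk", "has_tech_docs": False},
    {"name": "Spam filter", "category": "minimal", "has_tech_docs": False},
]
print(systems_missing_documentation(inventory))  # ['CV screener']
```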

The common denominator: none of these requirements can be met without knowing which AI systems you have. An AI inventory is step zero of EU AI Act compliance. Deloitte, PwC, KPMG, and Gartner all agree.

Penalties

Non-Compliance Is Not a Technicality

  • Up to €35M or 7% of global turnover: prohibited AI practices (Art. 5)
  • Up to €15M or 3% of global turnover: high-risk AI obligations, GPAI rules
  • Up to €7.5M or 1% of global turnover: false information supplied to authorities

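For these caps, the “or” resolves in the regulator’s favour: under Art. 99 the applicable maximum is the fixed amount or the turnover percentage, whichever is higher. A quick illustration:

```python
# Sketch: the applicable cap is the fixed amount or the percentage of
# global turnover, whichever is HIGHER (Art. 99). Figures in euros.
def max_fine(fixed_eur: float, pct: float, global_turnover_eur: float) -> float:
    """pct is a percentage, e.g. 7 for the 7% tier."""
    return max(fixed_eur, global_turnover_eur * pct / 100)

# Prohibited-practice tier for a company with €1bn global turnover:
print(max_fine(35_000_000, 7, 1_000_000_000))  # 70000000.0
```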
Our Solution

Start with an AI Inventory

Model Inventory for Jira gives you a compliance-ready AI registry in your existing Jira instance. Register every AI system, classify risk (EU AI Act Art. 6), track lifecycle, and build an immutable audit trail — all in 30 seconds.

If your team already uses Jira, there is no new vendor, no procurement, no training. Your AI registry is one install away.

Don’t Wait for the Deadline

The EU AI Act high-risk deadline is August 2, 2026. Start building your AI inventory today.

Get Started with Model Inventory for Jira