The EU AI Act is the world’s first comprehensive AI regulation. If your organization deploys AI in the EU, this is what you need to know.
The EU Artificial Intelligence Act (Regulation 2024/1689) is a risk-based regulatory framework that classifies AI systems into four categories: unacceptable risk (banned), high-risk (regulated), limited risk (transparency obligations), and minimal risk (no requirements).
If your organization develops, deploys, or imports AI systems used in the EU — regardless of where you are headquartered — you are in scope.
Real scenarios from companies like yours.
If you use AI assistants for internal productivity (emails, summaries, code suggestions) and they don’t make decisions that affect people’s rights, you are likely in the minimal risk category — no obligations beyond general transparency.
However, if AI assistants are used for HR decisions (screening CVs, evaluating employees), credit scoring, or customer-facing decisions, those specific use cases may be classified as high-risk and require a risk management system, documentation, and an inventory.
Credit scoring and fraud detection models used in financial services are explicitly listed as high-risk AI in Annex III of the EU AI Act, so the full set of high-risk obligations applies.
Step zero: you need an inventory of all these models. You can’t document, register, or manage what you haven’t catalogued.
It depends on what the agent does, not what it is. An AI agent that automates document processing is likely minimal risk. But an agent that makes or influences decisions about people — hiring, loan approvals, insurance claims, medical triage — falls under high-risk.
The key question: does the AI output affect someone’s rights, safety, or access to services? If yes, you need to treat it as high-risk regardless of whether it’s called an “agent”, “model”, or “automation”.
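The decision rule above can be sketched as a first-pass triage function. This is an illustrative heuristic only, not legal advice; the inputs (`affects_rights`, `annex_iii_use_case`, `interacts_with_humans`) are field names invented for the example, and real classification follows Art. 6 and Annex III with legal review.

```python
# Illustrative first-pass triage for EU AI Act risk buckets.
# NOT legal advice; field names are hypothetical examples.

def classify_risk(affects_rights: bool, annex_iii_use_case: bool,
                  interacts_with_humans: bool) -> str:
    """Return a first-pass risk bucket for an AI system.

    affects_rights: output influences someone's rights, safety, or
        access to services (hiring, credit, insurance, triage, ...).
    annex_iii_use_case: the use case appears in Annex III
        (e.g. credit scoring, CV screening).
    interacts_with_humans: people are exposed to AI-generated content.
    """
    if affects_rights or annex_iii_use_case:
        return "high-risk"       # full high-risk obligations apply
    if interacts_with_humans:
        return "limited risk"    # transparency obligations (Art. 50)
    return "minimal risk"        # no specific obligations

# An internal email summarizer with no effect on people's rights:
print(classify_risk(False, False, False))  # -> minimal risk
# The same assistant repurposed to screen CVs:
print(classify_risk(True, True, False))    # -> high-risk
```

Note that the label on the system ("agent", "model", "automation") never appears in the function: only what the output affects.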
Yes. The EU AI Act has extraterritorial reach, similar to GDPR. If the output of your AI system is used in the EU — even if your company is headquartered in the US, UK, or elsewhere — you are in scope as a “deployer” or “provider”.
Start with an AI inventory — a single, structured register of every AI system in your organization. For each system, capture: what it does, who owns it, what data it uses, its risk classification, and its EU AI Act category.
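A minimal inventory record can be as simple as one structured entry per system. Here is a sketch in Python; the field names and example system are illustrative, and you would adapt them to whatever registry tool you use:

```python
from dataclasses import dataclass

@dataclass
class AIInventoryEntry:
    """One row in the AI registry; field names are illustrative."""
    name: str                 # what the system is called internally
    purpose: str              # what it does
    owner: str                # accountable person or team
    data_sources: list[str]   # what data it uses
    risk_class: str           # "high-risk" / "limited risk" / "minimal risk"
    eu_ai_act_role: str       # "provider" or "deployer"

registry = [
    AIInventoryEntry(
        name="cv-screener",
        purpose="Ranks incoming CVs for recruiters",
        owner="HR Ops",
        data_sources=["applicant CVs", "job descriptions"],
        risk_class="high-risk",      # employment use case, Annex III
        eu_ai_act_role="deployer",
    ),
]

# Step zero is complete when every production AI system has an entry.
high_risk = [e.name for e in registry if e.risk_class == "high-risk"]
print(high_risk)  # -> ['cv-screener']
```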
If your team already uses Jira, Model Inventory for Jira gets you from zero to a working registry in an afternoon — including EU AI Act risk classification and compliance checklists.
You are a “deployer” under the EU AI Act. Deployers of high-risk AI systems have obligations too — you must ensure proper use, human oversight, and monitor the system in operation (Art. 26). You still need to know what AI you’re using and document it.
The provider (vendor) handles conformity assessment and technical documentation. But you can’t outsource accountability — if the AI makes a harmful decision in your context, you share responsibility.
- **February 2, 2025** — Bans take effect: social scoring, manipulative AI, real-time remote biometric identification (with narrow exceptions).
- **August 2, 2025** — General-Purpose AI model obligations: transparency, documentation, copyright compliance.
- **August 2, 2026** — Full compliance for Annex III high-risk AI: risk management, technical documentation, human oversight, accuracy, cybersecurity.
For high-risk AI systems, organizations must implement and maintain:
| Article | Requirement | Why inventory is prerequisite |
|---|---|---|
| Art. 9 | Risk management system | Can’t manage risk of systems you don’t know about |
| Art. 11 + Annex IV | Technical documentation | Must know what to document |
| Art. 17 | Quality management system | QMS must enumerate systems it covers |
| Art. 49 + Annex VIII | EU database registration | Must know what to register |
| Art. 72 | Post-market monitoring | Need a bounded set of systems to monitor |
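The dependency in the table can be made mechanical: every compliance activity iterates over the inventory, so a system missing from the inventory is an invisible gap. A sketch, with hypothetical system names and status flags:

```python
# Sketch: compliance checks key off the inventory, so an uncatalogued
# system is simply never checked. Names and flags are hypothetical.

inventory = {
    "credit-scoring-v2": {"risk": "high-risk", "tech_docs": True,  "registered": True},
    "fraud-detector":    {"risk": "high-risk", "tech_docs": False, "registered": False},
    "email-summarizer":  {"risk": "minimal",   "tech_docs": False, "registered": False},
}

def compliance_gaps(inventory: dict) -> list[str]:
    """List high-risk systems missing technical documentation
    (Art. 11) or EU database registration (Art. 49)."""
    gaps = []
    for name, meta in inventory.items():
        if meta["risk"] != "high-risk":
            continue  # Art. 11/49 obligations attach to high-risk systems
        if not meta["tech_docs"]:
            gaps.append(f"{name}: missing technical documentation (Art. 11)")
        if not meta["registered"]:
            gaps.append(f"{name}: not registered in EU database (Art. 49)")
    return gaps

for gap in compliance_gaps(inventory):
    print(gap)
```

The same loop pattern applies to Art. 9 risk management and Art. 72 post-market monitoring: each is a pass over a bounded, known set of systems.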
The common denominator: none of these requirements can be met without knowing which AI systems you have. An AI inventory is step zero of EU AI Act compliance. Deloitte, PwC, KPMG, and Gartner all agree.
Model Inventory for Jira gives you a compliance-ready AI registry in your existing Jira instance. Register every AI system, classify risk (EU AI Act Art. 6), track lifecycle, and build an immutable audit trail — all in 30 seconds.
If your team already uses Jira, there is no new vendor, no procurement, no training. Your AI registry is one install away.
The EU AI Act high-risk deadline is August 2, 2026. Start building your AI inventory today.
Get Started with Model Inventory for Jira