What Your AI Model Inventory Actually Needs: A Field-by-Field Guide
The most common failure mode for AI inventories isn't missing systems — it's tracking the wrong information. Organizations build a spreadsheet, add a few columns (name, owner, status), call it an inventory, and discover six months later that it answers none of the questions regulators actually ask.
The EU AI Act requires 13 specific data points for EU database registration (Annex VIII). SR 11-7 expects 17 categories of information. ISO 42001, NIST AI RMF, and every consulting firm from Deloitte to McKinsey have their own recommendations. Synthesizing all of this into a usable set of fields is a project in itself.
This article does that work for you: twenty-four fields, organized into three tiers (must-have, should-have, and future-ready), each mapped to the regulation that requires it.
Why Fields Matter More Than You Think
An AI inventory serves three audiences, and each one needs different data:
- Regulators need structured, exportable data matching Annex VIII (EU AI Act) or SR 11-7 categories. If your inventory can't produce this on request, you're not compliant — you're just organized.
- Your board and management need portfolio-level answers: How many AI systems? What risk tiers? Who owns them? Which ones process personal data? A pile of unstructured notes doesn't answer these questions.
- Your operational teams need actionable metadata: When is the next review due? What's the current performance? Who do I call when something breaks?
Most inventories serve only one of these audiences. The right set of fields serves all three.
Tier 1: Must-Have Fields (10 Fields)
These are non-negotiable. Without them, you can't meet basic regulatory requirements or answer fundamental governance questions.
1. System Name and Identifier
- What: Human-readable name + unique internal ID for traceability
- Required by: EU AI Act Annex VIII A.4, SR 11-7
- Why: You need an unambiguous way to reference each system — in registration forms, incident reports, audit findings, and conversation. "The AI thing in marketing" doesn't work.
2. Description and Intended Purpose
- What: What the system does, what business problem it solves, what it's used for
- Required by: EU AI Act Annex VIII A.5, Art. 9, SR 11-7
- Why: Annex VIII specifically asks for "intended purpose" and "supported components and functions." This is also the basis for risk classification — you can't assess risk without understanding purpose.
3. AI System Type
- What: Classification of the underlying technology: ML model, GenAI/LLM, rule-based system, hybrid, agentic AI
- Required by: EU AI Act Annex VIII A.6 (operating logic), SR 11-7 (design theory and methodologies)
- Why: Risk profiles differ dramatically by type. A rule-based pricing engine has different failure modes than an LLM-powered customer agent. Your governance cadence and monitoring approach depend on this.
4. Risk Classification
- What: Two fields — EU AI Act category (Prohibited / High-risk / Limited / Minimal) and internal risk tier (High / Medium / Low)
- Required by: EU AI Act Art. 6, SR 11-7 (risk tiering)
- Why: This drives everything. High-risk systems need full documentation, registration, and ongoing monitoring. Minimal risk systems need nothing. If you don't classify, you either over-invest or under-comply.
5. Lifecycle Status
- What: Current state: Draft / Active / Monitoring / Retired
- Required by: EU AI Act Annex VIII A.7, SR 11-7 (model status)
- Why: Regulators require you to track status across the lifecycle, including recently retired systems. A "retired" model that still runs in a forgotten production server is a compliance gap.
6. Business Owner
- What: Named individual accountable for the system's business value and compliance
- Required by: SR 11-7 (ownership and responsibilities), EU AI Act Art. 17 (QMS requires assigned responsibilities)
- Why: When the regulator asks "who is responsible for this AI system?" — you need a name, not a department. Business owner is the first line of defense in the three-lines-of-defense model.
7. Technical Owner
- What: Named individual responsible for the system's operation, maintenance, and performance
- Required by: SR 11-7, best practice (ModelOp MVG, NIST AI RMF)
- Why: The business owner knows why the system exists. The technical owner knows how it works. You need both.
8. Data Sensitivity
- What: Classification of input data: Public / Internal / Confidential / PII / PHI
- Required by: EU AI Act Art. 10 (data governance), GDPR
- Why: Data sensitivity is a primary input to risk assessment. A model processing public product data and one processing employee health records can't be governed the same way. This field also feeds your DPIA obligations.
9. Provider Information
- What: Who built or provides the AI system — internal team, vendor name, or both
- Required by: EU AI Act Annex VIII A.1 (provider identification), SR 11-7
- Why: Your role (provider vs. deployer) determines your obligation level under the AI Act. For vendor systems, you need to track their compliance status and documentation.
10. Compliance Status
- What: Current compliance state: Compliant / In Progress / Non-Compliant / Not Assessed
- Required by: EU AI Act Art. 17 (QMS documentation), best practice
- Why: The portfolio view. Your board wants to know: "What percentage of our AI systems are compliance-ready?" This field, combined with risk tier, gives you that answer instantly.
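The ten Tier 1 fields can be sketched as a single structured record. This is an illustrative schema only — the class name, field names, and allowed values below are assumptions, not a mandated format:

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "High"
    MEDIUM = "Medium"
    LOW = "Low"

class LifecycleStatus(Enum):
    DRAFT = "Draft"
    ACTIVE = "Active"
    MONITORING = "Monitoring"
    RETIRED = "Retired"

@dataclass
class AISystemRecord:
    system_id: str                     # 1. unique internal identifier
    name: str                          # 1. human-readable name
    description: str                   # 2. intended purpose
    system_type: str                   # 3. e.g. "GenAI/LLM", "ML model", "rule-based"
    eu_ai_act_category: str            # 4. Prohibited / High-risk / Limited / Minimal
    internal_risk_tier: RiskTier       # 4. internal tier
    lifecycle_status: LifecycleStatus  # 5.
    business_owner: str                # 6. named individual, not a department
    technical_owner: str               # 7. named individual
    data_sensitivity: str              # 8. Public / Internal / Confidential / PII / PHI
    provider: str                      # 9. internal team or vendor
    compliance_status: str             # 10. Compliant / In Progress / Non-Compliant / Not Assessed

# Hypothetical example record:
record = AISystemRecord(
    system_id="AI-0042", name="Customer Support Agent",
    description="LLM assistant answering support tickets",
    system_type="GenAI/LLM", eu_ai_act_category="Limited",
    internal_risk_tier=RiskTier.MEDIUM,
    lifecycle_status=LifecycleStatus.ACTIVE,
    business_owner="J. Smith", technical_owner="A. Doe",
    data_sensitivity="PII", provider="Internal",
    compliance_status="In Progress",
)
```

Using enums rather than free strings for risk tier and lifecycle status pays off later: it makes the portfolio queries regulators and boards ask for trivially reliable.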
Tier 2: Should-Have Fields (8 Fields)
These fields move you from "checkbox compliance" to operational governance. They're what separates an inventory that collects dust from one that actually drives decisions.
11. Deployment Date
- What: When the system entered production
- Required by: EU AI Act Annex VIII A.10 (status), SR 11-7
- Why: Establishes your compliance timeline. Systems deployed before August 2, 2026 that undergo "significant changes" after that date trigger full compliance obligations.
12. Last Review Date
- What: When this system was last formally reviewed for performance, risk, and compliance
- Required by: SR 11-7 (ongoing monitoring), EU AI Act Art. 72
- Why: Without this, you can't enforce review cadences. Quarterly for high-risk, annually for the rest.
13. Next Scheduled Review
- What: When the next formal review is due
- Required by: Best practice (SR 11-7 cadence, EU AI Act Art. 9 iterative risk management)
- Why: Proactive governance vs. reactive. If you only know when you last reviewed, you'll miss the next one. This field powers your "overdue reviews" dashboard.
14. Human Oversight Level
- What: How much human control exists: Human-in-the-loop (human decides) / Human-on-the-loop (human monitors) / Autonomous (human not involved)
- Required by: EU AI Act Art. 14 (human oversight), SR 11-7
- Why: One of the strongest risk indicators. An autonomous credit scoring system requires different controls than one where a human reviews every decision. Also directly affects your risk tier calculation.
15. Business Impact
- What: Consequence of system failure: Low / Medium / High / Critical
- Required by: SR 11-7 (materiality assessment), best practice
- Why: Not all AI systems matter equally. A failed recommendation engine costs you conversions. A failed credit scoring model costs you regulatory action. Business impact, combined with data sensitivity and human oversight, is how you calculate dynamic risk tiers.
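One way such a dynamic tier calculation might look, combining data sensitivity, human oversight, and business impact. The point weights and thresholds here are purely illustrative assumptions, not taken from any regulation:

```python
# Hypothetical scoring: each factor maps to points, the total maps to a tier.
SENSITIVITY_SCORE = {"Public": 0, "Internal": 1, "Confidential": 2, "PII": 3, "PHI": 4}
OVERSIGHT_SCORE = {"Human-in-the-loop": 0, "Human-on-the-loop": 1, "Autonomous": 2}
IMPACT_SCORE = {"Low": 0, "Medium": 1, "High": 2, "Critical": 3}

def dynamic_risk_tier(sensitivity: str, oversight: str, impact: str) -> str:
    score = (SENSITIVITY_SCORE[sensitivity]
             + OVERSIGHT_SCORE[oversight]
             + IMPACT_SCORE[impact])
    if score >= 6:
        return "High"
    if score >= 3:
        return "Medium"
    return "Low"

# An autonomous system on health data with critical impact lands in the top tier:
print(dynamic_risk_tier("PHI", "Autonomous", "Critical"))  # High (4 + 2 + 3 = 9)
```

The advantage of a formula over gut feel: when an auditor asks why a system is tiered "Medium", you can point to the inputs rather than a judgment call.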
16. Geographic Scope
- What: Where the system is deployed and processes data: EU member states, US, global
- Required by: EU AI Act Annex VIII A.10 (member states), GDPR
- Why: Determines which regulations apply. An AI system deployed only in the US doesn't trigger EU AI Act obligations (unless it processes EU citizens' data in certain contexts).
17. Documentation Links
- What: Pointers to technical documentation, risk assessment, validation reports, FRIA, DPIA
- Required by: EU AI Act Art. 11 + Annex IV, Art. 18 (10-year retention), SR 11-7
- Why: The inventory is the index, not the library. Documents live elsewhere (Confluence, SharePoint, Google Drive). But without a centralized pointer from the inventory record, documents get orphaned.
18. Known Limitations
- What: Documented constraints, failure modes, and conditions where the system underperforms
- Required by: SR 11-7 (model limitations and assumptions), EU AI Act Art. 13 (transparency)
- Why: Every AI system has limitations. Documenting them isn't admitting weakness — it's demonstrating governance maturity. SR 11-7 is explicit: "Documentation should describe the model's limitations and assumptions."
Tier 3: Future-Ready Fields (6 Fields)
These fields prepare you for where AI governance is heading — agentic AI, dependency tracking, and board-level reporting.
19. Model Complexity
- What: How interpretable is the system: Simple (linear models, rules) / Moderate (tree-based, shallow networks) / Complex (deep learning) / Black-box (third-party API, no access to internals)
- Required by: Best practice (NIST AI RMF, Gartner)
- Why: Complexity affects both risk and governance approach. A black-box third-party model needs different validation strategies than an interpretable decision tree. Also a factor in dynamic risk tier calculation.
20. Version
- What: Current version of the model or system
- Required by: EU AI Act Annex IV (version history), SR 11-7 (change management)
- Why: Models change. A version field lets you track which version was deployed when, which version was validated, and whether the production version matches the approved version. Subtle but critical for audit.
21. EU AI Act Category
- What: Specific Annex III category (1-8) if classified as high-risk, or "Not Applicable"
- Required by: EU AI Act Art. 6, Annex III
- Why: Different high-risk categories may face different enforcement priorities. Also required context for EU database registration.
22. Regulatory Scope
- What: Which regulations apply to this system: EU AI Act / SR 11-7 / GDPR / ISO 42001 / NIST AI RMF / multiple
- Required by: Best practice
- Why: Organizations increasingly face multiple overlapping frameworks. A credit scoring model at a European bank might fall under EU AI Act, SR 11-7, AND GDPR simultaneously. This field prevents compliance gaps.
23. Model Dependencies
- What: What other AI systems or models does this one depend on? (For agentic AI: which tools, APIs, and sub-agents does it call?)
- Required by: McKinsey (agentic AI governance), EU AI Act (provider-deployer chain)
- Why: Agentic AI systems that orchestrate other models create dependency chains. If one component fails compliance, the whole chain is affected. McKinsey calls this the #1 governance question for agentic AI: "Can we inventory agents and their dependencies?"
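The dependency question can be asked in code: given dependency links between systems, flag any transitive dependency that isn't compliant. The system names and data layout below are illustrative assumptions:

```python
def non_compliant_upstream(system, dependencies, compliance):
    """Return the set of transitive dependencies that are not compliant.

    dependencies: dict mapping system -> list of systems/tools it calls
    compliance:   dict mapping system -> True (compliant) / False
    """
    seen, flagged = set(), set()
    stack = list(dependencies.get(system, []))
    while stack:
        dep = stack.pop()
        if dep in seen:
            continue
        seen.add(dep)
        if not compliance.get(dep, False):  # unknown status counts as non-compliant
            flagged.add(dep)
        stack.extend(dependencies.get(dep, []))
    return flagged

# Hypothetical agent chain: the agent is clean, but a tool it calls depends
# on a vector database that has never been assessed.
deps = {"agent": ["llm", "search-tool"], "search-tool": ["vector-db"]}
status = {"agent": True, "llm": True, "search-tool": True, "vector-db": False}
print(non_compliant_upstream("agent", deps, status))  # {'vector-db'}
```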
24. Business Unit
- What: Organizational unit that operates the system
- Required by: Best practice (portfolio reporting)
- Why: Enables risk portfolio views by department — "Marketing has 12 AI systems, all minimal risk. HR has 3, two of which are high-risk." Essential for board reporting and resource allocation.
Common Mistakes
Starting with too many fields. If your first inventory has 55 columns, nobody will fill it out. Start with Tier 1 (10 fields), add Tier 2 once governance processes are running, add Tier 3 when you need them.
Free-text everything. An inventory where "risk tier" is free text contains values like "high", "High", "HIGH", "3", "moderate-high", and "TBD". Use dropdowns with predefined values. Every field that can be structured should be.
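A small normalization pass shows why this matters: messy free-text values can sometimes be mapped back to canonical ones, but junk like "TBD" can only be rejected, never recovered. The mapping below is an illustrative assumption:

```python
# Illustrative mapping from observed free-text values to canonical tiers.
CANONICAL_TIERS = {
    "high": "High", "3": "High", "moderate-high": "High",
    "medium": "Medium", "moderate": "Medium",
    "low": "Low",
}

def normalize_tier(raw: str) -> str:
    tier = CANONICAL_TIERS.get(raw.strip().lower())
    if tier is None:
        raise ValueError(f"Unrecognized risk tier {raw!r}: needs manual review")
    return tier

print(normalize_tier("HIGH"))     # High
print(normalize_tier(" high "))   # High
# normalize_tier("TBD") raises ValueError: free text can't be auto-repaired
```

A dropdown with predefined values makes this cleanup unnecessary in the first place.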
No audit trail. A spreadsheet where anyone can edit any cell without a trace is not a compliance tool. You need immutable change history — who changed what, when, and from what to what. Regulators expect this under Art. 12 and Art. 18.
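An append-only change log captures exactly the who/what/when/from/to described above. This is a minimal sketch of the idea, not a tamper-proof implementation (a real system would enforce immutability at the storage layer):

```python
from datetime import datetime, timezone

audit_log = []  # append-only: entries are never edited or deleted

def record_change(system_id, field, old_value, new_value, changed_by):
    audit_log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when
        "system_id": system_id,                               # what system
        "field": field,                                       # what field
        "old": old_value,                                     # from what
        "new": new_value,                                     # to what
        "changed_by": changed_by,                             # who
    })

# Hypothetical change: a system is re-tiered after a review.
record_change("AI-0042", "risk_tier", "Medium", "High", "j.smith")
print(audit_log[-1]["old"], "->", audit_log[-1]["new"])
```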
Treating it as a one-time project. The inventory is only useful if it stays current. Build review cadences into your governance: quarterly check on high-risk systems, annual full review, event-driven updates on significant changes.
Ignoring vendor/third-party AI. Most mid-market companies have more third-party AI (Copilot, ChatGPT, Salesforce Einstein) than internal models. If your inventory only covers internally built AI, you're missing most of your exposure.
From Fields to Compliance
The right fields turn your inventory from a list into a compliance engine:
- Annex VIII registration becomes an export, not a separate project — if you capture fields 1-10, you have 80% of the required data
- Risk portfolio dashboards are a query, not a consulting engagement — group by risk tier, filter by compliance status
- Review cadences are automated reminders, not calendar entries — "show me all high-risk systems where next review date is in the past"
- Board reporting is a filtered view, not a quarterly scramble — "total AI systems by risk tier, compliance status, and business unit"
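With structured fields, the views above really do reduce to one-line filters. A sketch over a plain list of dicts, with invented example systems; in practice this would be a Jira query or database view:

```python
from collections import Counter
from datetime import date

# Hypothetical inventory rows:
systems = [
    {"name": "Credit Scorer", "risk_tier": "High", "unit": "Finance",
     "compliance": "In Progress", "next_review": date(2025, 1, 15)},
    {"name": "Recommender", "risk_tier": "Low", "unit": "Marketing",
     "compliance": "Compliant", "next_review": date(2026, 6, 1)},
]

today = date(2025, 6, 1)

# Overdue reviews: high-risk systems whose next review date is in the past.
overdue = [s["name"] for s in systems
           if s["risk_tier"] == "High" and s["next_review"] < today]

# Board report: system counts by risk tier.
by_tier = Counter(s["risk_tier"] for s in systems)

print(overdue)            # ['Credit Scorer']
print(dict(by_tier))      # {'High': 1, 'Low': 1}
```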
The fields are the foundation. Everything else — risk management, monitoring, registration — builds on top.
Try Free for 30 Days. Model Inventory for Jira includes 24 compliance-ready fields mapped to EU AI Act and SR 11-7, dynamic risk tiering, and structured workflows — all inside your Jira. Learn more →