Blog
Banking · Model Risk Management

From SR 11-7 to EU AI Act: What Banks Already Doing MRM Need to Know

If you're a risk manager at a European bank, you've been running model risk management programs for years. SR 11-7 (or its OCC equivalent, Bulletin 2011-12) has been your framework since 2011. You have a model inventory. You run independent validations. You report to the board.

On April 17, 2026, US regulators issued revised interagency MRM guidance (OCC Bulletin 2026-13) — the first update in 15 years. Notably, they explicitly excluded GenAI and agentic AI from scope, with a separate request for information planned. Meanwhile, the EU AI Act's high-risk obligations take effect August 2, 2026, and they cover exactly the AI systems US regulators just sidestepped.

For European banks operating under both regimes, the question is: how much of this is new?

The answer: less than you think, but more than you'd like. Banks with mature MRM programs already cover roughly 80% of the EU AI Act's requirements for high-risk AI systems. The remaining 20% is where the gap bites.

This article maps the delta: what you already have, what you need to add, and where the two frameworks diverge.

The Good News: SR 11-7 Was Ahead of Its Time

SR 11-7 was written for statistical models in banking — credit risk, market risk, fraud detection. It didn't anticipate GenAI, agentic systems, or the EU AI Act. But its core principles map remarkably well to modern AI governance:

| SR 11-7 Principle | EU AI Act Equivalent |
|---|---|
| Model inventory covering all models in use, in development, and recently retired | Art. 9 (risk management system) + Art. 49 (EU database registration) |
| Risk tiering based on materiality | Art. 6 (risk classification) |
| Independent validation (2nd line of defense) | Art. 17 (quality management system) |
| Documented model development and limitations | Art. 11 + Annex IV (technical documentation) |
| Ongoing performance monitoring | Art. 72 (post-market monitoring) |
| Board-level reporting on model risk | Art. 17 (governance structures) |
| Change management and version control | Art. 12 (record-keeping) |

If your MRM program is mature, you already have governance structures, validation processes, inventory systems, and reporting cadences that the EU AI Act requires. You're not starting from zero.

The Gap: What SR 11-7 Doesn't Cover

1. Scope Explosion — From 50 Models to 500

This is the biggest gap, and it's not about compliance — it's about scale.

SR 11-7 defines "model" narrowly — quantitative methods applying statistical, economic, or financial theories. It explicitly excludes simple arithmetic and deterministic rule-based processes. A large bank might have 50-200 models in its MRM inventory — credit scoring, stress testing, IFRS 9 provisioning, market risk VaR, fraud detection. By contrast, JPMorgan Chase alone reported 400 AI use cases in production in 2025.

The EU AI Act's definition of "AI system" is far broader — machine-based systems that generate predictions, content, recommendations, or decisions from inputs. This includes machine learning models (your existing MRM scope); GenAI tools (ChatGPT Enterprise, Copilot, internal LLM deployments); rule-based AI systems with adaptive components; vendor-provided AI (Salesforce Einstein, ServiceNow AI, Bloomberg AI); and AI embedded in platforms your teams use without thinking of it as "AI".

A large bank realistically has 200-500+ AI systems once vendor AI and GenAI tools are counted. The Evident AI Index shows the top 50 global banks announced 173 new AI use cases in the past 12 months alone — 70% of them GenAI or agentic AI. Most of these sit outside your current MRM perimeter.

The challenge: You need to register, classify, and assign ownership to systems that your MRM team has never tracked — owned by business units that have never interacted with model risk management.

2. EU-Specific Risk Classification

SR 11-7 risk tiering is based on materiality — financial impact, model complexity, portfolio size. Your bank probably uses 3-5 tiers (Tier 1 being most material).

The EU AI Act adds a completely different classification axis:

| EU AI Act Category | What it means | SR 11-7 Equivalent |
|---|---|---|
| Prohibited | Banned outright (social scoring, some biometrics) | No equivalent |
| High-risk (Annex III) | Specific use cases: credit scoring, HR screening, insurance risk | Partially overlaps |
| Limited risk | Transparency obligations (chatbots) | No equivalent |
| Minimal risk | No obligations | No equivalent |

The complication: A model can be SR 11-7 Tier 3 (low materiality) but EU AI Act high-risk (it's used in credit scoring). Or SR 11-7 Tier 1 (high materiality market risk model) but EU AI Act minimal risk (no Annex III category applies). The two classification schemes don't align.

What you need to add: A separate EU AI Act category field in your inventory, with classification logic based on Annex III use-case mapping — not your existing materiality scoring.
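To make the two-axis point concrete, here is a minimal sketch of use-case-based classification logic. The use-case labels and the `ANNEX_III_USE_CASES` set are illustrative placeholders, not the legal text — a real implementation must follow Annex III itself and your legal team's interpretation.

```python
from enum import Enum

class EUAIActCategory(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"

# Illustrative subset of bank-relevant Annex III use cases (hypothetical labels).
ANNEX_III_USE_CASES = {"credit_scoring", "hr_screening", "life_health_insurance_pricing"}
# Use cases triggering Art. 50 transparency obligations (again illustrative).
LIMITED_RISK_USE_CASES = {"customer_chatbot", "ai_generated_communications"}

def classify_eu_ai_act(use_case: str) -> EUAIActCategory:
    """Derive the EU AI Act category from the system's use case --
    deliberately independent of the SR 11-7 materiality tier."""
    if use_case in ANNEX_III_USE_CASES:
        return EUAIActCategory.HIGH_RISK
    if use_case in LIMITED_RISK_USE_CASES:
        return EUAIActCategory.LIMITED_RISK
    return EUAIActCategory.MINIMAL_RISK

# A low-materiality (SR 11-7 Tier 3) model can still be high-risk under the Act:
assert classify_eu_ai_act("credit_scoring") is EUAIActCategory.HIGH_RISK
# ...while a Tier 1 market risk model may carry no Annex III category at all:
assert classify_eu_ai_act("market_risk_var") is EUAIActCategory.MINIMAL_RISK
```

The key design point: the function never reads the materiality tier, which is why the EU category must be a separate field rather than a derived one.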

3. Fundamental Rights Impact Assessment (FRIA)

SR 11-7 doesn't have this concept. The EU AI Act requires deployers of high-risk AI to assess the system's impact on fundamental rights — before deployment.

For banks, this primarily affects credit scoring and loan decisions (access to essential services); AI in HR processes (employment rights); and customer-facing AI that processes personal data (privacy rights).

A FRIA is not a DPIA (though they overlap). It specifically assesses whether the AI system could negatively affect rights like non-discrimination, privacy, access to services, or freedom of expression.

4. EU Database Registration (Annex VIII)

SR 11-7 has no external registration requirement. Your model inventory is internal.

The EU AI Act requires providers and deployers of high-risk AI to register each system in a public EU database. For providers, that's 13 mandatory fields (Annex VIII, Section A). For deployers (which banks typically are for vendor AI), the requirements include FRIA findings and DPIA summaries.

5. Transparency Obligations for Limited-Risk AI

If your bank deploys chatbots, virtual assistants, or AI-generated customer communications, these fall under "limited risk" transparency requirements — users must be informed they're interacting with AI.

6. AI Literacy Training (Article 4)

Article 4 of the EU AI Act requires "sufficient AI literacy" for staff and operators. This has been in force since February 2, 2025.

The Mapping Table: SR 11-7 → EU AI Act

| Requirement | SR 11-7 | EU AI Act | Gap |
|---|---|---|---|
| Central model/AI inventory | ✓ Required | ✓ Required | Scope expansion |
| Risk tiering | ✓ Materiality-based | ✓ Use-case-based (Annex III) | Different logic — need both |
| Independent validation | ✓ 2LoD validation | ✓ Art. 17 QMS | ✓ Covered |
| Technical documentation | ✓ Model documentation | ✓ Art. 11, Annex IV | Partial gap — Annex IV more prescriptive |
| Ongoing monitoring | ✓ Performance monitoring | ✓ Art. 72 | ✓ Largely covered |
| Board reporting | ✓ Aggregate model risk | ✓ Art. 17 governance | ✓ Covered |
| Change management | ✓ Version control | ✓ Art. 12 record-keeping | ✓ Covered |
| Audit trail | ✓ Required | ✓ Art. 12, Art. 18 | ✓ Covered |
| EU database registration | — | ✓ Art. 49, Annex VIII | New requirement |
| FRIA | — | ✓ Required for deployers | New requirement |
| Risk classification (Annex III) | — | ✓ Art. 6 | New requirement |
| Transparency | — | ✓ Art. 50 | New requirement |
| AI literacy training | ⚠ Model-user training | ✓ Art. 4 | Scope expansion |
| Prohibited practices check | — | ✓ Art. 5 | New requirement |

What to Do: A Practical Sequence for Banks

Step 1: Extend Your Inventory Scope (Weeks 1-4)

Your MRM inventory covers quantitative models. Extend it to all AI systems — including vendor AI, GenAI tools, and embedded AI.
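One way to picture the extended inventory is a record type that admits systems with no SR 11-7 tier at all. This is a sketch under assumed field names (`origin`, `sr11_7_tier`, etc.), not a prescribed schema; the example entries are illustrative.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One row in the extended inventory. Field names are illustrative."""
    name: str
    business_owner: str
    origin: str                # e.g. "internal", "vendor", "genai_tool", "embedded"
    use_case: str
    sr11_7_tier: Optional[int] = None      # set only for classic MRM-scope models
    eu_ai_act_category: Optional[str] = None  # filled in Step 2

inventory = [
    AISystemRecord("IFRS 9 provisioning", "Finance", "internal", "provisioning", sr11_7_tier=1),
    AISystemRecord("ChatGPT Enterprise", "Operations", "genai_tool", "drafting"),
    AISystemRecord("Salesforce Einstein", "Sales", "vendor", "lead_scoring"),
]

# Systems the MRM team has never tracked -- the new perimeter:
untracked = [r.name for r in inventory if r.sr11_7_tier is None]
```

Making the tier optional, rather than forcing every system through materiality scoring on day one, lets you register vendor AI and GenAI tools immediately and classify them later.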

Step 2: Add EU AI Act Classification (Week 5)

For each AI system, add the EU AI Act risk category alongside your existing SR 11-7 tier. Don't replace your materiality scoring — you need both.

Step 3: FRIA for High-Risk Systems (Weeks 6-8)

For systems classified as high-risk under Annex III, prepare Fundamental Rights Impact Assessments.

Step 4: Annex VIII Registration Prep (Week 9)

Map your inventory fields to Annex VIII requirements.
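A simple gap check over your inventory records can surface missing registration data before the deadline. The `REQUIRED_FIELDS` set below is a hypothetical subset; consult Annex VIII itself for the authoritative field list.

```python
# Hypothetical subset of registration fields -- not the actual Annex VIII list.
REQUIRED_FIELDS = {"provider_name", "system_name", "intended_purpose", "system_status"}

def registration_gaps(record: dict) -> set:
    """Return the fields still missing or empty for this system."""
    return {f for f in REQUIRED_FIELDS if not record.get(f)}

record = {
    "provider_name": "Example Bank",
    "system_name": "Credit Scoring v4",
    "intended_purpose": "",          # drafted but never filled in
    "system_status": "in production",
}
missing = registration_gaps(record)  # {"intended_purpose"}
```

Running this across the whole inventory gives you a per-system readiness list, which is exactly the artifact Step 5's dashboard needs.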

Step 5: Compliance Status Reporting (Week 10)

Build a dashboard that shows your board both views: SR 11-7 model risk exposure AND EU AI Act compliance status.
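Because the two classification axes are independent, the dashboard is essentially two aggregations over the same inventory. A minimal sketch, with illustrative data:

```python
from collections import Counter

# Each entry carries both classifications; the data here is illustrative.
systems = [
    {"name": "Credit scoring", "tier": 3, "eu_category": "high_risk"},
    {"name": "Market risk VaR", "tier": 1, "eu_category": "minimal_risk"},
    {"name": "Customer chatbot", "tier": None, "eu_category": "limited_risk"},
]

# View 1: SR 11-7 exposure (materiality tiers, MRM-scope systems only).
by_tier = Counter(s["tier"] for s in systems if s["tier"] is not None)
# View 2: EU AI Act compliance status (every system counts).
by_eu_category = Counter(s["eu_category"] for s in systems)
```

Note the asymmetry: the chatbot has no tier but still appears in the EU view — the board pack has to show both cuts, not a merged one.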

The Convergence Opportunity

SR 11-7, the EU AI Act, the UK's SS1/23, NIST AI RMF, and ISO 42001 are all converging on the same themes: risk-tiered oversight, documented validation, continuous monitoring, and centralized inventory. If you build your AI governance infrastructure to serve multiple frameworks simultaneously, you invest once and comply everywhere.

The banks that move fastest — extending their MRM capabilities rather than building parallel governance structures — will have a structural advantage. They already have the governance muscle. The EU AI Act just asks them to flex it wider.

Model Inventory for Jira supports both EU AI Act and SR 11-7 frameworks in a single registry — with dual risk classification, compliance templates, and governance workflows built on 7 years of banking MRM experience. Learn more →

Extend your MRM program to the EU AI Act

Model Inventory for Jira gives you dual-framework risk classification (SR 11-7 + EU AI Act) with governance workflows built on 7 years of banking MRM experience — inside your existing Jira.

Try Free for 30 Days