Automated risk model operations at scale

04. February 2021

The model risk management (MRM) methodology is relatively well-defined, including the concept of model monitoring for particular model types. But how can we take these mostly manual procedures and deploy them at scale across the entire portfolio?

The scaling issue

Once you’ve defined an ongoing monitoring process as part of your model governance framework (for example, according to SR 11-7), the next step is to introduce:

  • an effective solution for the execution of a quantitative monitoring procedure across the entire portfolio
  • automatic execution of decisions like risk tiering
  • generation of corresponding documentation

A growing model portfolio makes it impractical to perform the monitoring process manually, model by model. This creates a need for new concepts such as automatic execution of the monitoring procedure, including further evaluation and generation of the obligatory monitoring documentation.

These concepts reduce repetitive work and cut the number of manual, error-prone tasks.

The MGS platform enables you to solve these tasks even without involving an IT development team: users can create automations that execute these tasks in the background.

Let automation work for you

Our automation can execute defined processes: it follows a previously defined path and, at every step, executes a given task or makes a prerequisite decision.

Let's go through an example of a process that our automation starts when a new monitoring data set is uploaded to an inbox folder.

The automation helper must:

  • start the process automatically after a new monitoring data set is uploaded
  • perform a data quality check and notify the data provider if it identifies any incorrect data
  • start the previously defined monitoring process and generate a monitoring report based on a template (the result is saved as an MS Word and PDF document)
  • save the input data and monitoring procedure parameters in a way that the monitoring can be easily reproduced
  • evaluate the result using a configurable decision tree and notify model owners in the case of a high internal risk score

... and all that in an isolated, secured, and reproducible environment.

Implementation using the MGS platform

The main process is defined using a visual "language" compatible with BPM standards: particular actions are modeled as steps that execute a defined procedure, connected by transitions between the steps.
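Purely as an illustration of the idea (MGS uses its own visual editor, so this is not its actual format), such a process definition boils down to named steps and the transitions between them; the step and procedure names below are invented for this sketch:

```python
# Illustrative representation of the example process: steps that execute
# procedures, plus transitions labeled with the gateway conditions.
process = {
    "steps": {
        "dq_check": {"procedure": "procedure_1_data_quality"},
        "notify_provider": {"procedure": "report_dq_error"},
        "monitoring": {"procedure": "procedure_2_monitoring_report"},
        "follow_up": {"procedure": "procedure_3_validation_task"},
    },
    "transitions": [
        # (from_step, to_step, condition)
        ("dq_check", "notify_provider", "dq error found"),
        ("dq_check", "monitoring", "data ok"),
        ("monitoring", "follow_up", "risk score above threshold"),
    ],
}
```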

In Procedure 1, our example uses a short script in Python, a programming language widely used by data analysts. This script verifies the quality of a data set uploaded to the storage.

Typical problems are a missing variable, a value out of range, an incorrect variable type, or a missing value. If the procedure detects errors, the decision gateway routes the automation to a branch that reports a data quality (DQ) error back to the provider.
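A DQ check of this kind can be sketched in a few lines of Python; the schema, column names, and allowed ranges below are invented for illustration, not taken from MGS:

```python
import pandas as pd

# Hypothetical schema for the monitoring data set: column name -> expected dtype
EXPECTED_SCHEMA = {"score": "float64", "default_flag": "int64", "segment": "object"}
# Allowed value ranges, also purely illustrative
VALUE_RANGES = {"score": (0.0, 1.0)}

def check_data_quality(df: pd.DataFrame) -> list[str]:
    """Return a list of data quality issues found in the monitoring data set."""
    issues = []
    for col, dtype in EXPECTED_SCHEMA.items():
        if col not in df.columns:
            issues.append(f"missing variable: {col}")
        elif str(df[col].dtype) != dtype:
            issues.append(f"incorrect type for {col}: {df[col].dtype} (expected {dtype})")
        elif df[col].isna().any():
            issues.append(f"missing values in {col}")
    for col, (lo, hi) in VALUE_RANGES.items():
        if col in df.columns and not df[col].dropna().between(lo, hi).all():
            issues.append(f"value out of range in {col}")
    return issues
```

An empty result lets the gateway continue to Procedure 2; a non-empty list would be sent back to the data provider.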

If the data passes validation, the process continues to Procedure 2, which is implemented as a Jupyter notebook. The notebook template itself can be developed and adjusted by your analysts, according to your methodology, directly in the MGS environment.

The Jupyter notebook procedure typically combines the latest data with the monitoring and backtesting functions defined according to your standards. The resulting report is exported to MS Word, PDF, or standalone HTML and saved to the dedicated model storage.
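As one example of a monitoring function such a notebook might call, here is a sketch of the population stability index (PSI), a common score-drift metric; whether your methodology uses PSI or other tests is defined by your own notebook template:

```python
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray,
                               bins: int = 10) -> float:
    """PSI between a reference and a current score distribution (illustrative)."""
    # Bin edges from the reference distribution's quantiles
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # catch values outside the reference range
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    act_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Floor the shares to avoid division by zero in empty bins
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

A PSI near zero indicates a stable population; a widely used rule of thumb treats values above roughly 0.1 as worth investigating.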

When the risk score produced by an internal scorecard is above the defined threshold, the process moves to Procedure 3, where a follow-up validation task is assigned to a responsible team member. The model's internal risk score is also written to the model's metadata.
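The threshold logic of such a gateway can be pictured as a small decision rule; the tier names and cut-offs below are assumptions for illustration, not MGS defaults:

```python
# Illustrative risk-tiering rule behind the decision gateway.
def assign_risk_tier(risk_score: float,
                     high_threshold: float = 0.8,
                     medium_threshold: float = 0.5) -> str:
    """Map an internal risk score to a tier; thresholds are hypothetical."""
    if risk_score >= high_threshold:
        return "high"    # triggers Procedure 3: follow-up validation task
    if risk_score >= medium_threshold:
        return "medium"
    return "low"
```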

The MGS platform can easily scale the process described above to hundreds or thousands of models in your portfolio. Processes like the one in our example are executed automatically based on events such as:

  • monitoring data becoming available
  • a scheduler firing at a given date and time
  • a trigger from an external system via open APIs
  • a manual start by a user in the GUI

The MRM team can focus on manually reviewing the models that the automation has flagged as risky. Even that step is easier thanks to the automation's work: the projects and data are already prepared and need only be opened and reviewed.

Learn more about our Model Governance Suite