
This article describes how ModelOp Center enables model interpretability/explainability monitoring.

Introduction

Enterprises need visibility into how their models make predictions. Mission-critical decisions cannot be based on a black box: teams need to understand and explain model outputs. One way to do this is to understand how much each input feature impacts the outcome.

ModelOp Center provides a framework for calculating, tracking, and visualizing model interpretability metrics. These metrics can be determined on a model-by-model basis, and a standard can be enforced across models using an MLC Process as needed. The subsequent sections provide more detail on how to use ModelOp Center to incorporate interpretability into your ModelOps program.

Interpretability in MOC

While model interpretability is a complex and rapidly changing subject, ModelOp Center can help you understand how much each feature contributes to a prediction, as well as monitor each feature’s contribution over time. ModelOp Center does this by taking a pre-trained SHAP explainer artifact and computing SHAP values over the input dataset. The SHAP results are persisted for auditability and for tracking over time.
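
For context, the sketch below shows how such a pre-trained SHAP explainer artifact might be produced and pickled alongside the list of predictive features. The classifier, training file, and feature names are illustrative assumptions; the actual German Credit artifacts live in the repositories linked below.

```python
# A minimal sketch of how the two additional assets used later in this article
# (shap_explainer.pickle and predictive_features.pickle) might be produced.
# The classifier, training file, and feature names are stand-ins for the real ones.
import pickle

import pandas as pd
import shap
from sklearn.linear_model import LogisticRegression

# Hypothetical training data and fitted scoring model
train_df = pd.read_csv("german_credit_train.csv")                        # assumed file name
predictive_features = ["duration_months", "credit_amount", "age_years"]  # assumed subset
X_train = train_df[predictive_features]
y_train = train_df["label"]                                              # assumed label column

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Train a SHAP explainer against the fitted scoring model, using a sample
# of the training data as the background distribution
explainer = shap.KernelExplainer(model.predict_proba, shap.sample(X_train, 100))

# Persist the artifacts that the monitor expects as additional assets
with open("shap_explainer.pickle", "wb") as f:
    pickle.dump(explainer, f)
with open("predictive_features.pickle", "wb") as f:
    pickle.dump(predictive_features, f)
```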

The sections below demonstrate how an explainability monitor can be set up in MOC. The example is based on the German Credit business model, which can be found here. The business model is the base model producing inferences (predictions). The SHAP monitor can be found here; the monitor is itself a storedModel in MOC.
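
The linked repository contains the actual monitor source. Purely as an assumed, simplified sketch, a Python monitor stored in MOC typically exposes an initialization function and a metrics function (tagged with ModelOp smart comments), loads the pickled explainer and feature list, and yields a per-feature SHAP summary:

```python
# Simplified, assumed sketch of a SHAP monitor structured as a ModelOp storedModel.
# The asset file names match those referenced later in this article; the function
# bodies and metric names are illustrative, not the actual monitor source.
import pickle

import numpy as np
import pandas as pd

EXPLAINER = None
FEATURES = None


# modelop.init
def init():
    """Load the pre-trained SHAP explainer and the list of predictive features."""
    global EXPLAINER, FEATURES
    with open("shap_explainer.pickle", "rb") as f:
        EXPLAINER = pickle.load(f)
    with open("predictive_features.pickle", "rb") as f:
        FEATURES = pickle.load(f)


# modelop.metrics
def metrics(df: pd.DataFrame):
    """Compute mean absolute SHAP values per predictive feature over the input data."""
    values = np.abs(np.array(EXPLAINER.shap_values(df[FEATURES])))
    # Collapse the sample (and, for multi-output explainers, class) dimensions
    importance = values.reshape(-1, values.shape[-1]).mean(axis=0)
    yield {"shap_feature_importance": dict(zip(FEATURES, importance.round(4).tolist()))}
```

Under these assumptions, the input asset selected in MOC is passed to metrics as a DataFrame, and the yielded dictionary is what gets persisted as the test result.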

Adding a SHAP monitor to a business model

To add the SHAP monitor referenced above to the German Credit (GC) model, follow these steps:

  1. Import the GC model into MOC by Git reference.

  2. Create a snapshot of the GC model. It is not necessary to deploy this snapshot to any specific runtime, as no code from the GC model itself will be run.

  3. Import the Feature Importance monitor by Git reference.

  4. Create a snapshot of the monitor. You may specify a target runtime on which you’d like the monitoring job to be run.

  5. Associate the monitor with the business model:

    1. Navigate to the GC model snapshot, and click on ADD, then on MONITOR.

    2. Select the SHAP monitor from the list of monitors, and then select its snapshot.

    3. On the Assets page:

      1. Select an input asset from the existing data assets of the GC business model, for example, df_sample_scored.json. In a PROD environment, you are more likely to add a SQL asset (by connection and query string) or an S3 asset (by URL).

      2. Under “Additional Assets”, select the existing assets shap_explainer.pickle for the Pre-trained SHAP explainer, and predictive_features.pickle for the List of predictive features as used by the scoring model. (The sketch after this list illustrates how these assets can be used together.)

      3. Click Next.

    4. For this example we will skip the Threshold and Schedule steps.

    5. On the last page, click Save.

  6. To run the monitor, navigate to the snapshot of the GC model, and then to monitoring.

  7. Under the list of monitors, click the play button next to the SHAP monitor to start a monitoring job. You will then be redirected to the Jobs page, where you can view live logs from the monitoring run.

  8. To download the test results, click on Download File in the Details box. To view a graphical representation of the test results, click on the Model Test results UUID link in the Details box. You will be redirected to the snapshot’s monitoring page, where a graphical view of the SHAP results is displayed.
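
For a quick sanity check outside of MOC, a similar computation can be reproduced locally with the assets selected in step 5. The file names come from this article; reading the sample as line-delimited JSON and the mean-|SHAP| aggregation are assumptions for illustration, and the printed summary is not the exact schema of the downloaded test results.

```python
# Reproduce the SHAP computation locally using the assets referenced in step 5.
# File names come from this article; the JSON read options and the aggregation
# shown here are illustrative assumptions only.
import pickle

import numpy as np
import pandas as pd

# Input asset used by the monitor (assumed to be line-delimited JSON records)
scored = pd.read_json("df_sample_scored.json", lines=True)

with open("shap_explainer.pickle", "rb") as f:
    explainer = pickle.load(f)
with open("predictive_features.pickle", "rb") as f:
    features = pickle.load(f)

# Mean absolute SHAP value per predictive feature, sorted by importance
values = np.abs(np.array(explainer.shap_values(scored[features])))
importance = values.reshape(-1, values.shape[-1]).mean(axis=0)
for name, value in sorted(zip(features, importance), key=lambda kv: -kv[1]):
    print(f"{name}: {value:.4f}")
```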

Next Article: Model Governance: Model Versioning >
