Interpretability (SHAP) with ModelOp Runtime
This article describes how ModelOp Center enables model interpretability/explainability monitoring.
Introduction
Enterprises need visibility into how models are making predictions. Mission-critical decisions cannot be made using a black box, so teams need to understand and explain model outputs. One such method is understanding how much each input feature contributes to the outcome.
ModelOp Center provides a framework for calculating, tracking, and visualizing model interpretability metrics. Each of these can be determined on a model-by-model basis, and you can also enforce a standard using an MLC Process as needed. The subsequent sections provide more detail on how to use ModelOp Center to implement interpretability in your ModelOps program.
Interpretability in MOC
While model interpretability is a complex and rapidly-changing subject, ModelOp Center can assist you in understanding how much each feature contributes to the prediction, as well as monitoring each feature's contribution over time. ModelOp Center does this by expecting a trained SHAP explainer artifact and computing the SHAP values over the input dataset. The SHAP results are persisted for auditability and tracking over time.
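The snippet below is a minimal, self-contained sketch of this general pattern, not the ModelOp Center implementation: a SHAP explainer is built from a trained model and SHAP values are computed over an input dataset. The model, synthetic data, and feature names are illustrative placeholders.

```python
# Sketch only: build a SHAP explainer from a trained model and compute
# SHAP values over an input dataset. All names here are placeholders.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import RandomForestRegressor

# Synthetic stand-in for a scoring dataset containing the model's input features
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(200, 3)), columns=["f1", "f2", "f3"])
y = 2 * X["f1"] - X["f2"] + rng.normal(scale=0.1, size=200)

# Stand-in for the trained model that the explainer must be built from
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

# The explainer is created from the trained model itself
explainer = shap.TreeExplainer(model)

# SHAP values: one per-feature contribution for every record in the dataset
shap_values = explainer.shap_values(X)  # shape: (n_samples, n_features)

# Mean absolute SHAP value per feature, a common summary of feature impact
print(pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns))
```

Persisting these per-feature contributions for each scoring run is what allows feature impact to be tracked and compared over time.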
Required inputs
The following are required in order to execute a SHAP job (a sketch of preparing these artifacts follows the list):
SHAP explainer: must be created from the trained model for which you want to calculate the SHAP values
List of input features (as a pkl) that are used by the trained model
Dataset that contains the input features and the model output produced by the same trained model
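The following is a hedged sketch of how these three artifacts might be prepared and persisted. The file names (shap_explainer.pkl, features.pkl, scored_data.csv), synthetic data, and model choice are assumptions for illustration only and are not prescribed by ModelOp Center.

```python
# Sketch of preparing the three required artifacts; file names and data
# are illustrative assumptions, not a ModelOp Center requirement.
import pickle

import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

# Stand-in training data with the model's input features
rng = np.random.default_rng(1)
features = ["age", "income", "tenure"]
df = pd.DataFrame(rng.normal(size=(500, 3)), columns=features)
label = (df["income"] + df["tenure"] > 0).astype(int)

model = GradientBoostingClassifier().fit(df[features], label)

# 1. SHAP explainer created from the trained model itself
with open("shap_explainer.pkl", "wb") as f:
    pickle.dump(shap.TreeExplainer(model), f)

# 2. List of input features used by the trained model, saved as a pkl
with open("features.pkl", "wb") as f:
    pickle.dump(features, f)

# 3. Dataset containing the input features and the model's output
scored = df[features].copy()
scored["prediction"] = model.predict(df[features])
scored.to_csv("scored_data.csv", index=False)
```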
The following image shows the visualization of the SHAP values for the sample model in the Test Results view in ModelOp Center.