This article describes how ModelOp Center enables ongoing Data Drift and Concept Drift monitoring.
Introduction
Monitoring incoming data for statistical drift is necessary to track whether assumptions made during model development are still valid in a production setting. For instance, a data scientist may assume that the values of a particular feature are normally distributed, or the choice of encoding for a certain categorical variable may have been made with a particular multinomial distribution in mind. Tests should be run routinely on batches of live data, comparing their distributions against the training data, to ensure that these assumptions still hold; if the tests fail, appropriate alerts are raised for the data scientist or ModelOps engineer to investigate.
ModelOp Center provides a number of Drift monitors out of the box, but also allows you to write your own drift monitor. The subsequent sections describe how to add a drift monitor (assuming an out-of-the-box monitor) and the detailed makeup of a drift monitor for multiple types of models.
Adding Drift Monitors
As background on the terminology and concepts used below, please read the Monitoring Concepts section of the Model overview documentation.
To add a drift monitor to your model, you add an existing monitor as an “associated” model. Below are the steps to accomplish this. For tutorial purposes, these instructions use all out-of-the-box and publicly available content provided by ModelOp, focusing on the Consumer Linear Demo and its related assets.
Define thresholds for your model
As mentioned in the Monitoring Concepts article, ModelOp Center uses decision tables to define the thresholds within which the model should operate for the given monitor.
The first step is to define these thresholds. For this tutorial, we will leverage the example Data-drift.dmn decision table. This assumes that the out-of-the-box Data Drift Detector is used, which leverages the Kolmogorov-Smirnov test to calculate changes in the distributions between production and training data, outputting p-values. Specifically, this drift detector ensures that the critical features below from the Consumer Linear Demo model are within specification; a conceptual sketch of such threshold rules is shown after this step.
Repeat for the provided Concept-drift.dmn and Performance-test.dmn files.
Save the files locally to your machine.
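Conceptually, each row of such a decision table checks a monitored value (here, a per-feature p-value from the drift test) against a threshold and flags a violation when the check fails. The following Python sketch illustrates that logic only; the feature names and cutoffs are placeholders, not the actual contents of Data-drift.dmn:

```python
# Illustrative thresholds only -- the real values live in the DMN decision table.
P_VALUE_THRESHOLDS = {
    "loan_amount": 0.05,
    "interest_rate": 0.05,
}

def evaluate_drift(p_values: dict) -> dict:
    """Return True for each feature whose p-value clears its threshold."""
    return {
        feature: p_values.get(feature, 0.0) >= threshold
        for feature, threshold in P_VALUE_THRESHOLDS.items()
    }

# Example: evaluate_drift({"loan_amount": 0.42, "interest_rate": 0.01})
# -> {"loan_amount": True, "interest_rate": False}
```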
Associate Monitor models to snapshot
Navigate to the specific model snapshot
Using the Associated Models widget, create a data drift association
Use the provided data and the DMN you made in step 2.
Click Save.
The monitor “associated model” will be saved and is now ready to run against the model’s specific snapshot.
Schedule the Monitor
Monitors can be scheduled to run using your preferred enterprise scheduling capability (Control-M, Airflow, Autosys, etc.).
While the details will depend on the specific scheduling software, at the highest level, the user simply needs to create a REST call to the ModelOp Center API. Here are the steps:
Obtain the Model snapshot’s unique ID from the Model snapshot screen. Simply copy the ID from the URL bar:
Example:
Within the scheduler, configure the REST call to ModelOp Center’s automation engine to trigger the monitor for your model:
Obtain a valid auth token
Make a call to the ModelOp Center API to initiate the monitor
Example:
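For illustration only, the shape of such a call might look like the sketch below, written with Python’s requests library. The endpoint path, payload, and token-retrieval details are assumptions made for this sketch, not the documented ModelOp Center API; use the endpoint and auth flow documented for your installation:

```python
import requests

BASE_URL = "https://modelop-center.example.com"   # your ModelOp Center host (illustrative)
SNAPSHOT_ID = "<model-snapshot-uuid>"             # copied from the snapshot page URL

# 1. Obtain a valid auth token (the mechanism depends on your identity provider).
token = "<bearer-token-from-your-auth-provider>"

# 2. Trigger the monitor for the snapshot. The path below is illustrative only.
response = requests.post(
    f"{BASE_URL}/api/monitors/{SNAPSHOT_ID}/run",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```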
Monitoring Execution: once the scheduler triggers the monitoring job, the relevant model life cycle will initiate the specific monitor, which likely includes:
Preparing the monitoring job with all artifacts necessary to run the job
Creating the monitoring job
Parsing the results into viewable test results
Comparing the results against the thresholds in the decision table
Taking action, which could include creating a notification and/or opening up an incident in JIRA/ServiceNow/etc.
Viewing Monitoring Notifications
Typically, the model life cycle that runs the monitor will create notifications, such as:
A monitor has been started
A monitor has run successfully
A monitor’s output (model test) has failed
These Notifications can be viewed on the home page of ModelOp Center’s UI:
Viewing Monitoring Job Results
All monitor job results are persisted and can be viewed directly by clicking the specific “result” in the “Model Tests” section of the model snapshot page:
Drift Monitor Details
As the same data set may serve several models, you can write one drift detection model and associate it with several models. This association is made during the Model Lifecycle process. The drift model can compare the training data of the associated models to a given batch of data. The following is a simple example:
...
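A minimal sketch of what such a drift model can look like, assuming ModelOp’s # modelop.init / # modelop.metrics function annotations, a training-data asset named training_data.csv, and scipy’s two-sample Kolmogorov-Smirnov test (the asset name and column handling are illustrative):

```python
import pandas as pd
from scipy.stats import ks_2samp

train_df = None

# modelop.init
def init():
    # Load the training baseline once, when the monitor starts.
    # "training_data.csv" is an illustrative asset name.
    global train_df
    train_df = pd.read_csv("training_data.csv")

# modelop.metrics
def metrics(df):
    # Compare each numeric column of the incoming batch against the training
    # baseline with a two-sample Kolmogorov-Smirnov test and report p-values.
    p_values = {}
    for col in train_df.select_dtypes(include="number").columns:
        if col in df.columns:
            p_values[col] = float(
                ks_2samp(train_df[col].dropna(), df[col].dropna()).pvalue
            )
    yield p_values
```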
If the training data is too large to fit in memory, you can compute summary statistics about the training data, save those as, e.g., a pickle file, and read them in during the init function of the drift model. The metrics function can then run statistical tests that compare those statistics to the statistics of the incoming batch.
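As a sketch of that approach (the file name, the stored statistics, and the simple z-test below are illustrative assumptions, not a prescribed design):

```python
import pickle
from scipy.stats import norm

train_stats = None

# modelop.init
def init():
    # Load pre-computed training statistics instead of the full training set,
    # e.g. {"loan_amount": {"mean": 12000.0, "std": 3500.0}, ...}.
    global train_stats
    with open("train_stats.pickle", "rb") as f:
        train_stats = pickle.load(f)

# modelop.metrics
def metrics(df):
    # Two-sided z-test of each batch mean against the training mean/std.
    p_values = {}
    for col, stats in train_stats.items():
        if col not in df.columns or stats["std"] <= 0:
            continue
        n = df[col].count()
        if n == 0:
            continue
        z = (df[col].mean() - stats["mean"]) / (stats["std"] / n ** 0.5)
        p_values[col] = float(2 * (1 - norm.cdf(abs(z))))
    yield p_values
```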
Spark Drift Model Details
A similar drift detection method may be used for PySpark models with HDFS assets by parsing the HDFS asset URLs from the parameters of the metrics function. The following is a simple example:
...
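A minimal sketch of a PySpark drift monitor along those lines is shown below. The metrics-function signature, the "fileUrl" key, and the mean-shift metric are assumptions made for illustration; adapt them to how your Spark job actually receives its HDFS asset URLs:

```python
from pyspark.sql import SparkSession, Row
import pyspark.sql.functions as F

spark = None

# modelop.init
def init():
    global spark
    spark = SparkSession.builder.appName("drift-monitor").getOrCreate()

# modelop.metrics
def metrics(external_inputs, external_outputs):
    # Illustrative parameter structure: the first input is the training
    # baseline, the second is the production batch, each given as an HDFS URL.
    train_url = external_inputs[0]["fileUrl"]
    batch_url = external_inputs[1]["fileUrl"]
    output_url = external_outputs[0]["fileUrl"]

    train_df = spark.read.json(train_url)
    batch_df = spark.read.json(batch_url)

    # Simple drift signal: relative shift of each numeric column's mean.
    drift = {}
    for col, dtype in train_df.dtypes:
        if dtype in ("int", "bigint", "float", "double") and col in batch_df.columns:
            train_mean = train_df.agg(F.mean(col)).first()[0]
            batch_mean = batch_df.agg(F.mean(col)).first()[0]
            if train_mean and batch_mean is not None:
                drift[col] = abs(batch_mean - train_mean) / abs(train_mean)

    # Write the drift metrics back to HDFS as the job's output.
    spark.createDataFrame([Row(**drift)]).write.mode("overwrite").json(output_url)
```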