
This article provides an overview of ModelOp Center’s Model Monitoring approach, including the use of various metrics to enable comprehensive monitoring throughout the life cycle of a model.

Out-of-the-Box Metrics

ModelOp Center ships with multiple out-of-the-box monitors, which are registered as associated models; a sampling appears below under Quality Performance and Risk Performance. Users may attach one of these associated monitors to their model or write a custom metric function (see the next section). These monitors can also be customized via the ModelOp monitoring Python package. See /wiki/spaces/dv33/pages/1978445995 for documentation on the monitoring package.
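For reference, a custom metric function is typically just a tagged Python function that reads a scored dataset and yields a dictionary of metric values. Here is a minimal sketch, assuming ModelOp's `# modelop.metrics` smart-comment convention; the "label" and "prediction" column names are illustrative assumptions and should be adapted to your model's schema:

```python
import pandas as pd
from sklearn.metrics import f1_score, roc_auc_score

# modelop.metrics
def metrics(df: pd.DataFrame):
    # Assumed schema: "label" holds ground truth, "prediction" holds the
    # model's scored probability; rename both to match your model's output.
    yield {
        "ROC_AUC": roc_auc_score(df["label"], df["prediction"]),
        "F1": f1_score(df["label"], df["prediction"].round()),
    }
```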

Quality Performance

Ensure that model decisions and outcomes stay within established data quality controls, reducing the risk of unexpected and inaccurate decisions. Quality performance monitors include the following (a minimal drift-check sketch follows the list):

  • Data drift of input data

  • Concept drift of output

  • Statistical effectiveness of model output
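
To make the data drift bullet concrete, the sketch below shows the kind of statistic such a monitor computes: a two-sample Kolmogorov-Smirnov test comparing each numerical feature in production data against a training baseline. The function name and the 0.05 threshold are illustrative assumptions, not the monitoring package's API:

```python
import pandas as pd
from scipy.stats import ks_2samp

def data_drift_report(baseline: pd.DataFrame, production: pd.DataFrame,
                      p_threshold: float = 0.05) -> dict:
    """Two-sample KS test per numeric column; a small p-value suggests the
    production distribution differs from the training baseline."""
    report = {}
    for col in baseline.select_dtypes(include="number").columns:
        stat, p_value = ks_2samp(baseline[col].dropna(), production[col].dropna())
        report[col] = {"ks_statistic": stat,
                       "p_value": p_value,
                       "drifted": p_value < p_threshold}
    return report
```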

Risk Performance

Keeping models within established business risk and compliance ranges, while ensuring they deliver ethically fair results, is a constant challenge. Prevent out-of-compliance issues with automated, continuous risk performance monitoring. Risk performance monitors include the following (a fairness-check sketch follows the list):

  • Ethical fairness of model output

  • Interpretability of model feature weightings
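
As an illustration of the ethical fairness bullet, a fairness monitor often reports a disparate impact ratio: the rate of favorable outcomes for a protected group divided by the rate for a reference group, checked against the common four-fifths rule. The function below is a hypothetical sketch of that calculation, not the shipped monitor:

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, favorable_col: str,
                     protected: str, reference: str) -> dict:
    """Four-fifths rule: a ratio below 0.8 is commonly treated as
    evidence of disparate impact against the protected group."""
    # favorable_col is assumed to be a 0/1 indicator of a favorable outcome
    rates = df.groupby(group_col)[favorable_col].mean()
    ratio = rates[protected] / rates[reference]
    return {"protected_rate": float(rates[protected]),
            "reference_rate": float(rates[reference]),
            "disparate_impact_ratio": float(ratio),
            "passes_four_fifths_rule": bool(ratio >= 0.8)}
```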

Next Article: /wiki/spaces/dv33/pages/1978437047
