This article provides an overview of ModelOp Center's Model Monitoring approach and lists the out-of-the-box tests and monitors that can be run against a variety of model implementation types: LLM, NLP, regression, classification, and others.

Table of Contents

Out of the Box Metrics

ModelOp Center ships with multiple out-of-the-box monitors, which are registered as associated models. Users may attach one of these associated monitors to their model or write a custom metric function (see next section). These monitors can also be customized via the ModelOp monitoring Python package; see /wiki/spaces/dv33/pages/1978445995 for documentation on the monitoring package. Here is a sampling of the out-of-the-box tests and monitors:
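
As a point of reference, below is a minimal sketch of what a custom metric function can look like. It assumes the smart-comment convention ModelOp Center uses to tag a metrics function (# modelop.metrics) and uses placeholder column names (label_value, score); consult the monitoring package documentation for the exact conventions supported by your release.

    import pandas as pd

    # modelop.metrics
    def metrics(df: pd.DataFrame):
        # Hypothetical custom metric: simple agreement rate between the
        # recorded ground-truth labels and the model's scores.
        # Column names are placeholders and must match your scoring output.
        accuracy = float((df["label_value"] == df["score"]).mean())

        # A ModelOp metric function yields a dictionary of metric names and values
        yield {"accuracy": accuracy}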

Quality Performance

Ensure that model decisions and outcomes are within established data quality controls, eliminating the risk of unexpected and inaccurate decisions. Quality performance monitors include the following (an illustrative drift check follows the list):

  • Data drift of input data

  • Concept drift of output

  • Statistical effectiveness of model output
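
To make the idea concrete, the snippet below is an illustrative sketch of a data-drift check using a two-sample Kolmogorov-Smirnov test from SciPy. It is not the out-of-the-box implementation (which is provided by the ModelOp monitoring package); the function name, column list, and threshold are assumptions for illustration only.

    import pandas as pd
    from scipy import stats

    def data_drift_report(baseline: pd.DataFrame, production: pd.DataFrame,
                          columns, alpha: float = 0.05) -> dict:
        # Compare baseline and production distributions of each numeric column
        # with a two-sample Kolmogorov-Smirnov test; a small p-value suggests
        # the production data has drifted from the training baseline.
        report = {}
        for col in columns:
            statistic, p_value = stats.ks_2samp(baseline[col].dropna(),
                                                production[col].dropna())
            report[col] = {
                "ks_statistic": float(statistic),
                "p_value": float(p_value),
                "drift_detected": bool(p_value < alpha),
            }
        return report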

Risk Performance

Keeping models operating within established business risk and compliance ranges, while ensuring they deliver ethically fair results, is a constant challenge. Prevent out-of-compliance issues with automated, continuous risk performance monitoring. Risk performance monitors include the following (an illustrative fairness check follows the list):

  • Ethical fairness of model output

  • Interpretability of model feature weights
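
As an illustration of what an ethical-fairness monitor computes, the sketch below calculates a disparate impact ratio: the favorable-outcome rate of a protected group relative to a reference group. Column names, group labels, and the 0.8 rule of thumb are assumptions for illustration, not the ModelOp implementation.

    import pandas as pd

    def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str,
                               protected_group, reference_group) -> float:
        # Rate of favorable outcomes (outcome_col == 1) for each group.
        protected_rate = df.loc[df[group_col] == protected_group, outcome_col].mean()
        reference_rate = df.loc[df[group_col] == reference_group, outcome_col].mean()

        # A ratio below roughly 0.8 is a common rule-of-thumb signal of
        # potential disparate impact against the protected group.
        return float(protected_rate / reference_rate)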

For more details, please visit the ModelOp Center Monitoring Package section (subscribed Customers only).

Next Article: /wiki/spaces/dv33/pages/1978437047 Monitoring & Reporting - How To's >