
Overview

This article is a high-level survey of the ModelOp system and the stages you go through to create, deploy, and monitor your models.



Introduction

The three critical requirements for a ModelOps technical solution, relative to models in production, are:

  • Ability to deploy, monitor, and govern any model across all enterprise AI, regardless of its development tool or runtime environment

  • Ability to abstract the complexity of the enterprise AI stack, driving agility and scale in the enterprise’s operationalization of models in business

  • Ability to automate the model life cycle in the enterprise with repeatability, resilience and scale

ModelOp's answer to these requirements is the ModelOp Command Center, where day-to-day operations, model monitoring, alert and notification response, and model retraining activities take place.

Modeling in the ModelOp Command Center takes place in three distinct phases. These phases may be done by different people with different roles in different locations. The phases include:

  1. Preparation

  2. Test and deploy a model

  3. Monitor and update a model

Preparation

The model: Metrics

The ModelOp Command Center design offers you the freedom to use the most effective model development tool for each application and use case.

Evaluating your model algorithm is an essential part of any model deployment. Data scientists choose evaluation metrics to determine the accuracy and statistical performance of the model. The choice of metric depends on the objective and the problem you are trying to solve. Some common metrics used in ModelOp sample models and examples include:

  • The F1 score 

  • SHAP values

  • The ROC curve

  • The AUC (area under the ROC curve)
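To make two of these metrics concrete, here is a minimal, self-contained sketch (not ModelOp-specific code) that computes the F1 score and the AUC directly from their definitions; in practice a library such as scikit-learn would typically be used instead:

```python
# Pure-Python sketch of two common evaluation metrics.
# f1_score: harmonic mean of precision and recall at a fixed threshold.
# auc_score: probability that a random positive is scored above a random negative.

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return 2 * tp / (2 * tp + fp + fn)

def auc_score(y_true, y_score):
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1 for p in pos for n in neg if p > n) \
         + 0.5 * sum(1 for p in pos for n in neg if p == n)
    return wins / (len(pos) * len(neg))

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                     # ground-truth labels
y_score = [0.1, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.2]   # model probabilities
y_pred = [1 if s >= 0.5 else 0 for s in y_score]     # thresholded predictions
print(round(f1_score(y_true, y_pred), 3), round(auc_score(y_true, y_score), 3))
```

The same values feed a ROC curve, which is simply the (false-positive rate, true-positive rate) pairs obtained by sweeping the threshold over `y_score`.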

A metrics function in the model can help automate and execute back tests of models. A metrics function can either be specified with a #modelop.metrics smart comment before the function definition or selected within the UI after the model source code is registered.
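The shape of such a function can be sketched as follows. This is a hypothetical illustration: the record field names (`label`, `prediction`) and the exact calling convention are assumptions for the example, not taken from the ModelOp documentation; only the smart-comment placement before the function definition reflects the text above.

```python
# Hypothetical metrics-function sketch. The smart comment on the next line
# is what marks the function for ModelOp; everything else is illustrative.

# modelop.metrics
def metrics(data):
    # 'data' is assumed to be an iterable of records carrying the
    # ground-truth label and the model's prediction for each row.
    records = list(data)
    tp = sum(1 for r in records if r["label"] == 1 and r["prediction"] == 1)
    fp = sum(1 for r in records if r["label"] == 0 and r["prediction"] == 1)
    fn = sum(1 for r in records if r["label"] == 1 and r["prediction"] == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    # Emit the computed metrics as a dictionary of named values.
    yield {"f1_score": f1, "precision": precision, "recall": recall}
```

The key design point is that the function computes its metrics from a batch of labeled records and emits them as named values, so a back test can be run against any held-out dataset.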

There are also two ways to create a metrics job manually.

The Model Life Cycle Process (MLC Process)

The MLC Process automates and regulates the models within ModelOp Center. Each MLC Process is defined in the Camunda Modeler as a BPMN file. MLC Processes can apply to models of varying scope: an individual model, or a set of models grouped by the team, language, or framework they employ. They can also be easily modified to comply with governmental or industry regulations; a model deployed in a regulated application will have different compliance-reporting requirements than a similar (or even the same) model deployed in an unregulated application.

MLC Processes leverage the standard elements of a Camunda BPM asset:

  • Signal events - events that initiate the process, triggered when a model is changed or based on a timer

  • Tasks - can be a user task, operator leveraging MOC functionality, etc.

  • Gateways - decision logic to control the flow based on model metadata

The full Camunda BPMN documentation is available at https://camunda.com/bpmn/reference/.

Prepare a Runtime Environment

You can use the runtime environment provided by ModelOp, or ModelOp can integrate with the runtime environment of your choice (e.g., Spark).

Once the model is deployed in the runtime, you will:

  1. Add the data access/REST endpoint

  2. Set the encoding (this can also be done by the operator)

  3. Tag the engine with the appropriate tag (this can also be done by the operator)
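Once the REST endpoint is in place, scoring a record amounts to a JSON POST against it. The sketch below builds such a request using only the Python standard library; the host, port, endpoint path (`/score`), and payload fields are all assumptions for illustration, so check your runtime's actual endpoint and input schema:

```python
# Hypothetical sketch of calling a deployed model's REST scoring endpoint.
# URL, path, and payload shape are illustrative assumptions.
import json
import urllib.request

def build_scoring_request(base_url, record):
    """Build a JSON-encoded POST request for one input record."""
    body = json.dumps(record).encode("utf-8")
    return urllib.request.Request(
        url=f"{base_url}/score",              # assumed endpoint path
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_scoring_request("http://localhost:8003", {"x1": 1.5, "x2": "a"})
# response = urllib.request.urlopen(req)     # sends the request once a
# print(json.load(response))                 # runtime is actually listening
```

Separating request construction from sending it keeps the example runnable without a live runtime; in production you would send `req` and read the scored result from the response body.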

Register the Model with the Command Center

The Command Center has to know about your model before you can manage, test, and deploy it. This is done by registering the model using the Command Center UI, the command-line interface, or a Jupyter plug-in.

For details about how to register your model, see Register a Model.

Test and deploy a model

  • Test the metrics, code execution, bias, and governance of the model

  • Deploy the model to a ModelOp runtime environment, or set up your own runtime

  • Promote the model to a production environment

Monitor and update a model

  • Monitor the deployed model

  • Modify the model when the data changes
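One common way to detect that the data has changed is a drift check comparing a feature's production distribution against its training baseline. The sketch below uses the Population Stability Index (PSI) as an illustration; this is not ModelOp's built-in monitoring, and the 0.1/0.25 thresholds are conventional rules of thumb, not product defaults:

```python
# Illustrative drift check: Population Stability Index (PSI) between a
# training baseline and a production sample of one numeric feature.
from bisect import bisect_right
import math

def psi(baseline, current, bins=10):
    """PSI between two samples (higher value = more distribution drift)."""
    lo, hi = min(baseline), max(baseline)
    edges = [lo + (hi - lo) * i / bins for i in range(1, bins)]
    def frac(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(bisect_right(edges, x), bins - 1)] += 1
        # small epsilon keeps the log defined for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    b, c = frac(baseline), frac(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [x / 100 for x in range(100)]        # uniform training distribution
shifted = [0.5 + x / 200 for x in range(100)]   # production data shifted right
print(psi(baseline, baseline) < 0.1)    # identical data: no drift
print(psi(baseline, shifted) > 0.25)    # shifted data: retraining candidate
```

A check like this can run on a schedule against each deployed model's recent inputs, with a breach of the upper threshold raising an alert that feeds the retraining activities described above.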

 
