Introduction
The three Critical Requirements for a ModelOps Tech Solution, relative to models in production, are:
Ability to deploy, monitor, and govern any model across all enterprise AI, regardless of its development tool or runtime environment
Ability to abstract the complexity of the enterprise AI stack, driving agility and scale in the enterprise’s operationalization of models in business
Ability to automate the model life cycle in the enterprise with repeatability, resilience and scale
ModelOp’s answer to these requirements is the ModelOp Command Center, where day-to-day operations, model monitoring, alert and notification response, and model retraining activities take place.
Modeling in the ModelOp Command Center takes place in three distinct phases. These phases may be carried out by different people, in different roles, in different locations. The phases are:
Preparation
Test and deploy a model
Monitor and update a model
Preparation
The Model: Metrics
The ModelOp Command Center design offers you the freedom to use the most effective model development tool for each application and use case.
Evaluating your model algorithm is an essential part of any model deployment. Data scientists choose evaluation metrics to determine the accuracy and statistical performance of the model. The choice of metric depends on the objective and the problem you are trying to solve. Some common metrics used in the ModelOp sample models and examples include the following (a short illustration appears after the list):
The F1 score
SHAP values
The ROC Curve
The AUC
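As a rough illustration of how metrics like these are typically computed, the sketch below uses scikit-learn; the arrays y_true and y_scores are placeholder data, not part of any ModelOp API.

```python
# Illustrative only: computing common evaluation metrics with scikit-learn.
# y_true and y_scores are placeholder ground-truth labels and predicted scores.
from sklearn.metrics import f1_score, roc_curve, roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.6]

# The F1 score needs hard class predictions, so threshold the scores first
y_pred = [1 if s >= 0.5 else 0 for s in y_scores]
f1 = f1_score(y_true, y_pred)

# The ROC curve and AUC operate directly on the predicted scores
fpr, tpr, thresholds = roc_curve(y_true, y_scores)
auc = roc_auc_score(y_true, y_scores)

# SHAP values are typically computed with the separate `shap` package
# (e.g. shap.Explainer) and summarize per-feature contributions to a prediction.
print({"f1_score": f1, "auc": auc})
```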
A metrics function in the model can help automate and execute back-tests of models. A metrics function can either be specified with a #modelop.metrics smart comment before the function definition or selected within the UI after the model source code is registered.
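The sketch below shows the general shape such a metrics function might take, assuming it receives a pandas DataFrame of labeled scoring results (with hypothetical label and score columns) and yields a dictionary of metric values; check the exact signature expected by your version of ModelOp Center.

```python
import pandas
from sklearn.metrics import f1_score, roc_auc_score

#modelop.metrics
def metrics(df: pandas.DataFrame):
    # Assumes df holds ground-truth labels and model scores under these
    # column names; adjust to match your model's actual output schema.
    y_true = df["label"]
    y_scores = df["score"]
    y_pred = (y_scores >= 0.5).astype(int)

    # Yield a dictionary of metric values for the metrics job to record
    yield {
        "f1_score": f1_score(y_true, y_pred),
        "auc": roc_auc_score(y_true, y_scores),
    }
```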
There are two ways to create a metrics job manually:
In the Command Center, use the “Create a New Batch Job” option. See Model Batch Jobs and Tests for details.
From the CLI. See Model Monitoring and Metrics for details.
The Model Life Cycle Process (MLC Process)
The MLC Process automates and regulates the models within the ModelOp Center. Each MLC Process is defined in the Camunda Modeler as a BPMN file. An MLC Process can apply to models of a variety of scopes: it can govern an individual model or a set of models based on the team, language, or framework they employ. MLC Processes can also be easily modified to comply with governmental or industry regulations; for example, a model deployed in a regulated application will have different requirements for compliance reporting than a similar (or even the same) model deployed in an unregulated application.
MLC Processes leverage the standard elements of a Camunda BPM asset:
Signal events - events that initiate the process, triggered when a model is changed or based on a timer
Tasks - can be a user task, an operator task leveraging MOC functionality, etc.
Gateways - decision logic that controls the flow based on model metadata
The full documentation set of Camunda is available at https://camunda.com/bpmn/reference/.
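For orientation, the fragment below is a minimal, illustrative BPMN 2.0 sketch showing how a signal start event, tasks, and a gateway fit together; the process, signal, and task names are hypothetical and are not taken from a shipped MLC Process.

```xml
<?xml version="1.0" encoding="UTF-8"?>
<bpmn:definitions xmlns:bpmn="http://www.omg.org/spec/BPMN/20100524/MODEL"
                  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                  targetNamespace="http://example.com/mlc">
  <bpmn:signal id="modelChangedSignal" name="ModelChanged"/>
  <bpmn:process id="exampleMlcProcess" isExecutable="true">
    <!-- Signal event: starts the process when a model-change signal is raised -->
    <bpmn:startEvent id="modelChanged">
      <bpmn:signalEventDefinition signalRef="modelChangedSignal"/>
    </bpmn:startEvent>
    <!-- Gateway: route the flow based on model metadata -->
    <bpmn:exclusiveGateway id="isProductionModel" default="skipApproval"/>
    <!-- Tasks: a user approval step and an automated deployment step -->
    <bpmn:userTask id="approveChange" name="Approve model change"/>
    <bpmn:serviceTask id="deployModel" name="Deploy model to tagged engine"/>
    <bpmn:endEvent id="done"/>
    <bpmn:sequenceFlow id="toGateway" sourceRef="modelChanged" targetRef="isProductionModel"/>
    <bpmn:sequenceFlow id="needsApproval" sourceRef="isProductionModel" targetRef="approveChange">
      <bpmn:conditionExpression xsi:type="bpmn:tFormalExpression">${inProduction}</bpmn:conditionExpression>
    </bpmn:sequenceFlow>
    <bpmn:sequenceFlow id="skipApproval" sourceRef="isProductionModel" targetRef="deployModel"/>
    <bpmn:sequenceFlow id="toDeploy" sourceRef="approveChange" targetRef="deployModel"/>
    <bpmn:sequenceFlow id="toDone" sourceRef="deployModel" targetRef="done"/>
  </bpmn:process>
</bpmn:definitions>
```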
Prepare a Runtime Environment
You can use the runtime environment provided by ModelOp, or ModelOp can integrate with the runtime environment of your choice (e.g., Spark).
When you prepare a runtime, you configure it to be discoverable by the MLC Process operator, to encode its messages in the proper format, and to let the MLC Process know which model is supposed to be deployed there.
You prepare the runtime by accessing the engine in the Runtimes section of the Command Center and providing the following details:
A name for the engine. You will put this name in the MLC Process service task.
Endpoint type: REST or Kafka
Encoding: Avro Binary, CSV, JSON, Binary Message Pack, or UTF-8 Binary
‘Tag’ the engine with the name of the model you will deploy in that engine.
For details about preparing a Runtime environment, see https://modelop.atlassian.net/wiki/spaces/VDP/pages/edit-v2/909082752
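As a quick summary of the details above, the sketch below collects them into a Python dictionary; the field names and values are purely illustrative and do not represent the actual ModelOp Center configuration schema or API.

```python
# Hypothetical illustration only: these fields mirror the details listed above
# and are not the actual ModelOp Center configuration schema.
engine_config = {
    "name": "churn-model-engine",   # referenced by the MLC Process service task
    "endpoint_type": "REST",        # REST or Kafka
    "encoding": "JSON",             # Avro Binary, CSV, JSON, Binary Message Pack, or UTF-8 Binary
    "tags": ["churn-model"],        # tag with the name of the model to deploy on this engine
}
```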
Register the Model with the Command Center
The Command Center has to know about your model before you can manage, test, and deploy it. This is done by registering the model.
For details about how to register your model using the Command Center, the command line interface, or a Jupyter plug-in, see Register a Model.
Test and deploy a model
Test the metrics, code executions, bias, and governance of the model
Deploy the model to a ModelOp runtime environment, or set up your own runtime
Promote the model to a production environment
Monitor and update a model
Monitor the deployed model
Modify the model when the data changes
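For intuition on what “when the data changes” can mean in practice, the sketch below applies a two-sample Kolmogorov-Smirnov test to a single numeric feature; this is a generic illustration, not the monitoring mechanism built into ModelOp Center.

```python
# Generic illustration of detecting data drift on one numeric feature;
# not the built-in ModelOp Center monitoring mechanism.
from scipy.stats import ks_2samp

def feature_drifted(training_values, production_values, alpha=0.05):
    """Return True if the production distribution differs significantly
    from the training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(training_values, production_values)
    return p_value < alpha

# A drift signal like this is the kind of event that would prompt a model
# update or retraining during the "Monitor and update a model" phase.
```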