(v1) Deploy a Model into a ModelOp Runtime

This article describes how to prepare the ModelOp Runtime and the MLC Process to deploy a model.


Introduction

A “deployment” of a model in ModelOp Center involves pushing a model, along with all of the relevant artifacts associated with that specific version of the model, to a ModelOp Runtime. A ModelOp Runtime is the model execution context that can be used for model training, testing, and/or scoring.

A “deployment” may consist of either:

  • Batch (Ephemeral) Deployment: pushing a model to a Runtime to execute a Batch Job, after which the Runtime is cleared of the model to be made available for other jobs.

  • Persistent Deployment: pushing a model to a Runtime that has persistent input and output endpoints for continual model scoring.

The remainder of this article will focus on Persistent Deployment.

Deployments are automated in ModelOp Center by building service tasks into an MLC Process. You can configure the service tasks to automatically locate the proper ModelOp Runtime and deploy the model, whether for Batch or Persistent Deployments. A particular example of this occurs when you run a Batch Scoring Job or Batch Metrics Job. The starter MLC Process, OnJobChange, automatically finds a designated Runtime for tests, deploys the model into that test Runtime, and runs the Scoring Function against the test data. In this way, ModelOp Center abstracts the complexity of “setting a model up” (configuring endpoints, deploying to a runtime, etc.) so that the data scientist can quickly and easily run a test.

For more information about Batch Deployments see Model Batch Jobs and Tests.

Deploying a Model into a Persistent ModelOp Runtime

You can deploy a model into a runtime provided by ModelOp or you can deploy into an external runtime environment of your choice (e.g. Spark). This article focuses on deploying into the ModelOp Runtime.

Regardless of which runtime you choose, ModelOp streamlines the model deployment process once the model is ready. The typical steps include:

  1. Prepare ModelOp Runtime for a Persistent Deployment - ensure there is a Runtime with the correct dependencies, and set up the Runtime to integrate with the data pipeline.

  2. Define the MLC Process - an MLC Process automates and regulates the models registered with ModelOp Center. For more information, see Model Lifecycle Manager: Automation.

  3. Submit a Model for Deployment - this triggers the MLC Process to guide the model through the tests and approvals that ensure the quality of the predictions the model provides to the business.

1. Prepare ModelOp Runtime for a (Persistent) Deployment

A ModelOp Runtime must be configured with an input endpoint and an output endpoint. The input endpoint is the entry point for records flowing from the data pipeline to the model. The output endpoint is where the consuming data pipeline or application receives the output of model scoring. When a record arrives at the input endpoint, it is decoded based on the Encoding setting and passed to the Scoring Function. The Scoring Function’s output for that record is then sent to the output endpoint. The type of endpoint dictates the scoring method: on-demand (e.g. REST) vs. streaming (e.g. Kafka). Batch scoring is accomplished with a Batch Job, which can run in a specified Runtime to account for dependency and environment differences.
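
To make the record flow concrete, the sketch below shows what a minimal Python Scoring Function might look like. This is an illustrative assumption only: the function name, the record fields, and the generator style are placeholders rather than a required ModelOp Center signature, and the Runtime is assumed to decode each incoming record into a Python dict before invoking the function.

    # Illustrative sketch of a Scoring Function (not a required signature).
    # Assumption: the Runtime decodes each incoming record (e.g. from JSON)
    # into a dict before calling this function, and encodes whatever the
    # function yields before writing it to the output endpoint.
    def score(record):
        # "feature_1", "feature_2", and "id" are hypothetical field names.
        prediction = 0.5 * record["feature_1"] + 0.25 * record["feature_2"]
        # Emit one scored result per input record.
        yield {"id": record.get("id"), "score": prediction}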

To manually configure a Runtime:

Note: This can be automated using an MLC Process (see: Model Lifecycle Manager: Automation).

  1. Log on to the Command Center and navigate to Runtimes > Runtime Dashboard.

  2. Select the specific Runtime that you would like to configure.

  3. In the Tags field, type the name of the model you want to run in this ModelOp Runtime, press Enter, and then click Update Runtime.

  4. Click Endpoints > Add New Input Endpoint and set the following:

    1. (Optional) Name and Description

    2. Endpoint Type: select REST Endpoint, Kafka Subscriber, etc.

    3. Encoding: select JSON, CSV, Binary, etc.

    4. (Optional) Port: any valid port number from 0 to 65535

  5. Click Update Runtime.

  6. Click Endpoints > Add New Output Endpoint and set the following:

    1. (Optional) Name and Description

    2. Endpoint Type: select REST Endpoint, Kafka Subscriber, etc.

    3. Encoding: select JSON, CSV, Binary, etc.

    4. (Optional) Port: any valid port number from 0 to 65535

  7. Click Update Runtime.
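
With the input and output endpoints configured, a data pipeline or application can begin sending records. The snippet below is a hedged sketch of a client posting a single JSON record to a REST input endpoint; the host, port, path, and record fields are hypothetical placeholders that you would replace with the values configured on your Runtime.

    import requests

    # Hypothetical address of the Runtime's REST input endpoint; replace the
    # host, port, and path with the values configured on your Runtime.
    INPUT_ENDPOINT = "http://modelop-runtime.example.com:8080/score"

    # A hypothetical input record matching the model's expected schema.
    record = {"id": "txn-001", "feature_1": 3.2, "feature_2": 0.7}

    # Send the record as JSON, matching the endpoint's Encoding setting.
    response = requests.post(INPUT_ENDPOINT, json=record, timeout=30)
    response.raise_for_status()

    # For an on-demand (REST) deployment, the scored result is typically
    # available in the response body.
    print(response.json())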

2. Define the MLC Process

The MLC Process is typically set up to automate and regulate the deployment process across a team, a group of models, or any other grouping that shares the same standardized approach to ModelOps. See Model Lifecycle Manager: Automation for how to set up the MLC Process that controls the path to production.

3. Submit a Model for Deployment

These instructions describe how to manually submit a model for deployment using the Command Center. This assumes there is an MLC Process that uses Deployable Model Changed to trigger a deployment process, as described in the On Model Changed example.

Note: Typically you deploy models with delegates in an MLC Process.

  1. Click on the Models tab and select the model you want to deploy.

  2. Click Submit Model.

  3. To confirm that the model has been deployed, click Model Details and verify that the deployment appears in the list.

  4. If the deployment failed, click Tasks and Alerts to view the associated task.

Related Articles

Next Article: Model Batch Jobs and Tests >