Operationalizing Models: Overview

Background

AI/ML models serve as critical decisioning assets for an organization, providing scores and inferences for a variety of business applications and processes. Because modern enterprises are moving to a heterogeneous analytics ecosystem, inclusive of large batch-based processing, online microservice-based architectures, and streaming architectures, AI models need to be deployed in multiple consumption modes across multiple architectures. These consumption modes may vary throughout a model’s life cycle (e.g. deployed as batch in a QA environment, but deployed as REST in production). Additionally, a model may be leveraged by multiple business processes, in which case the same model may be deployed as batch for one business process and as streaming for an entirely different process.

Managing Production Deployments in ModelOp Center

ModelOp Center was designed to account for the unique ModelOps challenges of the modern enterprise, providing the flexibility to support a variety of production deployment patterns and IT architectures. ModelOp Center’s architecture helps enterprises future-proof their AI investments by allowing organizations to easily add new AI development technologies, data platforms, or model execution environments, while maintaining the Governance, Management, and Monitoring needed to support the SLAs of these critical AI decisioning assets.

To explain how ModelOp Center achieves this, let’s start with a few fundamental concepts of ModelOp Center as they relate to managing production deployments.

ModelOp Center Terminology

  1. Model Execution Mode: a model “deployment” may take either of two forms (illustrated in the first sketch following this list):

    • Batch (Ephemeral) Deployment: pushing a model to a Runtime to execute a Batch Job, after which the Runtime is cleared of the model and made available for other jobs.

    • Online (Persistent) Deployment: pushing a model to a Runtime that has persistent input and output endpoints for continual model scoring.

  2. Model Runtime: a model runtime (sometimes referred to as “model serving”) is an environment in which a model--or its associated model metrics--can be executed. A model is physically deployed into a runtime for execution, which, depending on the type of deployment, may be ephemeral (e.g. for a Batch Job) or long-running (e.g. REST).

    1. Examples: Python-enabled Docker container, Spark, SageMaker, Azure ML Studio, Dataiku, Domino Data Lab

  3. Runtime Endpoints: when a model is deployed for online inferences (REST, streaming), the model runtime requires endpoints to which the requesting/consuming application can connect (see the second sketch following this list).

    1. Examples:

      1. REST: models deployed as REST require endpoints to allow for synchronous REST-based communication

      2. Kafka: models may receive inference requests from a Kafka topic and may push the model output (inferences) back onto a Kafka topic. Endpoints are required to enable this integration

  4. Runtime Environments (“Stages”): within a typical enterprise, there are multiple environments to facilitate development, testing, production staging, production, production-failover, etc. Depending on the life cycle of a model, the model may be required to move through one or more of these environments as part of standard IT/QA/Compliance processes.

    1. Environment/Stage Tags: given that each enterprise has multiple execution platforms (e.g. Spark, Docker, etc.), each of which might have different runtime environments/stages, ModelOp Center uses the concept of “Environment/Stage tags.” These allow the ModelOps engineer to tag one environment for Development while another is used for UAT, SIT, or Production, etc. Through this approach, an MLC can be designed generically so that all models are automatically deployed and tested through each of the required stages in the standard IT/QA/Compliance process (a third sketch follows this list).

      1. Example: IT requires that a model be deployed and tested in a Development, then SIT, and then UAT environment before it can be deployed to production. The ModelOps engineer would therefore want to add a “DEV” tag to the Runtime in their development environment, an “SIT” tag to the Runtime in their SIT environment, a “UAT” tag to the Runtime in their UAT environment, and ultimately a “PROD” tag to the Runtime in their Prod environment. The MLC can then orchestrate the promotion of the model automatically across those stages, deploying the model to the appropriate runtime based on the Environment/Stage tag.

  5. Model Service: as mentioned above, a model may be “servicing” (providing scores/inferences) for multiple consuming business applications or processes.

    1. Model Service Tags: from a Production Model Management perspective, ModelOp Center therefore tracks these models through “Model Service” tags. A model can thus be enabled in multiple environments, potentially using different consumption modes (e.g. Batch vs. REST), while ModelOp Center still tracks and maintains exactly where each version of the model is running in production. The Model Life Cycle (MLC) uses the “Model Service” tag to determine exactly where a model should be deployed throughout its life cycle (see the final sketch following this list).

      1. Examples:

        1. 3rd Party CC Fraud Model: a fraud model has been created that detects 3rd party credit card fraud. To ensure that the MLC knows where to enable this model, a Model Service tag such as “cc-fraud” would be added to the Model upon Snapshotting.

        2. HELOC Credit Risk Model: a Credit Risk model for a home equity line of credit has been created. A “heloc-credit-line” tag should be added to this model to ensure that the MLC knows in which environments the model should be deployed throughout the model’s life cycle.
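
The two execution modes from item 1 can be made concrete with a minimal Python sketch. The URLs, payload fields, and data values below are invented for illustration and are not ModelOp Center’s actual API:

    import requests

    # Batch (Ephemeral): submit a one-time scoring job against a runtime;
    # once the job completes, the runtime is cleared for other jobs.
    # The job-submission URL and payload shape are hypothetical.
    job = {
        "model": "cc-fraud",
        "input": "s3://example-bucket/transactions.csv",
        "output": "s3://example-bucket/scores.csv",
    }
    response = requests.post("https://runtime.example.com/jobs", json=job)
    print("submitted batch job:", response.json())

    # Online (Persistent): the model stays loaded behind a long-lived
    # endpoint, and each request returns an inference synchronously.
    record = {"amount": 182.50, "merchant_category": "5732"}
    response = requests.post("https://runtime.example.com/score", json=record)
    print("score:", response.json())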
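
The endpoint types from item 3 can be sketched in the same spirit. The snippet below assumes the kafka-python client library; the topic names, broker address, and scoring URL are invented, and the placeholder “scoring” step simply emits a constant:

    import json

    import requests
    from kafka import KafkaConsumer, KafkaProducer

    # REST endpoint: synchronous request/response scoring (hypothetical URL).
    score = requests.post("https://runtime.example.com/score",
                          json={"amount": 99.0}).json()

    # Kafka endpoints: inference requests arrive on an input topic, and
    # model outputs are published to an output topic (names invented).
    consumer = KafkaConsumer("cc-fraud-input",
                             bootstrap_servers="kafka.example.com:9092",
                             value_deserializer=lambda b: json.loads(b))
    producer = KafkaProducer(bootstrap_servers="kafka.example.com:9092",
                             value_serializer=lambda v: json.dumps(v).encode())

    for request in consumer:
        result = {"id": request.value.get("id"), "score": 0.42}  # placeholder score
        producer.send("cc-fraud-output", value=result)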
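
The Environment/Stage tag mechanics from item 4 reduce to a simple lookup, sketched below. The runtime registry, tag names, and promotion function are hypothetical data structures for illustration, not the product’s internal model:

    # Hypothetical registry of runtimes, each tagged with a stage.
    runtimes = [
        {"name": "docker-dev-1",  "tags": ["DEV"]},
        {"name": "spark-sit-1",   "tags": ["SIT"]},
        {"name": "docker-uat-1",  "tags": ["UAT"]},
        {"name": "docker-prod-1", "tags": ["PROD"]},
    ]

    PROMOTION_PATH = ["DEV", "SIT", "UAT", "PROD"]

    def runtime_for_stage(stage):
        """Return the first runtime carrying the given stage tag."""
        return next(r for r in runtimes if stage in r["tags"])

    # Because the life cycle resolves runtimes by tag rather than by name,
    # the same generic MLC can promote any model through every stage.
    for stage in PROMOTION_PATH:
        target = runtime_for_stage(stage)
        print(f"deploy model to {target['name']} ({stage}) and run stage tests")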
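
Finally, the Model Service tags from item 5 can be sketched as snapshot metadata matched against deployment targets. Again, every name and structure below is invented for illustration:

    # Hypothetical snapshot metadata: the Model Service tag travels with
    # the model version from the moment it is snapshotted.
    snapshot = {
        "model": "heloc-credit-risk",
        "version": "1.4.0",
        "tags": ["heloc-credit-line"],
    }

    # Hypothetical deployment targets, keyed by the business service each
    # one backs; note the two different consumption modes.
    deployments = [
        {"service": "heloc-credit-line", "mode": "BATCH", "runtime": "spark-prod-1"},
        {"service": "heloc-credit-line", "mode": "REST",  "runtime": "docker-prod-2"},
    ]

    # The MLC enables the snapshot wherever its service tag matches, so the
    # same model version can serve Batch and REST consumers simultaneously
    # while every placement remains tracked.
    for deployment in deployments:
        if deployment["service"] in snapshot["tags"]:
            print(f"enable {snapshot['model']} v{snapshot['version']} "
                  f"as {deployment['mode']} on {deployment['runtime']}")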

Model Operationalization is automated in ModelOp Center through the use of Model Life Cycles (MLCs). ModelOp Center provides a set of MLCs out of the box, but enterprises can customize their own MLCs and upload them into ModelOp Center to orchestrate their specific pathways to production and ongoing management. The following articles provide details on how to operationalize models in both online and batch modes within ModelOp Center.

Next Article: Operationalize a Model - REST >