Example Workflows - Basic Deployment

This article describes how ModelOp Center’s MLCs can be used to orchestrate and automate a basic deployment, including running a back test and a batch job.


Deploy with run test and Jira MLC Process

Deploy with run test and Jira is an MLC Process that incorporates several of the patterns described earlier into a single process for managing the creation of a model or changes to an existing model.

 

  1. Model Submitted - a deployable model object has been created: a point-in-time snapshot of the model, typically created with the goal of moving the model into production. Clicking “Create New Snapshot” on the Model Details page triggers the start of this MLC process.

  2. Training - based on a metadata flag, a Training Job can be initiated to train the model. A service task automatically polls for the Training Job to finish before proceeding.

  3. Testing - based on a metadata flag “run_test” (read from a snapshot-level tag), an automated, reproducible Metrics Job executes, and the results are persisted in ModelOp Center. If the test process fails, a Jira ticket is created with the failure information; when that ticket is moved to Done, another test attempt is made.

  4. Approval Based on Test Results - based on a metadata flag “jira” (read from a snapshot-level tag), the previous test results are analyzed if a DMN file is associated with the model. The model’s details, including its core information, the changes to the model, and the test results, are passed on to the reviewer in the Jira ticket.

  5. Add Missing Monitors - before the model is deployed, monitors are added to the snapshot based on the model’s methodology. If the methodology is Regression, the OOTB monitors “Performance Monitor: Regression”, “Data Drift Monitor: Comprehensive Analysis”, “Concept Drift Monitor: Comprehensive Analysis”, and “Stability Monitor: PSI/CSI” are added to the snapshot; if the methodology is Classification, the same monitors are added with “Performance Monitor: Classification” in place of the Regression performance monitor (see the monitor-selection sketch after this list).

  6. Model Deployment - the MLC process receives a list of ModelOp Runtimes. If the matching runtime has no endpoints defined, the model is deployed as batch, leaving it ready for scoring on the matching runtime at the time of batch execution; otherwise, it is deployed as an online deployment (see the deployment-mode sketch after this list).

  7. Error Handling - when a model is rejected, errors occur while running the tests, or the model fails to deploy, the process creates Jira tasks so that reviewers can examine the reasons for failure and take the appropriate actions.

    1. A rejection Jira ticket is created with the details for the reviewer if:

      1. We intended to run a Metrics Test Job (step 3) but the model is missing test data

      2. The job execution failed

      3. The analyzed test results didn’t pass the DMN criteria

      4. The generated Jira review was rejected

    2. An error Jira ticket is created if other general errors occur during the deployment process.

  8. Update Expiration Date - when the Jira ticket created for the test results is moved to “Done”, the expiration date on the snapshot is updated to the value of the ticket’s Expiration Date field. If the Expiration Date field is not present in the Jira project, the snapshot’s expiration date is set to one year out (see the expiration sketch after this list).

  9. Add Schedule - after the model is deployed, any schedules present on the previously deployed snapshot of the same model are copied to the newly deployed snapshot.
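
For illustration, the monitor selection in step 5 amounts to a lookup from methodology to OOTB monitor names. Below is a minimal Python sketch; the snapshot dictionary shape is an assumption for illustration, not the ModelOp Center schema.

    # OOTB monitors shared by both methodologies (step 5).
    SHARED_MONITORS = [
        "Data Drift Monitor: Comprehensive Analysis",
        "Concept Drift Monitor: Comprehensive Analysis",
        "Stability Monitor: PSI/CSI",
    ]

    # The only difference between methodologies is the performance monitor.
    MONITORS_BY_METHODOLOGY = {
        "Regression": ["Performance Monitor: Regression", *SHARED_MONITORS],
        "Classification": ["Performance Monitor: Classification", *SHARED_MONITORS],
    }

    def add_missing_monitors(snapshot: dict, methodology: str) -> None:
        # Attach any OOTB monitor the snapshot does not already carry.
        existing = snapshot.setdefault("monitors", [])
        for monitor in MONITORS_BY_METHODOLOGY.get(methodology, []):
            if monitor not in existing:
                existing.append(monitor)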
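
Likewise, the batch-versus-online decision in step 6 hinges only on whether the matching runtime defines any endpoints. A rough sketch under that assumption, with an illustrative runtime record:

    def choose_deployment_mode(runtime: dict) -> str:
        # Step 6: a matching runtime with no endpoints defined receives a
        # batch deployment; one with endpoints receives an online deployment.
        return "batch" if not runtime.get("endpoints") else "online"

    assert choose_deployment_mode({"name": "runtime-1", "endpoints": []}) == "batch"
    assert choose_deployment_mode({"name": "runtime-2", "endpoints": ["/score"]}) == "online"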
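
The expiration update in step 8 reduces to a simple fallback. A minimal sketch assuming the Jira Expiration Date field arrives as an ISO date string and that “one year out” means one year (365 days) from today:

    from datetime import date, timedelta

    def updated_expiration(jira_fields: dict) -> date:
        # Step 8: prefer the ticket's Expiration Date field when present;
        # otherwise fall back to one year from today.
        value = jira_fields.get("Expiration Date")
        if value:
            return date.fromisoformat(value)
        return date.today() + timedelta(days=365)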

Run Back Test MLC Process

The Run Back Test MLC Process is a simple monitor that runs a test against a new set of labeled data for a given model.

 

  1. Start Event - a triggered signal event initiates the monitor. This signal (com.modelop.mlc.definitions.Signals_Run_Back_Test_Jira / com.modelop.mlc.definitions.Signals_Run_Back_Test_ServiceNow) can be triggered through a REST API call that provides the variables used during the process (see the REST sketch after this list).

  2. Get Model - based on the MODEL_ID signal variable, the process fetches the snapshot.

  3. Get Data - based on the variables provided with the signal, the job’s input and output are decided. If the signal has INPUT_FILE, the process uses it as the input to the job; otherwise it finds an asset with the TEST_DATA role on the model. Similarly, if OUTPUT_FILE is provided with the signal, it is used for storing the job’s output; otherwise an embedded output file is created and used for the job.

  4. Run and Analyze Test - runs a Metrics Test batch job to evaluate the model with the new data. If the model has an associated DMN file, it is used to determine the success criteria for the test results.

  5. Test Passed - generates a notification stating that the test passed.

  6. Test Failed - if the test fails, a Jira/ServiceNow ticket is created. The model’s details, including its core information, the changes to the model, and the failing test results, are passed on to the reviewer in the Jira/ServiceNow ticket.

  7. Error Handling - the following scenarios apply if an error or exception occurs during the process:

    a. If an exception occurs while running a test job, a Jira/ServiceNow review ticket is created and all of the exception details are passed on to the reviewer.

    b. If any other error or exception occurs during the process, a notification with the failure reason is generated to notify the user.
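
As a rough illustration of steps 1 and 3, the back-test signal can be raised over REST with the process variables in the request body. The host and route below are placeholders rather than the documented ModelOp Center endpoint, and the file paths are examples only:

    import requests

    # Placeholder host and route: substitute the signal-trigger endpoint
    # documented for your ModelOp Center installation.
    MOC_SIGNAL_URL = "https://modelop-center.example.com/mlc/signals"

    payload = {
        "signal": "com.modelop.mlc.definitions.Signals_Run_Back_Test_Jira",
        "variables": {
            "MODEL_ID": "<model-uuid>",                # snapshot to test (step 2)
            "INPUT_FILE": "s3://bucket/labeled.json",  # optional; else the TEST_DATA asset is used (step 3)
            "OUTPUT_FILE": "s3://bucket/results.json", # optional; else an embedded output file is created (step 3)
        },
    }
    response = requests.post(MOC_SIGNAL_URL, json=payload, timeout=30)
    response.raise_for_status()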

For more details on how to run this MLC, see the article on triggering metrics tests.

Run Batch Model Job MLC Process

Run Batch Model Job is an MLC process that triggers the execution of a scoring job on a batch-deployed model. For this MLC to pick up the model correctly, the model must have been deployed as batch.

 

  1. Start Event - a triggered signal event initiates the execution. This signal (com.modelop.mlc.definitions.Signals_DEPLOYED_BATCH_JOB) should contain a variable with the TAG of the model (the model service tag) to run (see the REST sketch after this list).

  2. Get Deployed Model - based on the tag given in the input signal, the process finds the most recent batch-deployed model in the “deployed” (active) state, using the MODEL_STAGE variable (if provided) to match the deployment target of this execution.

  3. Set Inputs and Outputs in Order - creates the input and output parameters from the provided signal variables.

  4. Get Compatible Runtime - finds the matching target runtime to run this scoring job.

  5. Run Job - runs the Scoring batch job to obtain the model’s inferences on the given data.

  6. Error Handling - if certain conditions are not met, an error is raised and a Jira notification is created.
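
Triggering this MLC over REST follows the same pattern as the back-test sketch above; the endpoint is again a placeholder and the variable values are examples:

    import requests

    # Same placeholder-endpoint caveat as in the back-test sketch.
    MOC_SIGNAL_URL = "https://modelop-center.example.com/mlc/signals"

    payload = {
        "signal": "com.modelop.mlc.definitions.Signals_DEPLOYED_BATCH_JOB",
        "variables": {
            "TAG": "credit-risk-model",   # model service tag (step 1)
            "MODEL_STAGE": "production",  # optional deployment-target match (step 2)
        },
    }
    requests.post(MOC_SIGNAL_URL, json=payload, timeout=30).raise_for_status()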

 

Next Article: Example Workflows - Annual Review Process >