Operational Monitoring
This article describes how to use the ModelOp Command Center to enable operational monitoring, which ensures that models are available and running within SLAs on the target runtimes.
Introduction
Operational performance monitors include:
Runtime Monitoring
Model In-Line Data Integrity Monitoring
Runtime Monitoring (ModelOp Runtime only)
To get real-time insight into how your model is performing, you can click into a detailed view of the Runtime information for the deployed model. This includes live monitors of the infrastructure, data throughput, model logs, and lineage, where available.
To see the Runtime Monitoring, navigate to the deployed model: Runtimes → Runtime Dashboard → <Runtime where your model is deployed>
The Runtime monitor displays the following information about the Runtime environment:
CPU Utilization - user CPU utilization and kernel CPU utilization
System Resource Usage - real-time memory usage
Lineage of the deployment - MLC Process metadata that details the deployment information and history
Logs - A live scroll of the model logs
Model Data Monitoring
Although its use is not required, ModelOp Center ships with its own runtime out of the box, which can validate the model's incoming and outgoing data against a defined schema. The schema describes the structure the model expects, ensuring that erroneous data is not inadvertently processed by the model, which could cause model stability errors or downtime.
Overview
ModelOp Center enforces strict typing of engine inputs and outputs. Types are declared using AVRO schemas.
When incoming data is received, the data is checked against the input schema.
When an output is produced by the model, the emitted data is checked against the output schema.
Input or output records that are rejected due to schema incompatibility appear as messages in the ModelOp runtime logs.
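The checks above can be illustrated with a small sketch. This is not ModelOp's actual validation code (the runtime performs full AVRO validation internally); it is a simplified, hand-rolled checker that only handles flat record schemas with primitive field types, to show how a record that violates the declared schema would be flagged before scoring:

```python
# Illustrative only: a minimal check of a flat record against an AVRO
# record schema. Maps a few AVRO primitive types to Python types.
AVRO_TO_PY = {"string": str, "double": float, "int": int, "boolean": bool}

def check_record(record, schema):
    """Return a list of schema violations (empty list means the record passes)."""
    errors = []
    for field in schema["fields"]:
        name, avro_type = field["name"], field["type"]
        if name not in record:
            errors.append(f"missing field: {name}")
        elif not isinstance(record[name], AVRO_TO_PY[avro_type]):
            errors.append(f"field {name}: expected {avro_type}, "
                          f"got {type(record[name]).__name__}")
    return errors

input_schema = {
    "type": "record",
    "name": "input",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "x", "type": "double"},
        {"name": "y", "type": "double"},
    ],
}

print(check_record({"name": "ok", "x": 1.5, "y": 2.0}, input_schema))  # passes: []
print(check_record({"name": "bad", "x": "oops"}, input_schema))        # two violations
```

In the real runtime, such violations surface as rejection messages in the ModelOp runtime logs rather than as a returned list.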
Examples
The following model takes in a record with three fields (name, x, and y) and returns the product of the two numbers.
# modelop.schema.0: input_schema.avsc
# modelop.schema.1: output_schema.avsc

def action(datum):
    my_name = datum['name']
    x = datum['x']
    y = datum['y']
    yield {'name': my_name, 'product': x * y}
The corresponding input and output AVRO schema are:
{
    "type": "record",
    "name": "input",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "x", "type": "double"},
        {"name": "y", "type": "double"}
    ]
}
and
{
    "type": "record",
    "name": "output",
    "fields": [
        {"name": "name", "type": "string"},
        {"name": "product", "type": "double"}
    ]
}
So, for example, this model may take as input the JSON record

{"name": "example", "x": 4.0, "y": 2.5}

and score this record to produce

{"name": "example", "product": 10.0}
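The end-to-end scoring can be sketched by calling the model's action function directly. Note that action is a generator (it uses yield), so a single emitted record is retrieved with next(); the sample record below is illustrative:

```python
# The example model from this article: multiplies the two numeric fields.
def action(datum):
    my_name = datum['name']
    x = datum['x']
    y = datum['y']
    yield {'name': my_name, 'product': x * y}

# Score one illustrative record that conforms to the input schema.
record = {"name": "example", "x": 4.0, "y": 2.5}
result = next(action(record))
print(result)  # {'name': 'example', 'product': 10.0}
```

In production, the ModelOp runtime drives this function and applies the input and output schema checks around each call; the direct invocation here is only for local experimentation.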
Note that in the model’s smart comments, the CLI commands, and the stream descriptor schema references alike, schemas are referenced by their name in Model Manage, not by the filename or any other property.
Next Article: Drift Monitoring >