This article describes how to use ModelOp Center as the central repository for governing models, including how ModelOp Center provides a standard representation of a model regardless of the model factory from which it came or the infrastructure on which it will run.
Standard Model Definition
ModelOp Center provides a robust, extensible definition of a model that enables consistent deployment, monitoring, and governance of all models across the enterprise.
Elements of the Standard Model Definition
The ModelOp Standard Model Definition includes all of the metadata, technical model details, version information, model life cycles (MLCs), and test results related to a given model. This information is listed on the Model Details page within ModelOp Center:
“Edit Metadata” section: Displays the name of the model, the description, and any tags applied to the model.
Custom Metadata: Note that custom metadata can be added to a given model. Any metadata that is valid JSON can be added or updated throughout the life cycle of the model, and it will be displayed on the Model Details page. See the “Model Governance: Model Metadata” reference document for more details on adding or modifying custom metadata.
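As an illustration, custom metadata can be any valid JSON object. The field names below are hypothetical, chosen only to sketch what model-level metadata might look like:

```python
import json

# Hypothetical custom metadata for a model; any valid JSON object is accepted.
custom_metadata = {
    "business_unit": "Consumer Lending",
    "model_risk_tier": 2,
    "regulatory_frameworks": ["SR 11-7", "GDPR"],
}

# Confirm the metadata round-trips as JSON before attaching it to the model.
serialized = json.dumps(custom_metadata)
assert json.loads(serialized) == custom_metadata
```

Because the metadata is free-form JSON, teams can extend it over the model's life cycle without changing the underlying model definition.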
“Versions” section: Lists all versions of the model, their last modification date, current deployment status, related tests, and a URL to view the Version details
“Edit Functions” section: Defines the entry points into the model for initialization, training, metrics testing, and scoring. For example, the Init Function is invoked upon deployment of a model and the Score Function is invoked when scoring.
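The entry points above can be sketched in a Python model source file. The `# modelop.<name>` smart comments reflect ModelOp's convention for marking entry-point functions; the function bodies here are placeholder logic, not a real model:

```python
# Minimal sketch of a Python model source with entry-point functions.
# The "# modelop.<name>" smart comments mark each function's role.

model = None

# modelop.init
def init():
    """Invoked once when the model is deployed; load artifacts here."""
    global model
    model = {"threshold": 0.5}  # placeholder for loading real weights/binaries

# modelop.score
def score(record):
    """Invoked for each input record; returns the model's output."""
    return {"prediction": record.get("value", 0) > model["threshold"]}

# modelop.metrics
def metrics(data):
    """Invoked by test/monitoring jobs; returns computed metrics."""
    return {"record_count": len(data)}
```

Mapping each function to a role in this way lets the runtime know exactly what to call at deployment time versus scoring time.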
Provides details of all the source code assets, typically stored in a remote git repository.
The “Source” tab provides a view into the actual source code asset:
The “Metadata” tab lists the details of the source code management system backing the source code assets, including the remote repository URL and branch, the last commit ID, and the repository type:
Defines the input and output schemas to which the input data and output scores must adhere as part of model scoring. The schemas use the well-adopted Avro standard to enable a contract between the data ingress / egress and the model code.
For more information on creating input and output schemas, see the Schema Management page.
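To make the contract concrete, the sketch below shows what an Avro record schema for a model's input might look like (the field names are illustrative, not from any real model); Python's standard json module is used only to show that the schema is plain JSON:

```python
import json

# Illustrative Avro record schema for a model's input data.
input_schema = {
    "type": "record",
    "name": "loan_application",
    "fields": [
        {"name": "applicant_income", "type": "double"},
        {"name": "loan_amount", "type": "double"},
    ],
}

# An input record must supply every field declared in the schema.
record = {"applicant_income": 52000.0, "loan_amount": 15000.0}
missing = [f["name"] for f in input_schema["fields"] if f["name"] not in record]
assert not missing, f"record is missing fields: {missing}"

print(json.dumps(input_schema, indent=2))
```

Because both the data pipeline and the model code validate against the same schema, mismatches surface at the contract boundary rather than deep inside scoring logic.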
Lists the dependencies used by the model, including both system-level packages and framework-specific libraries.
Note that the currently installed libraries are captured via the ModelOp Center Jupyter plugin.
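As a rough sketch of what dependency capture entails, the snippet below enumerates the installed Python distributions using the standard library's importlib.metadata; it illustrates the idea only and is not the ModelOp Jupyter plugin's implementation:

```python
from importlib import metadata

# Enumerate installed distributions and their versions, similar in spirit
# to the requirements.txt captured alongside a model.
dependencies = sorted(
    f"{dist.metadata['Name']}=={dist.version}"
    for dist in metadata.distributions()
    if dist.metadata["Name"]  # skip distributions with missing metadata
)

# Show the first few pinned dependencies.
for line in dependencies[:5]:
    print(line)
```

Pinning exact versions like this is what makes a model's runtime environment reproducible across deployments.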
Attachments are other model artifacts used during the life cycle of a model, including items such as training model coefficients/binaries, documents, decision tables, test data references, or other items.
For each attachment, an “Asset Role” can be assigned (e.g. weights file, test data, readme), which can be leveraged to enable seamless usage of the attachment within a Model Life Cycle (MLC).
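The following is a minimal sketch of how an asset role can drive artifact selection in an MLC; the attachment records and role names are hypothetical, mirroring the examples above:

```python
# Hypothetical attachment records with assigned asset roles, illustrating
# how a role lets a Model Life Cycle (MLC) pick the right artifact.
attachments = [
    {"filename": "model_weights.pkl", "asset_role": "weights file"},
    {"filename": "test_data.csv", "asset_role": "test data"},
    {"filename": "README.md", "asset_role": "readme"},
]

def find_by_role(attachments, role):
    """Return the attachments matching a given asset role."""
    return [a for a in attachments if a["asset_role"] == role]

# An MLC step that needs test data can look it up by role rather than filename.
weights = find_by_role(attachments, "weights file")
```

Selecting artifacts by role rather than by filename keeps the life cycle automation stable even when teams name their files differently.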
While ModelOp Center supports almost any model language, framework, and overall model factory, below is a sampling of some of the more common ones supported in ModelOp Center. Each of these is encoded in ModelOp Center’s standard model definition.