ModelOp Center Terminology

This article lists common ModelOps terminology used across the industry and within the ModelOp Center software.


List of Terminology

ModelOp Center is the leading enterprise-grade ModelOps software, helping large companies organize their enterprise AI efforts.

Abstraction

The art of replacing specific details about a model with generic ones.

Artificial intelligence (AI)

A computer engineering discipline using mathematical or logic-based techniques to uncover, capture, or code knowledge and sophisticated techniques to arrive at inferences or predictions to solve business problems.


Artifact

Any individual component that is used and required during a model’s life cycle, such as model source code, schemas, dependencies, serialized objects, training artifacts, etc.

Associated Model

A model that is linked to a given base (reference) model. An associated model may be a particular monitoring model, or it could be another model as part of an ensemble use case.

Data drift

The evolution of data over time, potentially introducing previously unseen variety and/or new categories of data that deviate from a baseline, which is often the training data set.
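
As an illustration (not part of ModelOp Center's API), a drift check can compare live data against a training baseline. The statistic and threshold below are arbitrary choices for the sketch; production monitors typically use measures such as PSI or a Kolmogorov–Smirnov test:

```python
from statistics import mean, stdev

def detect_drift(baseline, current, threshold=2.0):
    """Flag drift when the current mean moves more than `threshold`
    baseline standard deviations away from the baseline mean.
    A deliberately simple stand-in for real drift metrics."""
    base_mean = mean(baseline)
    base_std = stdev(baseline)
    shift = abs(mean(current) - base_mean)
    return shift > threshold * base_std

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
stable   = [10.1, 9.9, 10.4, 10.0]
drifted  = [14.0, 15.2, 14.8, 15.5]

print(detect_drift(baseline, stable))   # expected False
print(detect_drift(baseline, drifted))  # expected True
```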

Deployment (aka Productionization or Operationalization)

The process of making a model available for use by the business.

Enterprise AI

Enterprise AI encompasses the end-to-end business processes by which organizations incorporate AI into 24x7 business functions that are accountable, manageable and governable at enterprise scale.


Governance

The management and mitigation of model risk to provide full transparency and auditability of all models across the enterprise.


Inference

Descriptions of the relationship between the independent variables and the outcomes in a data set.


Interpretability

The ability of a human to retrace how a model generates its inferences or predictions.


Lineage

All human and system interactions (code changes, testing, promotions, approvals, etc.) that have occurred throughout a model’s entire life cycle.

Machine learning (ML)

A subset of AI that uses algorithms to parse data, capture knowledge, and develop predictions or determinations. ML models are first trained on data sets; then, once in production, use a closed-loop process to “learn” from experience and improve the accuracy of their predictions or determinations. Some ML models are both complex and opaque, making it difficult to explain how the models arrive at specific predictions or determinations.


Model

A set of code that represents functions, actions, and predictions important to the business.
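
In this sense a model can be as small as a single scoring function. The rule, weights, and field names below are hypothetical, purely to illustrate "code that produces a business-relevant prediction":

```python
# A minimal sketch of a model: code mapping an input record to a
# business decision. The risk formula and field names are invented.
def score(record):
    """Return an approval decision for a hypothetical credit record."""
    risk = 0.4 * record["debt_ratio"] + 0.6 * (1 - record["payment_history"])
    return {"approved": risk < 0.3, "risk_score": round(risk, 3)}

print(score({"debt_ratio": 0.2, "payment_history": 0.9}))  # low risk -> approved
```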

Model debt

The implied cost of undeployed models and/or models deployed without proper monitoring and governance.

Model decay

A change in model performance that makes it less accurate in its inferences or predictions.

Model life cycle (MLC)

A model's journey from creation through testing, deployment, monitoring, iteration, and ultimately retirement.


ModelOps

The key strategic capability for operationalizing enterprise AI. ModelOps encompasses the systems and processes that streamline the orchestration, monitoring, governance, and continuous improvement of data science models, but its fundamental role is to improve business results.


Monitoring

The act of observing statistical, technical, and ethical aspects of a model's performance in operation.
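
A sketch of the statistical side of monitoring, assuming a hypothetical accuracy threshold: compare recent predictions with observed outcomes and raise an alert when performance falls below it.

```python
# Illustrative monitoring check; the threshold and data are invented.
def monitor(predictions, actuals, min_accuracy=0.8):
    """Compare predictions with ground truth and flag low accuracy."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(actuals)
    return {"accuracy": accuracy, "alert": accuracy < min_accuracy}

print(monitor([1, 0, 1, 1, 0], [1, 0, 0, 1, 0]))  # 4/5 correct, no alert
```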


Prediction

Descriptions of the relationship between the independent variables and the outcomes in a data set, which are used to estimate outcomes for new data points.

Reference (Base) Model

Typically the primary (base) model that provides the business decisioning. A reference model may have one or more associated models that refer to it.


Schema

The definition of a model’s expected data inputs or outputs expressed in a standard way.
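
For illustration, an input schema expressed in Avro record notation (one common standard for this purpose); the record and field names here are hypothetical:

```python
import json

# An illustrative Avro-style input schema. Expressing expected inputs in
# a standard notation lets tooling validate incoming data for a model.
input_schema = {
    "type": "record",
    "name": "credit_input",
    "fields": [
        {"name": "debt_ratio", "type": "double"},
        {"name": "payment_history", "type": "double"},
    ],
}

print(json.dumps(input_schema, indent=2))
```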

Shadow AI

The implied cost and risk of deploying AI initiatives and models in production with no accountability to IT or governance organizations. Shadow AI is expected to be among the biggest risks to effective and ethical decision-making.


Training

Tuning model parameters to optimize performance on a particular data set, with the typical output being a trained model artifact.
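
A minimal sketch of that flow, assuming a toy one-parameter "model": fit the parameter to a data set, then serialize the result as the trained-model artifact (the file name is illustrative).

```python
import pickle
from statistics import mean

# Toy "training": tune a single parameter against a data set, then
# serialize the result as a trained-model artifact.
def train(values):
    return {"baseline_mean": mean(values)}  # the "tuned parameter"

model = train([3.0, 5.0, 7.0])
with open("model_artifact.pkl", "wb") as f:
    pickle.dump(model, f)

with open("model_artifact.pkl", "rb") as f:
    print(pickle.load(f))  # expected {'baseline_mean': 5.0}
```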


Model-Specific Metadata

For a given Model registered in ModelOp Center, ModelOp Center stores a substantial amount of metadata about the model. The details can be found in this article.

Specifically within the ModelOp Center model metadata, a few elements do not have industry-standard terminology. The purpose of these elements within the ModelOp Center platform is therefore defined here:

The user-supplied name of the model


The user-supplied description of the model


The LDAP group to which the model is registered


The name of the “organization” or “team” that owns the model.


The “class” or “type” of model, such as “credit”, “fraud”, “marketing”, etc.


The methodology used for the model (e.g. OLS, WLS, ARIMA, CNN, etc.)


The risk classification of the model, such as “high risk”, “medium risk”, or “low risk”.
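
Taken together, these elements might appear in a model's metadata record roughly as follows. The JSON keys and values are illustrative only, not ModelOp Center's actual field names:

```python
import json

# Hypothetical metadata record covering the elements described above.
model_metadata = {
    "name": "consumer_credit_default",              # user-supplied name
    "description": "Predicts probability of default",  # user-supplied description
    "group": "cn=credit-risk,ou=groups",            # LDAP group (illustrative DN)
    "organization": "Retail Banking Analytics",     # owning team
    "model_type": "credit",                         # class/type of model
    "methodology": "CNN",                           # modeling methodology
    "risk_classification": "high risk",             # risk tier
}

print(json.dumps(model_metadata, indent=2))
```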



Next Article: Getting Oriented with ModelOp Center's Command Center