This article lists the common ModelOps terminology that is used within the industry and ModelOp Center software.

List of Terminology

ModelOp Center is the only enterprise-grade ModelOps software, helping large companies organize their enterprise AI efforts.


Abstraction

The practice of replacing specific details about a model with generic ones.

Artificial intelligence (AI)

A computer engineering discipline using mathematical or logic-based techniques to uncover, capture, or code knowledge and sophisticated techniques to arrive at inferences or predictions to solve business problems.

Asset

Any individual component that is used and required during model deployment, such as model source code, schemas, dependencies, serialized objects, training artifacts, etc.

Data drift

The evolution of data over time, potentially introducing previously unseen variety and/or new categories of data.
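As an illustration, a crude drift signal can be computed by comparing a feature's mean in current production data against its mean in the reference (training-time) data, measured in reference standard deviations. This is a minimal plain-Python sketch, not ModelOp Center's monitoring API; production systems typically use richer statistical tests.

```python
import statistics

def mean_shift(reference, current):
    """Return the shift between two samples' means, in units of the
    reference standard deviation (a crude data-drift signal)."""
    ref_mean = statistics.mean(reference)
    ref_std = statistics.stdev(reference)
    return abs(statistics.mean(current) - ref_mean) / ref_std

# Hypothetical values for one feature: training-time data vs. new production data.
reference = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]
current = [12.0, 11.8, 12.3, 12.1, 11.9, 12.2]

DRIFT_THRESHOLD = 3.0  # flag shifts larger than 3 reference standard deviations
drifted = mean_shift(reference, current) > DRIFT_THRESHOLD
```

A shift this large would typically trigger an alert or a retraining review.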

Deployment (also known as productionization)

The process of making a model available for use by the business.

Enterprise AI

Enterprise AI encompasses the end-to-end business processes by which organizations incorporate AI into 24x7 business functions that are accountable, manageable, and governable at enterprise scale.

Governance

The management and mitigation of model risk to provide full transparency and auditability of all models across the enterprise.

Inferences

Descriptions of the relationship between the independent variables and the outcomes in a data set.

Interpretability

The ability of a human to retrace how a model generates its inferences or predictions.

Lineage

All human and system interactions (code changes, testing, promotions, approvals, etc.) that have occurred throughout a model’s entire life cycle.

Machine learning (ML)

A subset of AI that uses algorithms to parse data, capture knowledge, and develop predictions or determinations. ML models are first trained on data sets; then, once in production, use a closed-loop process to “learn” from experience and improve the accuracy of their predictions or determinations. Some ML models are both complex and opaque, making it difficult to explain how the models arrive at specific predictions or determinations.

Model

A set of code that represents functions, actions, and predictions important to the business.

Model debt

The implied cost of undeployed models and/or models deployed without proper monitoring and governance.

Model decay

A change in model performance that makes it less accurate in its inferences or predictions.

Model life cycle (MLC)

A model's journey from creation through testing, deployment, monitoring, iteration, and retirement.

ModelOps

The key strategic capability for operationalizing enterprise AI. ModelOps encompasses the systems and processes that streamline the deployment, monitoring, governance, and continuous improvement of data science models, but its fundamental role is to improve business results.

Monitoring

The act of observing statistical, technical, and ethical aspects of a model's performance in operation.
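For example, one simple statistical monitor tracks accuracy over a rolling window of recently scored records. The sketch below (the `RollingAccuracy` class is hypothetical, not part of ModelOp Center) shows the idea in plain Python:

```python
from collections import deque

class RollingAccuracy:
    """Track prediction accuracy over the most recent scored records."""

    def __init__(self, window=100):
        # deque with maxlen automatically drops the oldest outcome
        # once the window is full.
        self.outcomes = deque(maxlen=window)

    def record(self, predicted, actual):
        self.outcomes.append(predicted == actual)

    def accuracy(self):
        return sum(self.outcomes) / len(self.outcomes)

monitor = RollingAccuracy(window=4)
for predicted, actual in [(1, 1), (0, 1), (1, 1), (0, 0)]:
    monitor.record(predicted, actual)
```

When the rolling accuracy falls below an agreed threshold, the model would be flagged for investigation or retraining.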

Predictions

Descriptions of the relationship between the independent variables and the outcomes in a data set which are used to estimate outcomes for new data points.

Schema

The definition of a model’s expected data inputs or outputs expressed in a standard way.
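For instance, a model's input schema might be expressed in an Avro-style record declaration. The example below is purely illustrative (the field names and the `conforms` helper are hypothetical, and this is not ModelOp Center's exact schema format); it shows a schema held as a Python dict and a simple conformance check:

```python
# Hypothetical input schema for a loan-scoring model, in Avro-style form.
INPUT_SCHEMA = {
    "type": "record",
    "name": "LoanApplication",
    "fields": [
        {"name": "applicant_id", "type": "string"},
        {"name": "income", "type": "double"},
        {"name": "loan_amount", "type": "double"},
    ],
}

# Map schema type names to the Python types that satisfy them.
TYPE_CHECKS = {"string": str, "double": (int, float)}

def conforms(record, schema):
    """Check that a record contains every schema field with a matching type."""
    return all(
        field["name"] in record
        and isinstance(record[field["name"]], TYPE_CHECKS[field["type"]])
        for field in schema["fields"]
    )

ok = conforms(
    {"applicant_id": "A-17", "income": 52000.0, "loan_amount": 12000.0},
    INPUT_SCHEMA,
)
bad = conforms({"applicant_id": "A-18", "income": "unknown"}, INPUT_SCHEMA)
```

Declaring inputs and outputs this way lets the platform validate incoming data before it ever reaches the model.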

Shadow AI

The implied cost and risk of deploying AI initiatives and models in production with no accountability to IT or governance organizations. Shadow AI is expected to be the biggest risk to effective and ethical decision-making.

Training

Tuning model parameters to optimize performance on a particular data set, with the typical output being a trained model artifact.
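The idea can be sketched with a toy example: tune a single parameter w so that y ≈ w·x on a small data set, using gradient descent on the mean squared error. This is an illustrative sketch (the data values are made up); real training uses ML frameworks, but the principle is the same.

```python
# Toy data set, roughly following y = 2x.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]

w = 0.0           # the single model parameter being tuned
learning_rate = 0.01

for _ in range(500):
    # Gradient of mean squared error sum((w*x - y)^2)/n with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= learning_rate * grad

# The "trained model artifact" here is just the learned parameter.
trained_model = {"weight": w}
```

After training, w converges to the least-squares solution (about 1.99 for this data), and the resulting artifact is what gets versioned and deployed.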

Next Article: Getting Oriented with ModelOp Center's Command Center