Use Cases

This article describes how to use the ModelOp UI to add and manage Use Cases, which are the central focus area within ModelOp Center to enforce AI Governance across the enterprise.


Add a Use Case

1. Select “Inventory” from the main menu.

2. Click “Add Use Case” from the top-right.

3. Fill in the requisite information, noting that required fields are marked with an asterisk.

4. Click the “Next” button.

 

5. If your organization has specified additional fields that need to be collected for a Use Case, an additional set of inputs will be requested.

  1. Note that these are optional and can be filled in later.

6. Click the “Next” button.

7. Review the Submission details.

8. Once satisfied, click the “Submit” button.

9. Upon successful submission, a dialog will appear asking if you would like to add an implementation as well.

  1. Choose Yes if you already have a model that you want to add as an implementation to the Use Case.

10. Congratulations, you’ve added your first Use Case!

Manage a Use Case

The Use Case page is the central focus for Model Owners, Governance Officers, and other users who oversee a given Use Case from a governance perspective. The subsequent sections walk through the core areas of the Use Case page.

Open Items

The Open Items section contains the list of all risks, issues, and/or tasks that are applicable to the Use Case. This can include items related to the Use Case itself or to any Implementation(s) for the Use Case. These are the primary items requiring attention from Model Owners, Governance Officers, and other related users. A user may click to see more details of a given open item, resolve the open item, or add a new risk related to the Use Case.

Note that the open items can be ModelOp-managed risks or items from external systems such as Jira. If it is an external system, the link to the specific ticket is provided.

Governance Score

The ModelOp Governance Score is a standardized metric that measures adherence to AI governance policies for all AI initiatives, regardless of whether an organization is using generative AI, in-house models, third-party vendor solutions, or embedded AI systems. The Governance Score works across all use cases, implementations, and snapshots (versions of a given implementation), and incorporates the following elements:

  • Information/Metadata: collection of all required information and metadata related to the AI use case or implementation

  • Assets: source code, binary artifacts, configurations, execution details, etc.

  • Evidence: continuous collection of evidence (tests, job completion, documentation, reports, etc.)

  • Other Controls: attestations, approvals, change controls, process controls, data controls

Governance Score Calculation Details

The Governance Score is automatically calculated for a given use case, its implementation(s), and the relevant snapshot(s) based on the following criteria:

  • ModelOp calculates an individual governance score for each implementation, snapshot, and the use case, respectively

  • For Production models, the governance scores for the use case, implementation, and snapshot are rolled up into an aggregate governance score. The aggregate is based on a straight linear completion of the requisite controls in the Governance Score (see the illustrative sketch after this list).

  • To see the details of which items in the Governance Score passed and which ones remain (“failed”), click on the “see all” link or click on the specific “passed” or “failed” portion of the donut chart.
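
As a rough illustration of the straight linear completion described above, the sketch below computes a score as the percentage of requisite controls that have been satisfied, both per level and in aggregate. This is an explanatory approximation only; the control names, groupings, and rollup logic are hypothetical and not ModelOp’s published formula.

  # Rough sketch of a "straight linear completion" governance score.
  # Control names and groupings are hypothetical examples, not ModelOp's schema.
  def governance_score(controls: dict) -> float:
      """Return the percentage of requisite controls that are satisfied."""
      if not controls:
          return 0.0
      passed = sum(1 for satisfied in controls.values() if satisfied)
      return round(100.0 * passed / len(controls), 1)

  use_case_controls = {"metadata_complete": True, "risk_tier_assigned": True, "approval_recorded": False}
  snapshot_controls = {"assets_registered": True, "tests_attached": True, "documentation_attached": False}

  # Individual scores per level, plus a simple aggregate across all controls.
  print(governance_score(use_case_controls))                            # 66.7
  print(governance_score(snapshot_controls))                            # 66.7
  print(governance_score({**use_case_controls, **snapshot_controls}))   # 66.7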

 

 

Metrics

The metrics section provides insight into the health and safety of a given Use Case by displaying metrics about the Use Case over time. Using this component, a user can:

  1. Understand the current state of health and performance

  2. Visualize where the use case may be deviating from expectations

  3. Compare metrics across snapshots (versions)

  4. Drill into specific dates where the metrics show concern

Metric Types

ModelOp Center offers a variety of metrics spanning traditional regression, classification, and other machine-learning models, as well as NLP and LLM models:

  • For traditional/ML models (regressions, classifiers, etc.), metrics include: Performance, Stability, Fairness, Drift, Normality, Linearity, Autocorrelation, and several others.

  • For NLP, metrics include:

    • PII Detection

    • Sentiment Analysis

    • Top Words by Parts of Speech

    • SBERT Similarity

  • For LLM, metrics include:

    • Prompt File Validation

    • PII Detection

    • Sentiment Analysis

    • Top Words by Parts of Speech

    • Fact Checking

    • Accuracy Assessment

    • SBERT Similarity

    • Rails Compliance Validation

    • Bias Detection in Responses

See the related documentation for more details on these metrics and how to run testing and monitoring for your implementations.

 

Note that:

  • This section contains metrics that have already been computed for a given snapshot or multiple snapshots

  • The date range is based on the “firstPredictionDate” and “lastPredictionDate” that are contained in the model test result.

    • To populate metrics in this section, please include these fields in the model test result

    • Example within a model test result: { "firstPredictionDate": "2023-02-27T20:10:20", "lastPredictionDate": "2023-03-04T20:10:20"}

  • The metrics contained in this section require a datetime in the model test result in order for the ModelOp UI to graph the metrics over time; a minimal sketch of the expected structure is shown after this list.

  • To refresh the metrics or compute different metrics, go to the specific snapshot of a given implementation. See the related documentation for more details on the metrics and how to run testing and monitoring for your implementations.
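
For reference, here is a minimal sketch of a model test result payload that would allow the UI to graph metrics over time. The firstPredictionDate and lastPredictionDate values are taken from the example above; the metric fields alongside them are hypothetical placeholders for whatever your tests actually compute.

  import json

  # Minimal sketch of a model test result. Only firstPredictionDate and
  # lastPredictionDate come from the documentation above; the metric values
  # are hypothetical placeholders.
  test_result = {
      "firstPredictionDate": "2023-02-27T20:10:20",
      "lastPredictionDate": "2023-03-04T20:10:20",
      "auc": 0.87,               # hypothetical performance metric
      "data_drift_score": 0.12,  # hypothetical drift metric
  }

  print(json.dumps(test_result, indent=2))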

Overview

The Overview section captures the core inventory information and metadata about the Use Case. There are three sections:

  1. Basic Information: this includes the standard ModelOp fields about a use case, such as: Name of the Use Case, Description, Owning Group, Risk Tier, Organization, etc.

  2. Additional Information: this section contains the Customer-specific information that the Company has determined should be collected for a given Use Case. These custom forms are configured by an Administrator and are automatically applied to all Use Cases. Note that this section supports a variety of field types: text inputs, drop-downs, radio buttons, etc., all of which are configurable by an Administrator.

  3. Detailed Metadata: this advanced section contains any technical or other metadata, typically Customer-specific metadata beyond what is in the Additional Information custom forms (an illustrative sketch of all three sections follows).

 

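To make the three Overview sections concrete, the sketch below shows one way a Use Case’s inventory record could be organized. The Basic Information field names reflect the standard fields listed above; everything under Additional Information and Detailed Metadata is a hypothetical example of what an Administrator might configure.

  # Illustrative sketch of a Use Case inventory record, grouped by the three
  # Overview sections. Custom fields are hypothetical examples only.
  use_case_overview = {
      "basic_information": {
          "name": "Credit Risk Scoring",
          "description": "Scores retail loan applications for default risk.",
          "owning_group": "Consumer Lending Analytics",
          "risk_tier": "Tier 1",
          "organization": "Retail Banking",
      },
      "additional_information": {   # custom form fields defined by an Administrator
          "regulatory_scope": "Internal model risk policy",
          "uses_generative_ai": False,
      },
      "detailed_metadata": {        # free-form technical metadata
          "source_system": "model-registry-prod",
      },
  }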

Implementations

The Implementations section contains all of the implementation(s) that are currently associated with this Use Case. Again, the implementation (model) is the technical approach to address the Use Case, and thus there can be multiple models that solve the same business problem (Use Case). Therefore, ModelOp helps track all of the various model implementations that have been applied to a Use Case.

View an Implementation/Model

To see more details of the Implementations or Snapshots, simply click on the item:

Documentation

The Documentation section contains the listing of all documents related to a Use Case, which includes all documents attached to the Use Case, any implementation, and any snapshot associated with the Use Case.

To view Documentation Details:

Click on a given document item in the list to view more details of the Document:

In this view, the user may change the role of the document, or open the link to the document itself.

Approvals

The Approvals section contains the listing of all Approvals related to a Use Case, which includes all approvals on the Use Case, any implementation, and any snapshot associated with the Use Case. Each approval item contains details of the approval: Name/Description, Status, and links to the Approval Item (typically an external ticketing system such as Jira or ServiceNow).

Notifications

The Notifications section contains the listing of all Notifications related to a Use Case, which includes all Notifications on the Use Case, any implementation, and any snapshot associated with the Use Case. Each Notification item contains details of the notification: Name/Description, Status, and links to the Notification details.

Reporting

While there are multiple mechanisms to create reports in ModelOp Center, for a given Use Case, ModelOp leverages the Model Card approach to create a quick summary of the Use Case. The Model Card is an industry-standard format, created by Google and more recently leveraged heavily for GenAI models. ModelOp Center uses a Model Card template and automatically populates it with the relevant Use Case details, test results, and other metadata that are currently known about the Use Case. ModelOp Center can generate Model Cards automatically through MLCs (Model Life Cycle processes), or a user may choose to generate a Model Card manually via the “Generate Model Card” button.

ModelOp Center provides a Model Card template that follows the Hugging Face model card standard, extended to include the Use Case’s additional metadata, test results, documentation, and assets that are managed by ModelOp Center.
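
As a loose illustration of what the generated card covers, the sketch below outlines a card structure that combines common Hugging Face-style model card sections with the ModelOp extensions described above (Use Case metadata, test results, documentation, and assets). The section names and values are hypothetical; the actual template shipped with ModelOp Center may differ.

  # Hypothetical outline of a generated Model Card; section names and values
  # are illustrative, not the exact ModelOp Center template.
  model_card = {
      # Sections commonly found in Hugging Face-style model cards
      "model_details": {"name": "Credit Risk Scoring", "owner": "Consumer Lending Analytics"},
      "intended_use": "Scores retail loan applications for default risk.",
      "evaluation": {"auc": 0.87},                    # e.g., pulled from test results
      # Extensions populated by ModelOp Center, as described above
      "use_case_metadata": {"risk_tier": "Tier 1"},
      "test_results": ["stability_test_2023-03-04"],
      "documentation": ["model_whitepaper.pdf"],
      "assets": ["train.py", "scoring_model.pkl"],
  }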

 
