Governance Score Administration
This article describes how to configure the way ModelOp Center applies Governance Scores to Use Cases, Implementations, and Snapshots.
Introduction
Executives and governance officers within an enterprise need a consistent way to ensure that all AI/ML use cases and models adhere to the governance policy, regardless of the technology used, the model architecture, or the environment in which a model runs. The ModelOp Center “Governance Score” provides this consistent, “apples to apples” comparison of adherence to the governance policy across all internally developed models, vendor models, embedded AI, and more.
Governance Score Overview
The ModelOp Governance Score is a standardized metric that measures adherence to AI governance policies for all AI initiatives, whether an organization is using generative AI, in-house models, third-party vendor models, or embedded AI systems. The Governance Score works across all use cases, implementations, and snapshots (a snapshot is a version of a given implementation) and incorporates the following elements (pictured in the sketch after this list):
Information/Metadata: collection of all required information and metadata related to the AI use case or implementation
Assets: source code, binary artifacts, configurations, execution details, etc.
Evidence: continuous collection of evidence (tests, job completion, documentation, reports, etc.)
Other Controls: attestations, approvals, change controls, process controls, data controls
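These four element categories can be pictured as a simple record attached to each use case, implementation, or snapshot. The sketch below is for intuition only; the class and field names are illustrative assumptions, not ModelOp Center's actual data model.

```python
# Illustrative only -- names and types are assumptions, not ModelOp's schema.
from dataclasses import dataclass, field

@dataclass
class GovernanceInputs:
    """The four categories of inputs that feed a Governance Score."""
    metadata: dict[str, str] = field(default_factory=dict)  # required use-case/implementation info
    assets: list[str] = field(default_factory=list)         # source code, binaries, configurations
    evidence: list[str] = field(default_factory=list)       # test results, job completions, reports
    controls: list[str] = field(default_factory=list)       # attestations, approvals, change controls
```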
Governance Score Calculation Details
The Governance Score is automatically calculated for a given use case, its implementation(s), and the relevant snapshot(s), based on the criteria defined on the Governance Score Administration page.
ModelOp Center calculates an individual governance score for the use case and for each of its implementations and snapshots
For Production models, the use case, implementation, and snapshot scores are rolled up into a single aggregate governance score, based on straight linear completion of the requisite controls (i.e., the fraction of required controls that are complete; see the sketch after this list)
To see which items in the Governance Score passed and which remain (“failed”), click the “see all” link or click the “passed” or “failed” portion of the donut chart.
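As an illustration of the “straight linear completion” roll-up described above, the following sketch treats every requisite control as pass/fail and weights all controls, and all three individual scores, equally. The function names and the equal weighting are assumptions for illustration; this is not ModelOp Center's actual implementation.

```python
# Minimal sketch of a "straight linear completion" score. Assumes each
# requisite control is pass/fail and all controls weigh equally; this is
# an illustration of the description above, not ModelOp Center's code.

def completion_score(controls: dict[str, bool]) -> float:
    """Percentage of required controls that have passed."""
    if not controls:
        return 100.0  # nothing required -> trivially complete
    passed = sum(1 for ok in controls.values() if ok)
    return 100.0 * passed / len(controls)

def aggregate_score(use_case: dict, implementation: dict, snapshot: dict) -> float:
    """Roll the three individual scores into one aggregate.

    Equal weighting of the three levels is an assumption for illustration.
    """
    scores = [completion_score(c) for c in (use_case, implementation, snapshot)]
    return sum(scores) / len(scores)

# Example: 3 of 4 use-case controls pass, 2 of 2 implementation controls
# pass, 1 of 2 snapshot controls pass -> (75 + 100 + 50) / 3 = 75.0
print(aggregate_score(
    {"metadata": True, "form": True, "approval": False, "assets": True},
    {"assets": True, "docs": True},
    {"security_approval": True, "validation_approval": False},
))
```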
Governance Completion Score Configuration
To configure the Model Governance Score:
Click on the “Scores Configuration” item in the main menu
By default, ModelOp Center ships with four Governance Score templates:
Use Cases
SageMaker models
Vendor models
Default models (all else)
Click on an existing Score template OR click the “Add Type” button on the left-hand side
Within the resulting Scoring Criteria UI, a user may configure the following (an illustrative sketch of a template follows this list):
Basic information: the required metadata fields, typically items such as “Model Methodology”
Governance Form Complete: controls whether completion of all REQUIRED fields in a given custom form is factored into the governance score
Approvals: the specific approvals that are required
Click “Add Approval”
Note that the Approval Type is required to properly distinguish, for example, a Security approval from a Validation approval
Snapshots (Implementations only)
Assets: the required assets per the governance policy. These are defined based on the Asset Role type
Documentation: the required documentation per the governance policy. These are defined based on the Documentation Role type
Snapshot Approvals: the specific approvals that are required for each Snapshot
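To make the criteria above concrete, the following is a hypothetical sketch of what a configured Score template might capture. Every key and value is illustrative only; the actual template is built in the Scores Configuration UI, and its stored representation is not documented here.

```python
# Hypothetical sketch of a Governance Score template -- all names and
# values below are illustrative assumptions, not ModelOp Center's schema.
vendor_model_template = {
    "name": "Vendor models",
    "basicInformation": ["Model Methodology"],         # required metadata fields
    "governanceFormComplete": True,                    # count REQUIRED custom-form fields
    "approvals": [
        {"approvalType": "SECURITY"},                  # Approval Type distinguishes approvals
        {"approvalType": "VALIDATION"},
    ],
    "snapshots": {                                     # implementations only
        "assets": ["SOURCE_CODE", "BINARY_ARTIFACT"],  # required, by Asset Role type
        "documentation": ["MODEL_DOCUMENTATION"],      # required, by Documentation Role type
        "approvals": [{"approvalType": "PRODUCTION"}], # per-snapshot approvals
    },
}
```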