Introduction
Executives and governance officers within an enterprise need a consistent way to ensure that all AI/ML use cases and models adhere to the governance policy, regardless of the technology used, the model architecture, or the environment in which a model runs. The ModelOp Center “Governance Score” provides this consistent, “apples to apples” comparison of adherence to the governance policy across all internally developed models, vendor models, embedded AI, and other AI systems.
Governance Score Overview
The ModelOp Governance Score is a standardized metric that measures adherence to AI governance policies for all AI initiatives, regardless of whether an organization is using generative AI, in-house models, third-party vendor models, or embedded AI systems. The Governance Score works across all use cases, implementations, and snapshots (versions of a given implementation), and incorporates the following elements:
Information/Metadata: collection of all required information and metadata related to the AI use case or implementation
Assets: source code, binary artifacts, configurations, execution details, etc.
Evidence: continuous collection of evidence (tests, job completion, documentation, reports, etc.)
Other Controls: attestations, approvals, change controls, process controls, data controls
Governance Score Calculation Details
The Governance Score is automatically calculated for a given use case, implementation(s), and relevant snapshot(s) based on the criteria defined in the Governance Score Administration page.
ModelOp Center calculates an individual governance score for the use case and for each of its implementations and snapshots
For Production models, the governance scores for the use case, implementation, and snapshot are rolled into an aggregate governance score. This aggregate is based on a straight linear completion of the requisite controls in the Governance Score (see the example below)
To see the details of which items in the Governance Score passed and which ones remain (“failed”), click on the “see all” link or click on the specific “passed” or “failed” portion of the donut chart.
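As a simple illustration of the linear calculation: if 18 of the 20 requisite controls across a use case, its implementation, and its snapshot are complete, the aggregate governance score is 90%.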
Governance Completion Score Configuration
To configure the Model Governance Score:
Click on the “Scores Configuration” item in the main menu
By default, ModelOp Center ships with four Governance Score templates:
Use Cases
SageMaker models
Vendor models
Default models (all else)
Click on an existing Score template OR click the “Add Type” button on the left-hand side
Within the resulting Scoring Criteria UI, a user may configure:
Basic information: metadata that is typically required, such as “Model Methodology”
Governance Form Complete: checks whether all REQUIRED fields in a given custom form are complete and factors this into the governance score
Approvals: the specific approvals that are required
Click “Add Approval”
Note that the Approval Type is required to properly distinguish, for example, a Security approval from a Validation approval.
Snapshots (Implementations only)
Assets: the required assets per the governance policy. These are defined based on the Asset Role type
Documentation: the required documentation per the governance policy. These are defined based on the Documentation Role type
Snapshot Approvals: the specific approvals that are required for each Snapshot
Advanced Configuration
Some core components of the scoring infrastructure are defined as JSON resources in a configurable location. These resources can be extended or overridden entirely to support more advanced customizations. What follows is a high-level overview of the scoring process to contextualize the purpose of these resources and how they are used.
Technical Overview of the Scoring Process
Input: a model implementation or use case to be scored
The scoring API requires the model ID to be provided via the `storedModelId` or `deployableModelId` query parameter.
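For example, a request along the lines of `GET <modelop-base-url>/governanceScore?storedModelId=<model-id>` would score the Stored Model with the given ID. Note that the endpoint path shown here is hypothetical; only the query parameter names are as documented above.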
Rule selection: a set of rules is prepared based on the following selection criteria
Root type: rule definitions declare compatibility via the `rootType` field. This must be either `"rootType": "STORED_MODEL"` for Stored Model rules or `"rootType": "DEPLOYABLE_MODEL"` for Deployable Model rules. Rule definitions are loaded from JSON resources at startup. Default rule definitions can be overridden or extended by configuring alternative resource locations (more on this below).
Model Type: rules can be marked for inclusion or exclusion based on the input model's `modelType`. This is configured via the Governance Completion Score admin interface, and the model type inclusion criteria are persisted in the database.
Application Form: rules can be dynamically generated from an application form. The application form can be specified on a per-model basis by assigning the application form ID to a model's custom metadata field `mocApplicationFormId`.
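As a minimal sketch of that assignment (the surrounding metadata structure is an assumption for illustration, not the exact ModelOp schema; only the `mocApplicationFormId` field name is as documented above):

```json
{
  "customMetadata": {
    "mocApplicationFormId": "<application-form-id>"
  }
}
```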
Output: a pass/fail outcome for each selected rule on the given model
Instances of the selected rules are mapped to projection expressions and evaluated simultaneously as a MongoDB aggregation.
The percentage of rules satisfied by a model determines its Governance Completion Score. A model that scores 100% is fully compliant with all applicable governance policies. Any score below 100% includes details about each evaluated rule, its pass/fail status, and actionable failure reasons that provide guidance on how to reach 100%.
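To illustrate the evaluation model described above (this is a sketch, not ModelOp's actual expressions; the field names are assumptions), a rule requiring at least one attached document might map to a projection expression such as:

```json
{
  "$project": {
    "documentationAttached": {
      "$gt": [ { "$size": { "$ifNull": [ "$documents", [] ] } }, 0 ]
    }
  }
}
```

Each such expression yields a boolean per rule, so a single aggregation pass can produce the pass/fail outcome for every selected rule at once.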
Customizable Resources
Rule definitions
Rule definitions are specified in JSON files that are loaded at startup. The default rule set can be overridden or extended by specifying resource URLs that resolve to JSON files containing rule definitions.
These can be any kind of resource URL supported by the Spring Framework, including:
classpath URLs, e.g. `classpath:rules/governance/rules.json` or `classpath*:rules/governance/*.json`
file system URLs, e.g. `file:///home/me/rules/governance.json`
HTTP URLs, e.g. `http://rules.example/governance.json`
The default bundled rule definitions are loaded from the classpath URL `classpath*:/rules/governance/*.json`. If you wish to extend the bundled defaults without overriding them entirely, include `classpath*:/rules/governance/*.json` in the list of configured rule definition resources.
An example configuration that retains the default rules and includes an additional rule definition file:
```yaml
modelop:
  governance:
    rule-definitions:
      locations:
        - "classpath*:/rules/governance/*.json"  # retain default rule definitions
        - "file:///home/me/custom-rules.json"    # include additional rule definitions
```
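For illustration, a custom rule file such as the `file:///home/me/custom-rules.json` entry above might contain a definition like the following. Only the `rootType` field is documented; the other field names and the overall shape are assumptions for the sake of the sketch:

```json
[
  {
    "name": "model-methodology-present",
    "description": "Requires the Model Methodology metadata field to be populated",
    "rootType": "STORED_MODEL"
  }
]
```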
Optional resources
Rule inclusion criteria are persisted as Model Assessment records and managed via the Governance Completion Score interface.
Default entries can be persisted from JSON at startup by specifying their locations:
```yaml
modelop:
  governance:
    # Default model assessments loaded on start-up
    model-assessments:
      locations:
        - "classpath*:/governance/assessments/*.json"
        - "file:///home/me/assessments/my-assessments.json"
```
Dynamic rules are generated from persisted Application Form records managed via the Application Form interface.
Default entries can be persisted from JSON at startup by specifying their locations:
```yaml
modelop:
  governance:
    # Default application forms loaded on start-up
    application-forms:
      locations:
        - "classpath*:/application-forms/*.json"
        - "file:///home/me/application-forms/my-app-forms.json"
```