Ethical Fairness Monitoring
This article describes how ModelOp Center enables ongoing Ethical Bias/Fairness Monitoring.
Introduction
Organizations need visibility into how models are forming predictions, in particular, whether a model is generating unfair or biased results for certain protected classes. Bias monitors should be run routinely against batches of labeled and scored data to ensure that the model is performing within specification. If the production bias metrics deviate beyond the set thresholds, the appropriate alerts are raised for the data scientist or ModelOps engineer to investigate.
ModelOp Center provides bias monitors out-of-the-box (OOTB) but also allows you to write your own custom bias/fairness/group metrics to monitor your model. The subsequent sections describe how to add a bias monitor (assuming an OOTB monitor) and the detailed makeup of a bias monitor.
Adding Bias Monitors
As background on the terminology and concepts used below, please read the Monitoring Concepts section of the Model overview documentation.
To add bias monitoring to your business model, you will add an existing “Monitor” to a snapshot (deployable model) of the business model under consideration. Below are the steps to accomplish this. For tutorial purposes, these instructions use all out-of-the-box and publicly available content provided by ModelOp, focusing on the German Credit Model and its related assets.
The "German Credit Data" dataset classifies people described by a set of attributes as good or bad credit risks. Among the twenty attributes is gender
(reported as a hybrid status_sex
attribute), which is considered a protected attribute in most financial risk models. It is therefore of the utmost importance that any machine learning model aiming to assign risk levels to lessees is not biased against a particular gender.
It is important to note that simply excluding gender from the training step does not guarantee an unbiased model, as gender could be highly correlated to other unprotected attributes, such as annual income.
Several open-source Python libraries have been developed to address the problem of bias and fairness in AI. Among these, Aequitas can easily be leveraged to calculate bias and fairness metrics for a particular ML model, given a labeled and scored data set as well as a set of protected attributes. In the case of the German Credit Data, ground truths are provided, and predictions can be generated by, say, a logistic regression model. Scores (predictions), label values (ground truths), and protected attributes (e.g. gender) can then be given as inputs to the Aequitas library. Aequitas's Group() class calculates commonly used metrics such as false-positive rate (FPR) and false omission rate (FOR), as well as counts by group and group prevalence among the sample population. It returns group counts and group value bias metrics in a DataFrame.
For instance, one could discover that under the trained logistic regression model, females have an FPR of 32%, whereas males have an FPR of 16%. This means that women are twice as likely as men to be falsely labeled as high-risk. The Aequitas Bias() class calculates disparities between groups, where a disparity is the ratio of a metric for a group of interest to that of a reference group. For example, the FPR disparity between females and males in the example above, where males are the reference group, is equal to 32/16 = 2. Disparities are computed for each bias metric and are returned by Aequitas in a DataFrame.
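To make this concrete, the sketch below applies Aequitas's Group() and Bias() classes to a labeled and scored dataset. It is a minimal sketch, not ModelOp's monitor code: the file name df_sample_scored.json (the sample file used later in this tutorial), its JSON-lines layout, and the column names score, label_value, and gender are assumptions about how the sample data is laid out.

import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

# Assumed layout: one record per applicant with the model's prediction ("score"),
# the ground truth ("label_value"), and the protected attribute ("gender").
df = pd.read_json("df_sample_scored.json", lines=True)
df = df[["score", "label_value", "gender"]]

# Group metrics (FPR, FOR, prevalence, group counts, ...) per attribute value
group = Group()
xtab, _ = group.get_crosstabs(df)
print(xtab[["attribute_name", "attribute_value", "fpr", "for", "prev"]])

# Disparity metrics, using "male" as the reference group for "gender"
bias = Bias()
bdf = bias.get_disparity_predefined_groups(
    xtab, original_df=df, ref_groups_dict={"gender": "male"}, alpha=0.05
)
print(bdf[["attribute_value", "fpr_disparity", "for_disparity"]])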
Associate a Monitor to a Snapshot of a Business Model
1. In MOC, navigate to the business model to be monitored. In our case, that is the German Credit Model.
2. Navigate to the specific snapshot of the business model. If no snapshots exist, create one.
3. On the Monitors widget, click + Add.
4. Search for (or select) the Bias Monitor: Disparity and Group Metrics from the list of OOTB monitors, then select a snapshot of the monitor. By default, a snapshot is created for each OOTB monitor.
5. On the Input Assets page, note that the only required asset is sample data. This is because a bias monitor computes metrics on one dataset only and does not compare against a baseline/reference dataset. For our example, select df_sample_scored.json as the Sample Data Asset. Since the file is already an asset of the business model, it can be found under Select Existing.
6. On the Threshold page, click ADD A THRESHOLD, then select the .dmn file bias_disparity_DMN.dmn. Since the file is already an asset of the business model, it can be found under Select Existing. If the business model does not have a .dmn asset, one may be uploaded from a local directory during the monitor association process. More on thresholds and decision tables in the next section.
7. The last step in adding a monitor is adding an optional schedule. To do so, click ADD A SCHEDULE. The Schedule Name field is free-form. The Signal Name field is a dropdown; choose a signal that corresponds to your ticketing system (Jira, ServiceNow). Lastly, set the frequency of the monitoring job, either through the wizard or by entering a cron expression (a brief example follows these steps).
8. On the Review page, click SAVE.
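Assuming the scheduler accepts standard five-field cron syntax (minute, hour, day of month, month, day of week), an expression such as 0 6 * * 1 would run the monitoring job every Monday at 06:00.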
Define Thresholds for Your Model
As mentioned in the Monitoring Concepts article, ModelOp Center uses decision tables to define the thresholds within which the model should operate for the given monitor.
The first step is to define these thresholds. For this tutorial, we will leverage the example bias_disparity_DMN.dmn decision table. Specifically, this decision table ensures that the gender_female_statistical_parity and gender_female_impact_parity metrics of the German Credit Model are within specification. The gender_female_statistical_parity and gender_female_impact_parity values can be accessed directly from the Monitoring Test Result by design. More metrics are produced OOTB by the bias monitor; we will discuss this in more detail later.
In our example, the .dmn file is already an asset of the business model and is versioned/managed along with the source code in the same GitHub repo. This is considered best practice, as decision tables are closely tied to the specific business model under consideration. However, it is not a requirement that the .dmn files be available as model assets ahead of time.
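As an illustration of what such a decision table encodes, below is a rough Python sketch of the equivalent threshold check. This is not the DMN engine itself: the metric names come from the bias monitor's output, and the 0.8 to 1.25 bounds mirror the thresholds that appear in the sample output later in this article but are otherwise illustrative.

def check_disparity_thresholds(test_result: dict) -> list:
    # Bounds are illustrative; in practice they live in the .dmn decision table.
    bounds = {
        "gender_female_statistical_parity": (0.8, 1.25),
        "gender_female_impact_parity": (0.8, 1.25),
    }
    violations = []
    for metric, (low, high) in bounds.items():
        value = test_result.get(metric)
        if value is None or not (low <= value <= high):
            violations.append((metric, value))
    return violations  # a non-empty list indicates the model is out of specification

With the values from the sample output below (statistical parity 0.5, impact parity 0.8889), this check would flag gender_female_statistical_parity as out of bounds.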
Monitoring Results and Notifications
Sample Standard Output of Bias Monitors
The output of the bias monitoring job can be viewed by clicking on the monitor under "Model Test Results". The results can be viewed either in a graphical format or in raw format.
Bias/Disparity Metrics
Raw JSON:
{ "bias": [ { "test_name": "Aequitas Bias", "test_category": "bias", "test_type": "bias", "protected_class": "gender", "test_id": "bias_bias_gender", "reference_group": "male", "thresholds": { "min": 0.8, "max": 1.25 }, "values": [ { "attribute_name": "gender", "attribute_value": "female", "ppr_disparity": 0.5, "pprev_disparity": 0.8889, "precision_disparity": 1.36, "fdr_disparity": 0.7568, "for_disparity": 1.6098, "fpr_disparity": 0.7648, "fnr_disparity": 1.32, "tpr_disparity": 0.8976, "tnr_disparity": 1.15, "npv_disparity": 0.9159 }, { "attribute_name": "gender", "attribute_value": "male", "ppr_disparity": 1.0, "pprev_disparity": 1.0, "precision_disparity": 1.0, "fdr_disparity": 1.0, "for_disparity": 1.0, "fpr_disparity": 1.0, "fnr_disparity": 1.0, "tpr_disparity": 1.0, "tnr_disparity": 1.0, "npv_disparity": 1.0 } ] } ], "gender_female_impact_parity": 0.8889, "gender_female_statistical_parity": 0.5, "gender_male_impact_parity": 1.0, "gender_male_statistical_parity": 1.0 }
Group Metrics
Raw JSON:
{ "bias": [ { "test_name": "Aequitas Group", "test_category": "bias", "test_type": "group", "protected_class": "gender", "test_id": "bias_group_gender", "reference_group": null, "values": [ { "attribute_name": "gender", "attribute_value": "female", "tpr": 0.68, "tnr": 0.7021, "for": 0.1951, "fdr": 0.4516, "fpr": 0.2979, "fnr": 0.32, "npv": 0.8049, "precision": 0.5484, "ppr": 0.3333, "pprev": 0.4306, "prev": 0.3472 }, { "attribute_name": "gender", "attribute_value": "male", "tpr": 0.7576, "tnr": 0.6105, "for": 0.1212, "fdr": 0.5968, "fpr": 0.3895, "fnr": 0.2424, "npv": 0.8788, "precision": 0.4032, "ppr": 0.6667, "pprev": 0.4844, "prev": 0.2578 } ] } ] }
Bias Monitor Details
Model Assumptions
Business Models considered for bias monitoring have a few requirements:
An extended schema asset for the input data.
Model type is classification.
Protected classes under consideration are categorical features.
Input data contains columns for label (ground truth), score (model output), and at least 1 protected class.
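As a minimal illustration of these requirements, the snippet below builds a hypothetical input table with the expected shape. The column names are made up for the example; the actual label, score, and protected-class columns are identified from the extended input schema.

import pandas as pd

# Hypothetical, minimal sample data: a ground-truth label, a model score,
# and one categorical protected class. Real column names come from the
# business model's extended input schema.
sample_data = pd.DataFrame(
    [
        {"label_value": 1, "score": 1, "gender": "female"},
        {"label_value": 0, "score": 0, "gender": "male"},
    ]
)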
Model Execution
During execution, a bias monitor performs the following steps (a simplified sketch follows this list):
1. The init function extracts the extended input schema (corresponding to the BUSINESS_MODEL being monitored) from the job JSON.
2. Monitoring parameters are set based on that schema: protected_classes, label_column, and score_column are determined accordingly.
3. The metrics function runs an Aequitas Bias test and/or an Aequitas Group test for each protected class in the list of protected classes. A reference group for each protected class is chosen by default (first occurrence).
4. The combination of bias and group metrics to be computed depends on the specific flavor of the bias monitor:
   Bias Monitor: Group Metrics computes group metrics only, for each protected class: tpr, tnr, for, fdr, fpr, fnr, npv, precision, ppr, pprev, prev.
   Bias Monitor: Disparity Metrics computes disparity metrics only, for each protected class: ppr_disparity, pprev_disparity, precision_disparity, fdr_disparity, for_disparity, fpr_disparity, fnr_disparity, tpr_disparity, tnr_disparity, npv_disparity.
   Bias Monitor: Disparity and Group Metrics computes both group and disparity metrics for each protected class.
5. Test results are appended to the list of bias tests to be returned by the model.
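For orientation, below is a simplified sketch of the general shape such a monitor takes. This is an assumed structure for illustration, not the source of the OOTB monitors: schema parsing is elided, the protected class and column names are hard-coded, and only the disparity DataFrame is packaged into the result. The # modelop.init and # modelop.metrics smart comments mark the functions ModelOp Center invokes.

import pandas as pd
from aequitas.group import Group
from aequitas.bias import Bias

PROTECTED_CLASSES, LABEL_COLUMN, SCORE_COLUMN = [], None, None

# modelop.init
def init(job_json):
    """Simplified: the OOTB monitors parse the extended input schema out of the
    job JSON to set these monitoring parameters; here they are hard-coded."""
    global PROTECTED_CLASSES, LABEL_COLUMN, SCORE_COLUMN
    PROTECTED_CLASSES = ["gender"]                      # assumed for illustration
    LABEL_COLUMN, SCORE_COLUMN = "label_value", "score"

# modelop.metrics
def metrics(df: pd.DataFrame):
    bias_tests = []
    for protected in PROTECTED_CLASSES:
        data = df[[SCORE_COLUMN, LABEL_COLUMN, protected]]
        xtab, _ = Group().get_crosstabs(data)           # Aequitas Group test
        reference_group = str(data[protected].iloc[0])  # default: first occurrence
        disparity = Bias().get_disparity_predefined_groups(
            xtab, original_df=data, ref_groups_dict={protected: reference_group}
        )                                               # Aequitas Bias test
        bias_tests.append({
            "test_type": "bias",
            "protected_class": protected,
            "reference_group": reference_group,
            "values": disparity.to_dict(orient="records"),
        })
    yield {"bias": bias_tests}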
For a deeper look at the OOTB bias monitors, see their GitHub READMEs.
Next Article: Model Governance: Standard Model Definition