Champion/Challenger Model Comparison
This section describes the Champion/Challenger feature in ModelOp Center and how to use it to help determine which model, method, or approach to promote to production.
Introduction
Data science requires experimentation with a variety of methods, approaches, frameworks, and potentially even coding languages. While most data scientists conduct this experimentation in their model development environment, data scientists, as well as managerial and governance reviewers, often want to confirm that a candidate model for production performs better than any currently running version of the model. The Champion/Challenger feature in ModelOp Center presents a side-by-side comparison of the performance of different models, or of different versions of the same model, to help users determine which model or model version is best suited for production use.
The following phases lay the groundwork for doing a Champion/Challenger comparison.
Define the evaluation metrics for the model (a minimal metrics-function sketch follows this list). See Model Monitoring: Overview.
Automate the evaluation tests. This is done either by manually running metrics jobs (see Running a Metrics Job Manually) or through an automated MLC Process. You can build an MLC Process that automatically executes metrics tests for a particular version of a model. See Model Lifecycle Management Overview for more information.
Conduct a side-by-side comparison of the test results in the Champion/Challenger page of the Command Center (see next section).
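As a reference point for steps 1 and 2, the metrics a model exposes are typically computed by a metrics function in the model's source code, which ModelOp Center identifies via a `# modelop.metrics` smart comment and which yields a dictionary of metric names and values. The sketch below is a minimal, hypothetical example, not a prescribed schema: the function name, column names ("label", "prediction"), and chosen metrics are illustrative assumptions.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score, f1_score


# modelop.metrics
def metrics(df: pd.DataFrame):
    """Compute evaluation metrics on a labeled, scored dataset.

    Assumes the input DataFrame carries a ground-truth column ("label")
    and a model output column ("prediction"); both names are illustrative.
    """
    yield {
        "AUC": roc_auc_score(df["label"], df["prediction"]),
        "F1": f1_score(df["label"], (df["prediction"] > 0.5).astype(int)),
    }
```

Whatever metrics the function emits become the test results that are compared side-by-side in the Champion/Challenger page.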
Champion/Challenger Comparison
Once metrics results have been generated (see steps 1 and 2 above), use the following steps to compare the test results side-by-side with the Champion/Challenger feature in ModelOp Center.
In the Command Center, navigate to Models.
Choose the (two or more) models you would like to compare.
Select the test results for each of the models.
You can view the metrics side-by-side to decide which model is performing better.
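Outside the UI, the same side-by-side view can be reproduced programmatically from the metrics results. The sketch below is a simple illustration under assumed inputs: the metric values and model roles are hypothetical, standing in for results returned by the metrics jobs described above.

```python
import pandas as pd

# Hypothetical metrics results for the current production model (champion)
# and the candidate model (challenger), e.g. as returned by metrics jobs.
champion_metrics = {"AUC": 0.87, "F1": 0.74}
challenger_metrics = {"AUC": 0.91, "F1": 0.78}

# Build a side-by-side comparison table, one row per metric.
comparison = pd.DataFrame(
    {"Champion": champion_metrics, "Challenger": challenger_metrics}
)
print(comparison)
#      Champion  Challenger
# AUC      0.87        0.91
# F1       0.74        0.78
```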
Next Article: Model Monitoring: Overview >