V3.2 Release Notes
What’s New:
ModelOp Center v3.2 introduces support for Generative AI models, including Large Language Models (LLMs), providing comprehensive AI governance for these transformational models. Additionally, v3.2 introduces support for fine-grained access controls, allowing read/write/execute permissions to be set at the model, snapshot, notification, job, and test result level across groups.
Governance & Security:
Generative AI Inventory: added support for classifying models as Generative AI (including LLMs) and for creating Generative AI ensembles, including managing LangChain models, prompt templates, embedding models, LLMs, and validation code (guardrails)
Generative AI Asset Management: added asset tracking for Generative AI artifacts such as prompt templates and RAILS
Granular Entity-Level Security: extended the current group-based isolation security model to provide fine-grained access control at the entity level (e.g., Snapshot, Job), so that read/write/execute privileges can be set on individual entities as desired (see the sketch after this list)
Google Cloud Storage Buckets: added support for managing technical artifacts (model binaries/weights, etc.) in Google Cloud Storage Buckets
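As an illustration of how entity-level privileges can be modeled, the sketch below shows a minimal, hypothetical access-control structure. The entity identifiers, group names, and helper function are assumptions made for this example only and do not reflect ModelOp Center's internal schema or API.

```python
# Hypothetical illustration only: entity names, groups, and the helper below are
# assumptions for this sketch, not ModelOp Center's actual API or schema.

from typing import Dict, Set, Tuple

# Access-control list keyed by (entity_type, entity_id), granting
# read/write/execute privileges to individual groups on individual entities.
acl: Dict[Tuple[str, str], Dict[str, Set[str]]] = {
    ("snapshot", "snap-001"): {
        "model-validators": {"read"},
        "model-owners": {"read", "write"},
    },
    ("job", "job-042"): {
        "ml-engineers": {"read", "execute"},
    },
}

def has_permission(group: str, entity_type: str, entity_id: str, privilege: str) -> bool:
    """Return True if the group holds the given privilege on the specific entity."""
    grants = acl.get((entity_type, entity_id), {})
    return privilege in grants.get(group, set())

if __name__ == "__main__":
    print(has_permission("model-owners", "snapshot", "snap-001", "write"))    # True
    print(has_permission("ml-engineers", "snapshot", "snap-001", "execute"))  # False
```

In this style of model, permissions attach to a specific entity rather than to a whole group partition, which is what allows, for example, one group to execute a given job while only reading the snapshot it was produced from.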
Test, Monitor, & Visualize:
REST-based Data Set Support: added support to pull model-specific data sets (e.g. training data, production data) via REST, allowing integration with existing REST-based data management systems (see the sketch following this list)
Testing/Monitoring Updates:
Added support for Rank Order Break
Added support for Probability of Default performance metrics for Credit Models
Added support for longitudinal tracking of metrics
External Monitors: added support for collecting a model's existing metrics calculated by external monitors, with the ability to automate threshold comparison and remediation pathways.
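The sketch below illustrates the general pattern of pulling a model-specific data set over REST. The endpoint URL, parameters, and authentication shown are assumptions for illustration only; they are not ModelOp Center's actual REST interface.

```python
# Hypothetical sketch of pulling a model-specific data set over REST.
# The endpoint, query parameters, and auth scheme are assumed for illustration.

import io

import pandas as pd
import requests

DATA_API_URL = "https://data.example.com/api/v1/datasets"  # assumed external data service

def fetch_dataset(model_id: str, dataset_type: str, token: str) -> pd.DataFrame:
    """Fetch a CSV data set (e.g. training or production data) for a model via REST."""
    response = requests.get(
        f"{DATA_API_URL}/{model_id}",
        params={"type": dataset_type},                    # e.g. "training" or "production"
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    response.raise_for_status()
    return pd.read_csv(io.StringIO(response.text))

if __name__ == "__main__":
    df = fetch_dataset("credit-risk-model", "production", token="<api-token>")
    print(df.head())
```

Retrieving data on demand in this way lets tests and monitors run against whatever data management system already serves the data, rather than requiring data sets to be copied alongside the model.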
General:
User Experience: minor User Interface enhancements including:
MLC tracking enhancements
Native Dashboard updates for usability
Model Archival: ability to archive a model snapshot and all its related artifacts, allowing for cleaner visibility into active models while maintaining auditability.
Specific Details: