On Demand Webinar: How to monitor your models for ML bias

Download the Presentation On-Demand

Master enterprise-level strategies for monitoring machine learning bias

Machine learning bias is one of the major risk factors preventing enterprises from adopting AI. With the rise of "black box" solutions, it's important to scrutinize ML outputs and ensure full transparency across the entire ML pipeline. It's up to the data professionals in an organization to proactively protect companies and citizens from machine learning bias. As more models deploy to production and affect customers, it's imperative to have a proper strategy in place for measuring, identifying, and monitoring machine learning bias. With the proper tools and training, data professionals can proactively combat bias, protecting citizens and shielding companies from reputational damage. A more transparent machine learning pipeline enables trust and explainable AI. Business leaders count on data scientists to produce high-performing models that will not tarnish their image or weaken customer trust.


In this webinar, we will help you build a strategy to avoid machine learning bias. We'll discuss the different types of bias that can occur in any use case, how to measure bias, and the KPIs that help you monitor for it. You will learn to leverage tools like Kubernetes, LIME, and SHAP to monitor bias, and how to set triggers on low-confidence results or accuracy drops that alert you to possible bias. We'll show you how to provide transparency in your model development with full traceability, tracking, and monitoring mechanisms.
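As a taste of the kind of measurement covered in the session, one common bias KPI is the demographic parity difference: the gap in positive-prediction rates between two groups defined by a sensitive attribute. The sketch below is illustrative only; the data, the group encoding, and the 0/1 labels are assumptions, not part of the webinar material.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates between two groups (0 and 1).
    A value near 0 suggests similar treatment; a large gap is a signal
    that the model's outputs warrant a closer bias review."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate for group 0
    rate_b = y_pred[group == 1].mean()  # positive rate for group 1
    return abs(rate_a - rate_b)

# Illustrative predictions and a hypothetical binary sensitive attribute
preds  = np.array([1, 0, 1, 1, 0, 0, 1, 0])
groups = np.array([0, 0, 0, 0, 1, 1, 1, 1])
gap = demographic_parity_difference(preds, groups)
# group 0 positive rate = 0.75, group 1 = 0.25, so gap = 0.5
```

A metric like this can be computed on a rolling window of production predictions and tracked as a KPI alongside accuracy and confidence.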

Key takeaways:

  • How to monitor bias in model training using open source tools like LIME and SHAP
  • How to integrate monitoring in your production models
  • How to set triggers and alerts for bias
  • Keeping human-in-the-loop
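To make the last two takeaways concrete, a low-confidence trigger can be as simple as routing any prediction whose top-class probability falls below a threshold to a human reviewer. This is a minimal sketch, not the webinar's implementation; the threshold value and the probability matrix are illustrative assumptions.

```python
import numpy as np

CONFIDENCE_THRESHOLD = 0.6  # assumed cutoff; tune per use case

def flag_for_review(probabilities, threshold=CONFIDENCE_THRESHOLD):
    """Return indices of predictions whose top-class probability is
    below the threshold, so they can be routed to a human reviewer
    (keeping a human in the loop) instead of being auto-actioned."""
    probs = np.asarray(probabilities)
    top = probs.max(axis=1)          # confidence = highest class probability
    return np.flatnonzero(top < threshold)

# Illustrative class probabilities for 4 samples, 2 classes
p = np.array([[0.95, 0.05],
              [0.55, 0.45],   # low confidence: flag
              [0.30, 0.70],
              [0.52, 0.48]])  # low confidence: flag
flagged = flag_for_review(p)   # indices 1 and 3
```

In production, the flagged indices would feed an alerting channel or a review queue rather than a local variable.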


Join the innovative enterprises using cnvrg.io to build, train & deploy their models faster.

schedule a demo today