Global AI and Data Science


How to Measure and Reduce Unwanted Bias in Machine Learning

By Trisha Mahoney posted Thu April 09, 2020 10:00 PM


How to Measure and Reduce Unwanted Bias in Machine Learning takes data science leaders and practitioners through the key challenges of defining fairness and reducing unwanted bias throughout the machine learning pipeline. It lays out why data science teams need to engage early and authoritatively in building trusted AI, and it explains in plain English how organizations should think about AI fairness, including the tradeoffs that must be made between model bias and model accuracy. Much of the literature on algorithmic fairness focuses on its social justice aspects; in contrast, this report focuses on how teams can mitigate unfair machine bias using the open-source tools available in AI Fairness 360.

The three most important things readers will learn from this report are:

  1. Developing unbiased algorithms is a data science initiative that involves many stakeholders across a company, and several factors must be considered when defining fairness for your use case (e.g., legal, ethical, and trust considerations).
  2. There are several ways to define fairness, which leads to many different ways to measure and remove unfair bias. Bias should be measured with as many metrics as possible throughout the machine learning pipeline.
  3. There are many tradeoffs between model accuracy and unfair model bias, and organizations must define acceptable thresholds for each.
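To make point 2 concrete, here is a minimal sketch of two fairness metrics commonly used in this space: statistical parity difference and disparate impact. This is plain Python written for illustration (AI Fairness 360 exposes these same metrics through its own classes); the group labels and prediction data below are made up.

```python
# Two group-fairness metrics computed over binary model outcomes,
# where 1 = favorable outcome. Illustrative only; not the AIF360 API.

def positive_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def statistical_parity_difference(unprivileged, privileged):
    # Ideal value is 0: both groups receive favorable outcomes equally often.
    return positive_rate(unprivileged) - positive_rate(privileged)

def disparate_impact(unprivileged, privileged):
    # Ideal value is 1; values below roughly 0.8 are often flagged
    # (the so-called "80% rule").
    return positive_rate(unprivileged) / positive_rate(privileged)

# Hypothetical model predictions for two demographic groups
priv   = [1, 1, 1, 0, 1, 1, 0, 1]   # privileged group: 75% favorable
unpriv = [1, 0, 0, 1, 0, 1, 0, 0]   # unprivileged group: 37.5% favorable

print(statistical_parity_difference(unpriv, priv))  # -0.375
print(disparate_impact(unpriv, priv))               # 0.5
```

A disparate impact of 0.5 here would fail the 80% threshold, which is the kind of measurable signal the report recommends tracking at multiple points in the pipeline rather than relying on any single metric.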


Download your free version HERE!


#GlobalAIandDataScience
#GlobalDataScience

Comments

Sun April 19, 2020 07:51 AM

Indeed cool stuff! Fairness plays a key role in any organisation. 
Thanks for giving this wonderful book for free.

Mon April 13, 2020 03:47 PM

Great stuff :-)

Fri April 10, 2020 11:52 AM

Congrats!