AI Fairness: How to Measure and Reduce Unwanted Bias in Machine Learning

Thu April 09, 2020 08:36 PM

How to Measure and Reduce Unwanted Bias in Machine Learning takes data science leaders and practitioners through the key challenges of defining fairness and reducing unwanted bias throughout the machine learning pipeline. It explains why data science teams need to engage early and authoritatively in building trusted AI, and it lays out in plain English how organizations should think about AI fairness, including the tradeoffs that must be made between model bias and model accuracy. Much has been written on the social justice aspects of algorithmic fairness; in contrast, this report focuses on how teams can mitigate unfair machine bias using the open-source tools in the AI Fairness 360 toolkit.
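Fairness metrics of the kind the report discusses can be computed directly with the AI Fairness 360 (aif360) Python toolkit. Below is a minimal sketch of measuring bias in a labeled dataset; the toy DataFrame, the 'sex' and 'label' column names, and the group definitions are illustrative assumptions for this post, not examples from the book.

```python
# Minimal sketch: measuring dataset bias with AI Fairness 360.
# The toy data and group definitions below are illustrative assumptions.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: 'sex' is the protected attribute (1 = privileged group),
# 'label' is the outcome (1 = favorable).
df = pd.DataFrame({
    'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
    'label': [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=['label'],
    protected_attribute_names=['sex'],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    unprivileged_groups=[{'sex': 0}],
    privileged_groups=[{'sex': 1}],
)

# Statistical parity difference:
#   P(label=1 | unprivileged) - P(label=1 | privileged).
# 0 means parity; negative values favor the privileged group.
print('Statistical parity difference:', metric.statistical_parity_difference())
# Disparate impact is the ratio of the same two rates; 1.0 means parity.
print('Disparate impact:', metric.disparate_impact())
```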

The three most important things readers will learn from this book are:

  1. Developing unbiased algorithms is a data science initiative that involves many stakeholders across a company, and several factors must be considered when defining fairness for a given use case (e.g., legal, ethical, and trust considerations).
  2. There are several ways to define fairness, which in turn lead to many different ways to measure and remove unfair bias. Bias should be measured with as many metrics as possible throughout the machine learning pipeline, and mitigated where it is found (see the sketch after this list).
  3. There are tradeoffs that must be made between model accuracy and unfair model bias, and organizations must define acceptable thresholds for each.
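
Once bias is measured, AI Fairness 360 also provides mitigation algorithms. Below is a minimal sketch, continuing the example above, that applies the Reweighing pre-processing algorithm and re-checks the parity metric; the group definitions are the same illustrative assumptions as before.

```python
# Minimal sketch: mitigating dataset bias with Reweighing (AI Fairness 360).
# Reuses 'dataset' from the measurement sketch above.
from aif360.algorithms.preprocessing import Reweighing
from aif360.metrics import BinaryLabelDatasetMetric

unprivileged = [{'sex': 0}]
privileged = [{'sex': 1}]

# Reweighing assigns instance weights so that the protected attribute
# and the label are statistically independent in the training data.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
dataset_transf = rw.fit_transform(dataset)

metric_transf = BinaryLabelDatasetMetric(
    dataset_transf,
    unprivileged_groups=unprivileged,
    privileged_groups=privileged,
)
# After reweighing, the weighted parity difference should be near 0.
# A team would then retrain with these sample weights and compare model
# accuracy before and after against its agreed fairness thresholds.
print('Parity difference after reweighing:',
      metric_transf.statistical_parity_difference())
```

Reweighing is only one intervention point; AI Fairness 360 also includes in-processing and post-processing algorithms, which is why bias should be checked at every stage of the pipeline rather than once.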


This O'Reilly book is available exclusively for download in the IBM Community!



#GlobalAIandDataScience
#GlobalDataScience
#Highlights-home

Attachment: OReilly AIFairness360 Book.pdf (PDF, 2.03 MB), uploaded Thu April 09, 2020