Removing Unfair Bias in the Machine Learning Pipeline @ TWIMLcon: AI Platforms

Machine learning models are increasingly used to make critical decisions that affect people's lives. However, bias in training data, stemming from prejudiced labels or from under- and over-sampling, can produce models with unwanted bias, and discrimination becomes an issue when those models place certain privileged groups at a systematic advantage. This talk provides an introductory look at how bias & discrimination arise in the machine learning pipeline and the methods that can be applied to remove them. Trisha Mahoney will walk you through AI Fairness 360, a comprehensive bias mitigation toolkit developed by IBM researchers. AI Fairness 360 is an open-source Python toolkit with the most cutting-edge fairness metrics & algorithms available across academia and industry today. Learn how to measure bias in your data sets & models, and how to apply fairness algorithms to reduce bias across the machine learning pipeline.
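
To give a taste of the workflow the workshop covers, here is a minimal sketch using the open-source aif360 package (pip install aif360): it builds a tiny synthetic dataset, measures disparate impact with respect to a protected attribute, and applies the Reweighing pre-processing algorithm to mitigate the bias. The column names and toy data are illustrative assumptions, not material from the talk itself.

    # A minimal sketch of measuring and mitigating bias with aif360.
    # The toy data and column names below are illustrative only.
    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data: 'sex' is the protected attribute (1 = privileged group)
    # and 'label' is the outcome (1 = favorable).
    df = pd.DataFrame({
        'sex':   [1, 1, 1, 1, 0, 0, 0, 0],
        'age':   [25, 40, 35, 50, 30, 45, 28, 60],
        'label': [1, 1, 1, 0, 1, 0, 0, 0],
    })

    dataset = BinaryLabelDataset(df=df, label_names=['label'],
                                 protected_attribute_names=['sex'],
                                 favorable_label=1, unfavorable_label=0)

    privileged = [{'sex': 1}]
    unprivileged = [{'sex': 0}]

    # Measure bias: disparate impact is the ratio of favorable-outcome
    # rates (unprivileged / privileged); 1.0 means parity.
    before = BinaryLabelDatasetMetric(dataset,
                                      unprivileged_groups=unprivileged,
                                      privileged_groups=privileged)
    print('Disparate impact before:', before.disparate_impact())

    # Mitigate bias: Reweighing assigns instance weights so that
    # group/label combinations are balanced before model training.
    rw = Reweighing(unprivileged_groups=unprivileged,
                    privileged_groups=privileged)
    transformed = rw.fit_transform(dataset)

    after = BinaryLabelDatasetMetric(transformed,
                                     unprivileged_groups=unprivileged,
                                     privileged_groups=privileged)
    print('Disparate impact after:', after.disparate_impact())

Reweighing is only one of the pre-processing options; the toolkit also includes in-processing algorithms (such as adversarial debiasing) and post-processing algorithms (such as reject option classification) for stages of the pipeline where reweighting the training data is not an option.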

Join us for a workshop by Trisha Mahoney at TWIMLcon: AI Platforms.  

Trisha Mahoney | IBM Technical Evangelist for Machine Learning & AI

Trisha Mahoney is a Technical Evangelist for Machine Learning & AI at IBM. She has spent the last 9 years in product management and marketing roles in AI & Cloud at high-tech firms (IBM, Salesforce, Cisco, and Smiths Group). Prior to that, she spent 8 years as a data scientist in the chemical detection space. She holds an Electrical Engineering degree and an MBA in Technology Management.

#GlobalDataScience
When: Oct 1, 2019, 02:00 PM to 02:25 PM (PT)

Where:

Mission Bay Conference Center at UCSF
TWIMLcon: AI Platforms - Robertson 2 Room
1675 Owens Street
San Francisco, CA 94143