Short and sweet from 6–7pm Eastern Time:
Machine learning models are increasingly used to inform high-stakes decisions. Discrimination by machine learning becomes objectionable when it places certain privileged groups at a systematic advantage and certain unprivileged groups at a systematic disadvantage. Bias in training data, due to prejudice in labels or to under- or oversampling, yields models with unwanted bias. This session will explore open source tools to detect and mitigate bias, increase transparency, and enable governance in ML models.
Speaker: Saishruthi Swaminathan, Technical Lead & Data Scientist at IBM.
Check Meetup.com for more info and to RSVP.