We recently had the pleasure of working with Seattle, WA-based AICamp on putting together a series of virtual meetups around AI Trust.
You can watch all five parts on demand; see the videos below.
Part 1: Privacy-Preserving Machine Learning
Data privacy is a huge concern and often prevents ML and AI projects from flourishing. In this talk we will introduce you to federated learning and homomorphic encryption. After we’ve covered the theoretical aspects, we will see how these techniques can be used in practice. We conclude with an outlook on the future of these technologies.
Romeo Kienzler is Chief Data Scientist at the IBM Center for Open Source Data and AI Technologies (CODAIT).
This session took place on July 27, 2020.
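The core idea behind federated learning is that clients train locally and share only model parameters, never raw data, while a server averages the contributions. The sketch below illustrates federated averaging on a deliberately tiny one-parameter "model" (mean estimation); the function names and data are illustrative, not from the talk.

```python
# Toy sketch of federated averaging (FedAvg): each client runs a
# local gradient step on its private data and only the resulting
# weights leave the device. Illustrative only.

def local_update(weight, data, lr=0.1):
    """One gradient-descent step for a 1-D mean-estimation 'model'.

    Loss is the mean squared error between the weight and each sample.
    """
    grad = sum(2 * (weight - x) for x in data) / len(data)
    return weight - lr * grad

def federated_average(updates, counts):
    """Server-side aggregation: average weights by local sample count."""
    total = sum(counts)
    return sum(w * n for w, n in zip(updates, counts)) / total

# Two clients with private data; the server never sees these lists.
clients = [[1.0, 2.0, 3.0], [10.0, 11.0]]
weight = 0.0
for _ in range(200):
    updates = [local_update(weight, d) for d in clients]
    weight = federated_average(updates, [len(d) for d in clients])

print(round(weight, 2))  # converges toward the global mean, 5.4
```

Note that the server learns the aggregate without ever touching the clients' samples; homomorphic encryption, the talk's other topic, goes a step further by letting the aggregation itself run on encrypted updates.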
Part 2: Explainable AI Workflows Using Python
This talk approaches the typical data science workflow with a focus on explainability. Simply put, it focuses on skills and tactics used to help data scientists articulate their findings to end-users, stakeholders, and other data scientists. From data ingestion through cleaning, feature selection, and ultimately model selection, explainability can be incorporated into a data scientist’s workflow. Using a combination of semi-automated and open source software, this talk walks you through an explainable workflow.
Austin Eovito is a Data Scientist at IBM, who focuses on the balance of bleeding-edge research produced by academia and the tools used in applied data science.
This session took place on August 10, 2020.
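One simple, model-agnostic tactic for the explainability step of such a workflow is permutation importance: shuffle one feature and measure how much the model's accuracy drops. The talk's exact tooling may differ; this is a stdlib-only sketch with made-up data.

```python
import random

# Toy sketch of permutation importance. Shuffling a feature the
# model relies on hurts accuracy; shuffling an irrelevant feature
# barely matters. Data and "model" are illustrative only.

random.seed(0)

def model(row):
    # A "trained" rule: predict 1 when feature 0 exceeds 0.5.
    return 1 if row[0] > 0.5 else 0

def accuracy(rows, labels):
    return sum(model(r) == y for r, y in zip(rows, labels)) / len(labels)

# Feature 0 determines the label; feature 1 is pure noise.
rows = [[random.random(), random.random()] for _ in range(200)]
labels = [1 if r[0] > 0.5 else 0 for r in rows]

def permutation_importance(rows, labels, feature):
    """Drop in accuracy after shuffling one feature column."""
    base = accuracy(rows, labels)
    shuffled = [r[feature] for r in rows]
    random.shuffle(shuffled)
    permuted = [r[:feature] + [v] + r[feature + 1:]
                for r, v in zip(rows, shuffled)]
    return base - accuracy(permuted, labels)

# The informative feature scores higher than the noise feature.
print(permutation_importance(rows, labels, 0) >
      permutation_importance(rows, labels, 1))
```

The appeal for stakeholder communication is that the result reads as a plain sentence: "randomizing this column cost us N points of accuracy."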
Part 3: Understanding and Removing Unfair Bias in ML
Extensive evidence has shown that AI can embed human and societal bias and deploy it at scale, and many algorithms are now being reexamined due to illegal bias. So how do you remove bias and discrimination from the machine learning pipeline? In this webinar you’ll learn debiasing techniques that can be implemented using AI Fairness 360 (AIF360), an extensible, open source toolkit for measuring, understanding, and removing AI bias. AIF360 is the first solution that brings together the most widely used bias metrics, bias mitigation algorithms, and metric explainers from top AI fairness researchers across industry and academia.
Upkar Lidder is an IBM Data Science and AI Developer Advocate.
This session took place on August 17, 2020.
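To make "bias metrics" concrete, here is the arithmetic behind one of the widely used metrics AIF360 packages, disparate impact: the ratio of favorable-outcome rates between the unprivileged and privileged groups. The toy data below is illustrative, not from the webinar.

```python
# Hand-computed disparate impact on toy loan-decision data.
# Values far below 1.0 indicate the unprivileged group receives
# favorable outcomes much less often than the privileged group.

def favorable_rate(outcomes):
    """Fraction of favorable outcomes (1 = approved, 0 = denied)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(unprivileged, privileged):
    return favorable_rate(unprivileged) / favorable_rate(privileged)

unprivileged = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0]   # 20% approved
privileged   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approved

di = disparate_impact(unprivileged, privileged)
print(round(di, 2))  # 0.29 -- well below the common 0.8 rule of thumb
```

AIF360 computes this and dozens of related metrics from dataset objects, and pairs them with mitigation algorithms that push such ratios back toward 1.0.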
Part 4: Adversarial Robustness 360 Toolbox For ML
Adversarial samples are inputs to machine learning models that an adversary has tampered with in order to cause specific misclassifications. It is surprisingly easy to create adversarial samples and surprisingly difficult to defend ML models against them. This poses potential threats to the deployment of ML in security-critical applications. In this webinar I will review the state of the art on adversarial samples and discuss recent progress in developing ML models that are robust against them. Most of the time will be spent on how to use the Adversarial Robustness Toolbox (ART) open source project to evaluate the robustness of ML models under various types of threats.
Mathieu Sinn leads global IBM efforts on developing and proving out robust, secure and privacy-preserving AI. He and his team lead several open source projects in this space and partner with world-class R&D organizations from industry and academia to help advance trustworthy and responsible AI.
This session took place on August 24, 2020.
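To see why adversarial samples are so easy to create, consider a linear scorer: the gradient of its output with respect to the input is just the weight vector, so nudging every feature by a small amount in the direction of `sign(w)` is the worst-case bounded perturbation. That is the intuition behind the fast gradient sign method (FGSM), one of the attack families ART lets you evaluate against. The numbers below are illustrative only.

```python
# Toy FGSM-style attack on a linear classifier: a small, bounded
# per-feature perturbation flips the decision. Illustrative only.

def score(w, x):
    """Linear decision score; positive means class 1."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm(w, x, eps):
    """Shift each feature by eps against the gradient (which is w),
    pushing a positive score toward negative."""
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

w = [0.5, -0.3, 0.8]
x = [1.0, 1.0, 1.0]          # original input: score = 1.0, class 1
x_adv = fgsm(w, x, eps=0.7)  # each feature moves by at most 0.7

print(score(w, x) > 0, score(w, x_adv) > 0)  # True False
```

For deep networks the gradient must be computed by backpropagation rather than read off the weights, but the one-step recipe is the same, which is why robustness has to be tested explicitly rather than assumed.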
Part 5: Proactive Explanations: Python Workflows for Data Science and AI (workshop)
In this workshop, we will work through the typical data science workflow with a focus on explainability. It focuses on skills and tactics used to help data scientists articulate their findings to end-users, stakeholders, and other data scientists. From data ingestion through cleaning, feature selection, and ultimately model selection, explainability can be incorporated into a data scientist’s workflow. Using a combination of semi-automated and open source software, this workshop expands on and goes deeper than the previous webinar, walking you through an explainable workflow.
Austin Eovito is a Data Scientist at IBM.
This workshop took place on August 31, 2020.
Please note! The workshop will be delivered two more times in September. Follow the links to reserve your spot:
Hope you enjoy learning about this important area. Let us know any feedback and suggestions. Thanks!