The Lifecycle View of Trustworthy AI


Wed October 27, 2021 12:41 PM

As artificial intelligence (AI) increasingly powers critical, high-risk enterprise workflows, developers of AI systems must ensure that the decisions AI makes on people's behalf can be trusted. Our team at IBM has previously described the five pillars of trust: fairness, robustness, explainability, privacy, and transparency. To help developers create trusted AI solutions, IBM Research has released multiple open-source toolkits organized around these pillars: AI Fairness 360, Adversarial Robustness 360, AI Explainability 360, AI Privacy 360, Uncertainty Quantification 360, Causal Inference 360, and AI FactSheets 360.
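As a minimal, toolkit-free sketch of the kind of check libraries like AI Fairness 360 automate, consider disparate impact: the ratio of favorable-outcome rates between an unprivileged and a privileged group, where values near 1.0 suggest parity. The function and toy data below are illustrative assumptions, not AI Fairness 360's actual API.

```python
# Illustrative sketch of a group-fairness metric (not the AIF360 API).
# Disparate impact = favorable-outcome rate of the unprivileged group
# divided by that of the privileged group; ~1.0 indicates parity.

def disparate_impact(outcomes, groups, unprivileged, privileged):
    """outcomes: parallel list of 0/1 labels (1 = favorable decision).
    groups: group label for each individual."""
    def favorable_rate(g):
        selected = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(selected) / len(selected)
    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Toy loan-approval data: group "A" privileged, group "B" unprivileged.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(round(disparate_impact(outcomes, groups, "B", "A"), 3))  # 0.25 / 0.75 -> 0.333
```

A common rule of thumb flags disparate impact below 0.8 as potential adverse impact; dedicated toolkits add many more metrics and bias-mitigation algorithms on top of checks like this one.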

Since these tools were first released, the team has learned a great deal about designing tools AI developers actually need. In the process, we've realized that the pillars may not be the most intuitive way for developers to pick up new tools. Today, we'll talk through the next evolution of our trust tools, which we are designing around the AI lifecycle.


