Part 3: Monitor and govern models: From explainable AI to responsible AI

When: May 19, 2021, 09:00 AM to 10:15 AM (PT)

IBM presents: What's next in AI?

AI 2.0 and making REAL impact today

Enterprise use of AI has been evolving rapidly. Harnessing the power of prediction and optimization, industry leaders are demonstrating superior results in customer experience, product development, and operational agility. At the same time, the pace of AI innovation is accelerating, spanning AI engineering, neurosymbolic AI, and explainable AI, to name a few. As AI becomes more fluid and adaptable for enterprise use, AI systems can provide different forms of knowledge, unpack causal relationships, and learn new things on their own. To help you stay current with what’s next in AI and how to take advantage of the latest innovations, IBM is excited to present our five-part data and AI series. We will cover:

Part 3: Monitor and govern models: From explainable AI to responsible AI

While AI holds the promise of delivering highly tuned predictions, broad adoption of AI systems will rely heavily on our ability to trust their output. To trust a decision made by an algorithm, we need to know how reliable the model is, how its decision process can be accounted for, and what level of risk is associated with its use. Featuring KPMG, part 3 will explore:

  • What happens when we put AI models into production?
  • How to monitor and govern models for explainability, fairness, drift and risk
  • Common patterns in model performance
  • What it means to support responsible AI


Kelly Combs, Director, KPMG

Kush Varshney, IBM Research

Julianna Delua, Data Science and AI SME, IBM Data and AI