With AI increasingly embedded in critical business workflows and decision making, AI systems must be trustworthy. A combination of industry standards, technology, and an independent, authoritative framework is key for organizations to tangibly demonstrate the trustworthiness of their AI systems. The Responsible AI Institute (RAII) provides a comprehensive framework that evaluates technical and social aspects together throughout the lifecycle of an AI solution. IBM's technology captures both the process and the specific metrics needed to align with RAII's assessment framework. This session explores how IBM and RAII make it easier for businesses to prove their AI systems are trustworthy.
Ashley Casovan is a recognized technology policy expert who has dedicated her career to ensuring data and technology are used in a manner that protects the public. Primarily working in the public sector, her experience lies at the intersection of responsible AI governance, standards, and data governance. Ashley’s ultimate mission is to ensure technology is responsible, equitable, and safe for all.
Ashley currently serves as the Executive Director of the Responsible AI Institute (RAII), a multi-stakeholder non-profit dedicated to mitigating harm and unintended consequences of AI systems. RAII’s current objective is to complete the architecture for the world’s first independent, accredited certification for responsible AI systems. Previously, Ashley served as the Director of Data and Digital for the Government of Canada, where she led the development of the first national government policy for responsible AI.