Financial institutions are moving quickly to adopt artificial intelligence (AI) techniques in areas such as fraud detection, marketing, credit underwriting, chatbots, and anti-money laundering compliance, among others. But these firms should weigh the promise of cost savings, competitive advantage, and new product offerings against significant potential problems: unexplainable models, biased or unfair algorithms, increased strain on data and IT systems, changes in required skillsets, and a lack of trust in models by management and consumers.
While financial institutions face changes to their operational, reputational, and other risks when they deploy AI, they can address these risks by carefully enhancing the governance processes and frameworks already in place. Although many large institutions are beginning to address explainability, significant challenges remain in using the evolving toolbox of explainability techniques. These challenges include identifying the precision of explainability required for different applications; aligning key stakeholders on acceptable toolkits, approaches, and uses; developing standards for vended AI and explainable AI solutions; and convincing regulators that AI and explainable AI approaches are fit for purpose.
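To make the idea of an explainability technique concrete, here is a minimal sketch of permutation importance, one common model-agnostic approach: shuffle one feature at a time and measure how much the model's accuracy drops. The "credit model," its weights, and the synthetic applicant data are all hypothetical, chosen only for illustration; real deployments would use a dedicated library and validated data.

```python
import random

# Hypothetical "credit model": a fixed linear scorer over three features
# (income, debt_ratio, noise). The weights are made up for illustration;
# note the model deliberately ignores the noise feature.
def model(row):
    income, debt_ratio, noise = row
    return 1 if (0.8 * income - 0.6 * debt_ratio) > 0.1 else 0

random.seed(0)
# Synthetic applicants: labels come from the model itself, so baseline
# accuracy is 1.0 by construction.
X = [(random.random(), random.random(), random.random()) for _ in range(500)]
y = [model(row) for row in X]

def accuracy(rows):
    return sum(model(r) == label for r, label in zip(rows, y)) / len(y)

baseline = accuracy(X)

def permutation_importance(col):
    """Drop in accuracy when one feature column is shuffled."""
    shuffled = [row[col] for row in X]
    random.shuffle(shuffled)
    X_perm = [tuple(s if i == col else v for i, v in enumerate(row))
              for row, s in zip(X, shuffled)]
    return baseline - accuracy(X_perm)

for name, col in [("income", 0), ("debt_ratio", 1), ("noise", 2)]:
    print(f"{name}: importance = {permutation_importance(col):.3f}")
```

Because the model ignores the noise feature, its importance comes out as exactly zero, while shuffling income or debt_ratio visibly degrades accuracy. This is the kind of evidence institutions can use when aligning stakeholders on whether a model's behavior matches its intended design.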
The "AI Governance and Explainability in Financial Services" webinar, part of the DSE Presents: Chat with the Lab series, is now available to watch on demand at the link below.
Please share your questions below.
------------------------------
JORGE CASTANON
Chat with labs webinar series:
https://ibm.co/Chat-With-The-Lab-Webinar
------------------------------
#AIandDSSkills #DataandAILearning