Authors: Erika Agostinelli, Karol Dufour-Kruszewska, Dusan Magula, Rob Woods (IBM Data Science and AI Elite Team)
A cookbook to create and deploy a scalable and transparent solution to retain your customers using hyper-personalization and explainability.
Introducing IBM Industry Accelerators
After completing 240+ client engagements worldwide, the IBM Data Science and AI Elite (DSE) team created templated packages, or Industry Accelerators, to address use cases based on learnings from these engagements.
Each Industry Accelerator is built on the IBM Cloud Pak for Data platform, enabling organizations to climb all stages of the AI Ladder and apply the solution to their own data in an accelerated timeframe.
The Customer Attrition Accelerator addresses the wealth management industry’s ever-changing market by helping firms retain customers through hyper-personalization and trustworthy AI.
We will discuss the end-to-end data science flow that the accelerator offers and show how it can positively impact the user journey of three personas, each with their own pain points:
- Ellen – the retention strategist / advisor
- Jess – the data analyst / citizen data scientist
- Rob – the data stakeholder
Ellen: the retention strategist / advisor
“I want to understand why my customers leave.”
Main pain points:
- She doesn’t fully trust AI systems because they are black boxes
- She wants to be at the core of the decision-making process instead of being replaced by AI
- Client information is scattered across different dashboards, making it difficult to assess a single client in one view
To understand why individual customers are at risk and to retain them effectively, Ellen interacts with the AI-infused front-end application in the following way.
The accelerator comes with a sample web application deployed within IBM Cloud Pak for Data, centralising information behind a single point of entry. On the main page, the application prioritises the clients with the highest risk of attrition, and in the single-client view Ellen can investigate why the predictive model calculated that risk level for a specific client.
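As a rough illustration of that prioritisation step, here is a minimal sketch that ranks clients by predicted attrition probability. The model, the DataFrame of engineered features, and the column names (CLIENT_ID, ATTRITION_RISK) are hypothetical, not taken from the accelerator itself.

```python
import pandas as pd

def top_at_risk_clients(model, customers: pd.DataFrame, n: int = 10) -> pd.DataFrame:
    """Rank clients by predicted attrition risk, highest first.

    `model` is any trained binary classifier exposing predict_proba
    (e.g. scikit-learn); `customers` holds one row of features per client.
    """
    # Probability of the positive ("will attrite") class
    risk = model.predict_proba(customers.drop(columns=["CLIENT_ID"]))[:, 1]
    ranked = customers.assign(ATTRITION_RISK=risk)
    return ranked.sort_values("ATTRITION_RISK", ascending=False).head(n)
```

The application’s main page would surface the output of something like this, so the highest-risk clients appear at the top of Ellen’s work queue.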
Explainability and transparency, both pillars of trustworthy AI, are crucial for adoption. If Ellen can understand the rationale behind the model, she will be more inclined to trust the AI solution. Finally, the application recommends possible retention strategies to choose from. These recommendations rely on a technique called contrastive explanation, which in this case identifies how to lower a client’s attrition risk (from high to low). In the example below (Figure 3), it appears that increasing the customer satisfaction rating would help avoid attrition. This drives Ellen’s decision to contact the client directly and offer a new deal or additional discounts. AI doesn’t replace Ellen; instead, it augments her decision-making process.
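To make the contrastive idea concrete, here is a deliberately simple sketch, not the accelerator’s actual implementation: it brute-forces the smallest change to one feature that flips the model’s prediction from high to low risk. The model, the feature names, and the 0.5 threshold are all assumptions for illustration.

```python
import numpy as np
import pandas as pd

def minimal_contrast(model, client: pd.Series, feature: str,
                     candidates, threshold: float = 0.5):
    """Find the smallest tested value of `feature` that moves the
    predicted attrition probability below `threshold`."""
    for value in np.sort(np.asarray(candidates)):
        modified = client.copy()
        modified[feature] = value
        # Score the hypothetical, modified client
        risk = model.predict_proba(modified.to_frame().T)[0, 1]
        if risk < threshold:
            return value, risk  # e.g. "raise satisfaction to 4 -> low risk"
    return None, None  # no single-feature change found

# Hypothetical usage:
# new_value, new_risk = minimal_contrast(model, client_row,
#                                        "SATISFACTION_RATING", range(1, 6))
```

The output reads as a contrast: “if the satisfaction rating were at least this value, the model would predict low risk,” which is exactly the kind of actionable statement shown to Ellen in the application.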
Jess: the data analyst / citizen data scientist
“I want low-code data science alternatives and a simple way to deploy and monitor my models.”
Main pain points:
- She is a highly skilled data professional with no programming experience, and she is looking for low-code to no-code data science tools to create predictive models
- Data access requests take too long to process, delaying the progress of data science projects
- She lacks the independence to deploy and monitor AI models because of the complexity involved, and she constantly needs help from other data professionals
IBM Cloud Pak for Data allows Jess to shop for the customer data she needs in a single catalog, a one-stop shop for all the customer data. She then uses a visual tool called Modeler Flow, which allows her to quickly shape and prepare the data into the right format, and feeds the result into AutoAI, an automated model builder that offers built-in hyperparameter optimization and feature engineering.

The best-performing machine learning model is then ready to be deployed, and the platform offers an easy one-click deployment functionality. The deployed model can be monitored by setting up the following monitors in Watson OpenScale: Jess wants to be notified when the model needs to be retrained (quality and drift issues), when the model is not fair to a specific group of customers (fairness issues), and she wants to ensure the transparency of the model’s rationale (explainability). All four monitors are crucial to ensure that trustworthy AI is infused in the solution.
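For readers who do want to touch code, here is a hedged sketch of scoring the deployed model with the Watson Machine Learning Python client. The URL, credentials, space ID, deployment ID, and feature names are placeholders, and the exact credential fields vary between IBM Cloud and Cloud Pak for Data installations.

```python
from ibm_watson_machine_learning import APIClient

# Placeholder credentials for a Cloud Pak for Data cluster; on IBM Cloud
# the dict typically contains just "url" and "apikey" instead.
wml_credentials = {
    "url": "https://<cluster-url>",
    "username": "<username>",
    "apikey": "<api-key>",
    "instance_id": "openshift",
    "version": "4.0",
}

client = APIClient(wml_credentials)
client.set.default_space("<deployment-space-id>")

# One hypothetical client record; field names are illustrative only.
payload = {
    "input_data": [{
        "fields": ["AGE", "TENURE_MONTHS", "SATISFACTION_RATING"],
        "values": [[52, 84, 2]],
    }]
}

response = client.deployments.score("<deployment-id>", payload)
print(response["predictions"][0])  # predicted class and probability
```

In practice Jess never needs this: the one-click deployment and the OpenScale monitors give her the same scoring and oversight without writing code, which is precisely the point of the low-code flow.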