In this post, I will break down the ideas behind IBM’s Data Fabric entry point called “MLOps and Trustworthy AI.” With as little jargon as possible, I will explain what the solution entails, what problems it solves, and why it matters to any organization that wants to build AI applications.
Businesses build AI and ML applications to gain tangible benefits through improved ROI that either generates revenue, saves costs, or both. AI applications require building, deploying, and monitoring AI models. These models learn common patterns from historical data and are then fed unseen data to make predictions about future events. A variety of model types are available to solve a wide range of business problems, from predictive analytics (e.g., should we offer a customer a promotion?) to customer segmentation (e.g., grouping similar purchase behavior for targeted marketing) to forecasting and more.
As organizations progress on their AI journey, three common problems emerge: a lack of the right data, difficulty deploying models efficiently, and the need for a repeatable process that holds up under growing compliance demands.
Problem #1: Lack of the right data
The first and most common problem organizations face is a lack of the right data. According to 451 Research, 74% of AI adopters report data access as a challenge. This is because data is often unorganized, improperly labeled, spread across disparate sources, or lacking consistent access permissions. Data scientists spend the majority of their time searching for, accessing, organizing, cleaning, and preparing data for modeling. Algorithms and models are only as good as the data used to create them, and many organizations lack confidence in their data. A similar problem arises when gathering the right data for independent validation of models, which is essential for trusting a model in development. Collecting the right data is a problem that affects almost every organization that has attempted any type of AI or ML application.
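To make the data-preparation burden concrete, here is a minimal sketch of the kind of work that consumes most of a data scientist's time: joining records from disparate sources, aggregating them into features, and imputing missing values. The dataset and column names are entirely hypothetical, and this illustrates generic pandas usage, not Watson Knowledge Catalog's API.

```python
import pandas as pd

# Hypothetical customer data pulled from two disparate sources.
crm = pd.DataFrame({
    "customer_id": [1, 2, 3, 4],
    "age": [34, None, 45, 29],  # a missing value to clean up
    "region": ["east", "west", "west", "east"],
})
purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

# Join, aggregate, and impute -- typical preparation before any modeling.
spend = purchases.groupby("customer_id")["amount"].sum().rename("total_spend")
features = crm.merge(spend, on="customer_id", how="left").fillna(
    {"age": crm["age"].median(), "total_spend": 0.0}
)
print(features)
```

Even this toy version shows why disparate, inconsistently labeled sources slow modeling down: every join key, missing value, and aggregation rule must be resolved before a single model can be trained.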
Problem #2: Deploying models efficiently
The second most common problem is deploying models into a production environment where they can be used to achieve business outcomes (i.e., generate revenue, reduce costs, or both). Suppose a data scientist solves the data issue and gathers enough data to build a model. The likelihood that this model will make it to production – that is, be scored on unseen data to automate a business decision – is far too low. According to Gartner, 53% of AI/ML projects are stuck in these pre-production phases. Forrester likewise reported that a major US bank took 2 weeks to develop a fraud model, 2 weeks to document it, and 8 months to deploy it to a production environment…an unfortunately common occurrence.
A variety of factors prevent models from entering production. According to Gartner, organizations often lack a formal operationalization methodology for placing models in production, leading to confusion and delays. Communication gaps between the various teams (i.e., data, ML, and application engineering teams) cause further delays. Lastly, many teams report challenges explaining how a model was built, which data was used to create it, and why the model recommended a certain decision. This explainability problem is becoming increasingly important as data scientists are asked to explain to business leaders how a model operates, given the risks of irresponsible AI.
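The gap between "a model exists in a notebook" and "a model is deployed" can be sketched in its simplest form: train on historical data, persist the artifact, and expose a scoring function that a serving process would wrap. This is a generic illustration using scikit-learn and joblib on synthetic data, not the Watson Studio deployment mechanism; the file name and function are assumptions for the example.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Train a simple fraud-style classifier on synthetic historical data.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# "Deployment" in its most minimal form: persist the trained artifact so a
# separate serving process can load it and score unseen records.
joblib.dump(model, "fraud_model.joblib")

def score(records):
    """Load the deployed artifact and return predictions for new records."""
    deployed = joblib.load("fraud_model.joblib")
    return deployed.predict(records).tolist()

preds = score(X[:5])
print(preds)
```

Everything a real production deployment adds on top of this sketch – access control, versioning, monitoring, rollback, documentation – is precisely where the delays described above come from.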
Problem #3: Repeatability with Compliance
The last problem, regulation and compliance, is really a repeatability issue. Placing a single model in production is a challenge; deploying dozens more while respecting compliance regulations like GDPR and SR 11-7 for Financial Services can become insurmountable for data science teams. The ability to scale model building is particularly pertinent to heavily regulated industries such as Healthcare and Financial Services, but it applies to any organization intending to run multiple models in production. A repeatable, compliant process requires:
- People – model validators who approve all models before they go into production
- Documentation – systems to automatically log model history and usability
- Workflows – data science flows that can be reused again and again
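The "workflows" item above is the easiest to illustrate: a reusable pipeline captures preprocessing and modeling in one object so that every new model follows the same validated steps. The sketch below uses a generic scikit-learn `Pipeline` with hypothetical column names; it is an illustration of the reusable-workflow idea, not the pipeline feature of any specific IBM product.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

def build_pipeline(numeric_cols, categorical_cols):
    """A reusable workflow: the same preprocessing and modeling steps
    can be applied to any dataset with these column roles."""
    prep = ColumnTransformer([
        ("num", StandardScaler(), numeric_cols),
        ("cat", OneHotEncoder(handle_unknown="ignore"), categorical_cols),
    ])
    return Pipeline([("prep", prep), ("model", LogisticRegression())])

# Hypothetical churn-prediction training data.
df = pd.DataFrame({
    "age": [25, 40, 31, 58, 22, 47],
    "region": ["east", "west", "east", "west", "east", "west"],
    "churned": [0, 1, 0, 1, 0, 1],
})
pipe = build_pipeline(["age"], ["region"])
pipe.fit(df[["age", "region"]], df["churned"])
preds = pipe.predict(df[["age", "region"]]).tolist()
print(preds)
```

Because the whole workflow lives in one object, it can be versioned, documented, and handed to a validator as a single reviewable unit, which is exactly what the people and documentation items above demand.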
Solution: MLOps and Trustworthy AI
Great news: these three top problems are solved by the MLOps and Trustworthy AI entry point to Data Fabric.
The data problems are solved by integration with Watson Knowledge Catalog (WKC). The catalog, combined with the power of the Data Fabric, gives data scientists a complete view of quality data that is governed, self-served, and ready for analysis by multiple stakeholders. AI models are only as good as the data on which they’re built, and WKC provides high-quality, metadata-enriched, curated data that’s ready for consumption by data scientists. This is the foundation for trustworthy AI.
The model operations and deployment problems are solved by Watson Studio. It includes a variety of model-building tools, the ability to easily put models into production, capabilities to explain the decisions those models make (in natural language), checks for bias, and automatic monitoring over time.
Finally, the repeatability problems are solved by establishing a repeatable process with Watson Studio and Watson OpenPages. This includes pipelines to repeat workflows, factsheets to automatically document information about a model throughout its lifecycle (e.g., history, dependencies), and OpenPages to provide governance automation for models with dashboards and validation controls. Together, these features establish a governed AI process that is reliable and repeatable. This level of product support is essential to meet regulatory requirements, keep data scientists from being taxed with maintenance overhead, and ensure governance and privacy throughout the entire AI lifecycle.
In short, the MLOps and Trustworthy AI solution uses key components of Cloud Pak for Data to remove the most common blockers organizations face while building AI applications. With it, organizations can confidently deploy their AI Models in production in a repeatable, scalable, and trustworthy way aligned with legal and internal compliance requirements.
- Learn more about Factsheets, new to Cloud Pak for Data 4.5, here
- Follow the instructions here to sign up and take our Trustworthy AI demo. Click the dark blue “Try it free” button.