Several factors influence the adoption of goods and services; in this article, we focus on two: usability and trust.
Like providers of any product or service, providers of AI-enabled products seek adoption and continued use of their goods and services. But challenges such as poor usability and a lack of consumer trust can significantly hinder that adoption. In fact, one of the biggest obstacles in delivering AI-enabled solutions isn't just building them; it's ensuring the AI is trusted, useful, and embraced by end users. Public sentiment on AI trust bears this out: only 29% of U.S. consumers believe current regulations are sufficient for AI safety, and 72% say more regulation is needed.
To meet adoption goals, it's essential to empower users to take the transformative step toward trusted AI solutions. Trust and usability are key ingredients for increasing AI adoption and must be part of the overall delivery process, especially when aiming for success in complex Order Management Systems (OMS) and other enterprise applications.
Poor Usability Can Hamper AI Application Adoption
Did you know that one in four applications downloaded on mobile devices is never used? A key contributing factor is poor usability (Hoehle & Venkatesh, 2015). The International Organization for Standardization (ISO) defines usability as “the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.”
At a minimum, without this level of usability, an AI-enabled application is at risk of being abandoned. For example, a recent study of generative AI applications found that resolving usability issues improved the overall user experience; in other words, poor usability erodes customer satisfaction and, ultimately, adoption of the product. Academic studies have also shown that usability and user experience are significant predictors of users' intention to use AI applications such as Bing Chat or ChatGPT.
To help with the adoption of AI-enabled enterprise solutions, usability must be treated as a core design attribute, not an afterthought.
Trust and Explainability in Enterprise AI Solutions
To further increase the adoption rate of a product or service, it's essential to build trust in the AI's capabilities and outcomes, not just in its usability; both play a critical role in shaping user confidence and long-term adoption.
If you can create trustworthy AI tools that deliver visible, valuable, and explainable outcomes, you have a winning combination. Without trust, built through explainable AI, data governance, and ethical design, you risk adoption challenges.
One way to cultivate trust is to provide visibility into what informs the AI model’s internal state and behaviour. For example:
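As one concrete illustration, consider a minimal sketch of feature-importance reporting. This is not taken from any specific IBM product: the dataset, feature names, and model below are hypothetical stand-ins, and the tooling (scikit-learn's permutation importance) is an assumed choice. The point is that surfacing which inputs most influenced a prediction gives users visibility into what drives the AI's behaviour.

```python
# Hypothetical sketch: surfacing which inputs drive a model's predictions,
# one possible way to make AI behaviour visible to end users.
# Assumes scikit-learn; the feature names and model are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Toy stand-in for an enterprise dataset (e.g., order-delay prediction in an OMS).
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
feature_names = ["order_value", "carrier", "destination_zone", "item_count"]  # illustrative

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance estimates how much each feature contributes
# to the model's performance on held-out data.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Present the drivers in plain language, ordered by influence.
for idx in result.importances_mean.argsort()[::-1]:
    print(f"{feature_names[idx]}: importance {result.importances_mean[idx]:.3f}")
```

In a production system, the same idea might appear as a short, plain-language explanation next to each AI-generated recommendation, for example noting that an order was flagged mainly because of its destination zone and carrier.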
The Building Blocks of Trustworthy AI
Trust can be built on a few foundational principles that help users invest their confidence in the right AI solution. Key building blocks for trustworthy AI include explainable outcomes, governance of the data that informs the model, and ethical design.
Without these foundational building blocks, the adoption of AI-enabled solutions will remain at risk.
Users today expect usability and trust from AI-powered tools. By embedding usability best practices and trust-building measures into AI design, organizations can unlock the full potential of AI adoption, driving measurable value in enterprise systems like Order Management Systems.
In summary, usability and trust play an important role in sustainable AI adoption.
Learn more about IBM’s approach to trustworthy AI and how to design AI-enabled solutions that users will embrace.
By: Owen Chilongo, Product Manager; Ailbhe Cashell, Content Design Manager.
- Hoehle & Venkatesh (2015)
- https://peerj.com/articles/cs-2421/