
Let’s Talk About IBM’s Granite Models for GenAI

  

Here’s why IBM’s Granite GenAI models stand out from the crowd

IBM’s Granite foundation models are purpose-built generative AI solutions, designed to seamlessly integrate GenAI within business applications and workflows. They’re available to customers on IBM watsonx, IBM’s data and AI platform for generative AI, foundation models and machine learning for business. 

Here are three key ways in which the Granite series stands out when compared with many of the other foundation models.

  1. They’re specifically created for business 

The Granite series models are custom-built for enterprise business applications, designed to meet the higher levels of precision and accuracy required in business environments compared to consumer AI initiatives.

There are several variants of Granite, such as Granite-13b-chat, Granite-13b-instruct, Granite-20b-multilingual and Granite-8b-japanese, each suited to particular use cases. Granite-13b-chat is optimised for dialogue and works well with virtual agents and chatbots. Granite-13b-instruct supports tasks like question answering, summarization, classification, generation, extraction, and retrieval augmented generation (RAG) implementations, such as searching enterprise knowledge bases to generate tailored responses to customer service questions. Granite-20b-multilingual supports tasks in French, German, Portuguese, Spanish and English, while Granite-8b-japanese supports tasks in Japanese.
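To make the RAG idea above concrete, here is a minimal sketch of the pattern: retrieve the most relevant snippet from a small in-memory knowledge base, then build a grounded prompt for an instruction-tuned model such as granite-13b-instruct. The knowledge base, scoring function and prompt template are illustrative assumptions, not part of any IBM API; a production system would use a vector database and an embedding model instead of keyword overlap.

```python
# Toy RAG sketch (illustrative only): keyword-overlap retrieval plus
# prompt assembly. Real deployments would use embeddings and a vector
# store, then send the prompt to a hosted model such as
# granite-13b-instruct on watsonx.

def score(query: str, doc: str) -> int:
    """Toy relevance score: count query words that appear in the document."""
    doc_words = doc.lower().split()
    return sum(1 for word in query.lower().split() if word in doc_words)

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest overlap score."""
    return max(docs, key=lambda d: score(query, d))

def build_prompt(query: str, context: str) -> str:
    """Assemble a prompt that grounds the model's answer in the context."""
    return (
        "Answer the question using only the context below.\n\n"
        f"Context: {context}\n\nQuestion: {query}\nAnswer:"
    )

knowledge_base = [
    "Refunds are processed within 5 business days of approval.",
    "Support hours are 9am to 5pm, Monday through Friday.",
]

query = "How many days until refunds are processed?"
context = retrieve(query, knowledge_base)
print(build_prompt(query, context))
```

The key design point is that the model only ever sees enterprise content you chose to retrieve, which is what lets RAG ground answers in a trusted knowledge base.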

Importantly, Granite models can be easily integrated with existing IBM tools such as Watson Assistant and Watson Discovery, allowing businesses to enhance customer service and data discovery capabilities seamlessly. By using prompt engineering via IBM’s watsonx platform, the Granite models can be tailored to particular industry or company-specific tasks and applications. 

To build Granite, IBM curated a massive dataset of relevant unstructured language data from sources across academia, the internet, enterprise (e.g., financial, legal), and code.

By training the models on enterprise-specific datasets, IBM ensures familiarity with the specialised language and jargon of these sectors. That means AI decisions and content can be informed by relevant industry knowledge. For example, initial IBM Research evaluations and testing suggest that Granite-13b is one of the top-performing models on finance tasks, potentially performing better than much larger models, because it was trained on high-quality finance data.

  2. They’re built for trust

There are concerns about the data that’s used within GenAI and businesses are mindful of the risks. Many of the largest foundation models have been trained on examples from the internet. It’s hard to be sure exactly what data they were trained on and there’s a chance it could include data from the darker corners of the web. 

These issues are compounded by media stories of AI wildly hallucinating facts, displaying bias or even insulting customers. Quite clearly, if you don’t have confidence in the predictions and content that AI models generate, it’s an instant fail.

This is another reason why IBM has prioritised using domain-specific datasets for model training. It’s also why IBM has put the Granite training data through a rigorous end-to-end process that includes filtering out duplicates, copyrighted material, poor-quality data and data with GDPR protections. The data has also been passed through IBM’s HAP detector, a language model that detects and roots out hate, abuse and profanity (HAP).

 

Granite models have been designed to meet the strict data governance, regulatory and risk criteria defined and enforced by the IBM AI Ethics code and Chief Privacy Office. 

 

Moreover, companies can take added confidence and peace of mind from the fact that IBM will indemnify clients against third-party IP claims on IBM-developed foundation models such as Granite. Unlike some other large language model providers, and consistent with IBM’s standard approach to indemnification, IBM does not require customers to indemnify IBM for a customer’s use of IBM-developed models. Importantly, IBM does not cap its indemnification liability for IBM-developed models, ensuring robust protection for customers.

 

A related point is that the Granite models are designed to ensure data security and compliance. They can be deployed within a customer's private cloud or on-premises, enabling data to stay within a trusted environment.

  3. They prioritize efficiency

IBM is focused on finding ways to implement AI technologies that use computing resources more efficiently while shrinking AI’s carbon footprint. At 13 billion parameters, the Granite language models are more efficient than larger foundation models: they can fit onto a single V100-32GB GPU (Graphics Processing Unit), while larger models, which can exceed 100 billion parameters, require many GPUs, adding infrastructure complexity and cost.
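The single-GPU claim above can be checked with back-of-the-envelope arithmetic: model weights stored in 16-bit precision need roughly 2 bytes per parameter. The figures below are rough illustrations only; real memory usage also includes activations, the KV cache and framework overhead.

```python
# Rough GPU memory estimate for inference: ~2 bytes per parameter
# in fp16/bf16. Illustrative arithmetic only, not a deployment guide.

def model_weights_gb(params_billion: float, bytes_per_param: int = 2) -> float:
    """Approximate weight memory in GB at 16-bit precision."""
    return params_billion * 1e9 * bytes_per_param / 1e9

granite_13b = model_weights_gb(13)  # ~26 GB -> fits a single 32 GB V100
llama2_70b = model_weights_gb(70)   # ~140 GB -> needs multiple GPUs

print(f"13B weights: ~{granite_13b:.0f} GB")
print(f"70B weights: ~{llama2_70b:.0f} GB")
```

A 13B model at ~26 GB sits just under a V100’s 32 GB, which is why it can be served on one card, whereas a 70B model must be sharded across several.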

 

The Granite models strike a balance between performance and efficiency when compared to the much larger models. They can deliver strong results on key enterprise tasks while being more cost-effective to deploy and run.

 

IBM's internal benchmarking found the Granite models deliver high accuracy with lower hardware requirements compared to larger models, achieving better price-performance. The focused training of IBM’s models means they can do more with fewer parameters.

 

For example, although it is more than five times smaller than models like LLaMA-2 70b, Granite (at 13b) matched or outperformed it on 9 out of 11 tasks in a financial services benchmark. This is because Granite is trained on higher-quality finance-specific data, underlining that sheer size is not always necessary for the best results, especially in specialized domains.

 

There’s an environmental benefit too. The Granite 13b models have a significantly lower carbon footprint, for both training and inference, than massive 100b+ or 1T+ parameter models, while still performing competitively on enterprise use cases.

IBM is continuing to evolve and expand Granite. For example, the company recently announced the open-sourcing of four variations of the Granite code models, ranging from 3 to 34 billion parameters, under the Apache 2.0 license. They are accessible on Hugging Face (https://huggingface.co/ibm-granite) and GitHub (https://github.com/ibm-granite).

Interest in IBM’s AI tech is growing. Samsung and Citi were reportedly among the first 150 corporate customers using watsonx when it began rolling out in July 2023. While specifics have not been disclosed, these companies will be leveraging the watsonx platform and Granite to build and scale GenAI applications tailored to their business needs.

Granite, as part of watsonx, is poised to accelerate enterprise adoption of AI, ushering in a new era of AI-powered innovation and competitive advantage. Enterprises that embrace this opportunity to deploy generative AI with trust and transparency will be the ones that thrive in the years ahead. 

There’s more about IBM’s Granite model series on watsonx here.