Introduction
IBM Watson Natural Language Understanding (NLU) is a cloud-native product that uses deep learning to extract metadata from text, such as keywords, emotion, and syntax. Under the covers, IBM Watson NLU uses text analytics from the IBM Watson NLP Library to extract categories, classifications, entities, keywords, sentiment, emotion, relations, and syntax.
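For readers who want to try the service directly, here is a minimal sketch of an analyze call using the ibm-watson Python SDK. The API key, service URL, and version date are placeholders for the values of your own NLU instance, and the selected features are just a few of the metadata types listed above.

    # pip install ibm-watson
    from ibm_watson import NaturalLanguageUnderstandingV1
    from ibm_watson.natural_language_understanding_v1 import (
        Features, EntitiesOptions, KeywordsOptions, SentimentOptions, EmotionOptions
    )
    from ibm_cloud_sdk_core.authenticators import IAMAuthenticator

    # Placeholder credentials and endpoint from your own NLU service instance.
    authenticator = IAMAuthenticator("YOUR_API_KEY")
    nlu = NaturalLanguageUnderstandingV1(version="2022-04-07", authenticator=authenticator)
    nlu.set_service_url("YOUR_SERVICE_URL")

    # Request several kinds of metadata in a single call.
    response = nlu.analyze(
        text="IBM Watson NLU now runs faster on Intel Xeon processors.",
        features=Features(
            entities=EntitiesOptions(limit=5),
            keywords=KeywordsOptions(limit=5),
            sentiment=SentimentOptions(),
            emotion=EmotionOptions(),
        ),
    ).get_result()

    print(response)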
The Intel oneAPI Deep Neural Network Library (oneDNN) provides highly optimized implementations of deep learning building blocks. With this open source, cross-platform library, deep learning application and framework developers can use the same API for CPUs, GPUs, or both; the library abstracts away instruction sets and other complexities of performance optimization. As a result, applications can be deployed with optimizations for Intel CPUs and GPUs without writing any target-specific code, yielding faster application performance and greater developer productivity.
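NLU's internals are not shown in this post, so as an illustration of the "no target-specific code" point, here is a small sketch of how a oneDNN-backed framework such as TensorFlow is typically switched on and inspected from Python. The TF_ENABLE_ONEDNN_OPTS and ONEDNN_VERBOSE environment variables are standard toggles; the model code itself stays hardware-agnostic.

    import os

    # Opt in to oneDNN-optimized kernels in TensorFlow (default on recent x86 builds)
    # and ask oneDNN to log which primitives and ISA it dispatches to at runtime.
    os.environ["TF_ENABLE_ONEDNN_OPTS"] = "1"
    os.environ["ONEDNN_VERBOSE"] = "1"

    import tensorflow as tf  # imported after setting the environment variables

    # Ordinary, target-agnostic model code: oneDNN selects the best kernels for
    # the CPU it finds (e.g. AVX-512 on Xeon), with no Xeon-specific code here.
    x = tf.random.uniform((32, 128))
    layer = tf.keras.layers.Dense(64, activation="relu")
    print(layer(x).shape)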
But what does this mean for NLU?
By incorporating the Intel oneDNN library, NLU can take advantage of new hardware features and accelerators on Intel Xeon-based infrastructure to improve performance. In tests on an Intel® Xeon® Silver 4210 CPU, an improvement of up to 35% in function throughput was observed for the classification, sentiment, and entity use cases.
Figure 1. Average performance using oneDNN with IBM Watson® NLP
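The benchmark methodology behind Figure 1 is not published here. As a rough, client-side way to compare request throughput before and after such an upgrade, a sketch like the following can be used, where analyze_once is a hypothetical stand-in for one NLU request (for example, the sentiment call shown earlier).

    import time
    from concurrent.futures import ThreadPoolExecutor

    def analyze_once(text: str) -> None:
        # Hypothetical placeholder: issue one NLU analyze request here.
        ...

    def measure_throughput(texts, workers: int = 8) -> float:
        """Return completed requests per second for one batch of texts."""
        start = time.perf_counter()
        with ThreadPoolExecutor(max_workers=workers) as pool:
            list(pool.map(analyze_once, texts))
        return len(texts) / (time.perf_counter() - start)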
Conclusion
This is another example of how Intel and IBM Cloud are better together. IBM Watson Natural Language Understanding customers can take advantage of these latest performance enhancements as of August 2022, helping NLU deliver improved latency at scale as it handles millions of API requests every day.
To learn more: