How to run AI inferencing on IBM Power10 leveraging MMA


Mon October 24, 2022 10:08 AM



By Charisse Lu, Peter Westerink, Christine Ouyang

As the world moves to deploy AI everywhere, attention is turning from how to quickly and accurately build and train AI models to how to rapidly deploy those models, make inferences, and gain insights. IBM systems can help accelerate these deployments by running AI "in place" with the four new Matrix Math Accelerator (MMA) units in each Power10 core. MMAs provide an alternative to external accelerators, such as GPUs, and the device management they require, for executing statistical machine-learning and inferencing (scoring) workloads. They reduce data center footprint, infrastructure management, and support costs, and lead to a greatly simplified AI solution stack that enables the creation of a data fabric architecture. Leveraging data gravity on Power10 allows AI to execute during a database operation or concurrently with an application, as you can see in the demos described in this blog; this is key for time-sensitive use cases such as fraud detection, because fresh input data reaches the AI faster, enhancing both the quality and the speed of insight.
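To give a flavor of what the hardware provides, here is a minimal sketch of a 4x4 single-precision matrix-multiply micro-kernel written against the POWER10 MMA built-ins exposed by GCC 10 and later (compile with -O2 -mcpu=power10). Each xvf32gerpp instruction performs one rank-1 (outer-product) update of a 4x4 accumulator tile, the primitive that MMA-optimized math kernels are built from. The function name and data layout here are illustrative, not taken from the blog:

```c
#include <altivec.h>

typedef vector unsigned char vec_t;

/* Illustrative 4x4 SGEMM micro-kernel: C (4x4, row-major) = A * B.
 * A is laid out so that column k of the logical 4xK matrix is the
 * contiguous 4 floats at A[k*4]; B is the Kx4 matrix, row-major. */
void sgemm_4x4_mma(const float *A, const float *B, float *C, long K)
{
    __vector_quad acc;                            /* 512-bit accumulator */
    __builtin_mma_xxsetaccz(&acc);                /* zero it */

    for (long k = 0; k < K; k++) {
        vec_t va = (vec_t) vec_xl(0, &A[k * 4]);  /* column k of A */
        vec_t vb = (vec_t) vec_xl(0, &B[k * 4]);  /* row k of B */
        /* Rank-1 update: acc[i][j] += A[i][k] * B[k][j] */
        __builtin_mma_xvf32gerpp(&acc, va, vb);
    }

    vector float rows[4];
    __builtin_mma_disassemble_acc(rows, &acc);    /* unpack 4 result rows */
    for (long i = 0; i < 4; i++)
        vec_xst(rows[i], 0, &C[i * 4]);           /* store row i of C */
}
```

With four MMA units per core, tuned kernels keep several such accumulators in flight at once to approach the hardware's peak throughput.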

In this blog, we explain how to use the new MMA technology, show the container-based approach we used to define the demo framework, and demonstrate the value of MMA in six real-world scenarios.
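Most applications never touch the MMA instructions directly: exploitation is transparent through MMA-enabled math libraries (for example, recent OpenBLAS builds include POWER10 MMA kernels), which AI frameworks such as TensorFlow and PyTorch can in turn be built on. As a minimal sketch, assuming such an OpenBLAS is installed, an ordinary CBLAS call is all the application-level code needed; the matrices are illustrative:

```c
/* Build with: gcc -O2 demo.c -lopenblas */
#include <cblas.h>
#include <stdio.h>

int main(void)
{
    enum { M = 2, N = 2, K = 3 };
    float A[M * K] = { 1, 2, 3,
                       4, 5, 6 };     /* 2x3, row-major */
    float B[K * N] = { 7,  8,
                       9, 10,
                      11, 12 };       /* 3x2, row-major */
    float C[M * N] = { 0 };

    /* C = 1.0 * A * B + 0.0 * C; on Power10 an MMA-enabled BLAS
     * dispatches this SGEMM to MMA micro-kernels automatically. */
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                M, N, K, 1.0f, A, K, B, N, 0.0f, C, N);

    printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);  /* 58 64 / 139 154 */
    return 0;
}
```

The same principle carries up the stack: build, or pull container images of, the frameworks against MMA-enabled libraries, and existing inferencing code runs accelerated with no source changes.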

Read the entire blog to learn more: 


