
Enterprise AI with AIX

By RAJ KRISHNAMURTHY posted Tue October 19, 2021 01:00 PM



What is Enterprise AI?


Enterprise systems are the data and transaction “hub” for business. The ability to execute models within the envelope of a transaction, while still meeting SLAs, enables real-time inferencing: use cases such as risk/fraud scoring, anomaly detection, and loan/claim pre-approvals all benefit from it. As “fresh” data aggregates inside an operational data store, acting on that data promptly helps extract timely and useful insights. This is sometimes called operational analytics, and it covers use cases such as dynamic price optimization and moving-aggregate triggers.


With IBM Power10, the newest generation of Power Systems, a Matrix Math Accelerator (MMA) for AI is present in every CPU core. This allows models to run close to the data and transactions, avoiding the additional latency of calls to external AI platforms. AIX 7.3 provides the ability for applications to leverage the MMA AI accelerator.


AI with AIX


In-LPAR Inferencing

Models from environments such as H2O Driverless AI can be run directly on AIX for inferencing on tabular data. For example, H2O MOJO models can be scored against IBM Db2 data, with predictions or classifications pushed back to the database or consumed by a downstream application. Databases can also apply machine learning to in-database data, executing Python-invoked models as part of UDFs (User Defined Functions) for inferencing.


AI “Side-Cars”: Running AI Adjacent to AIX

Models developed on the cloud or on x86 clusters can also be brought into Linux or OpenShift containers running alongside AIX. These models can be invoked from an AIX process through shared memory or through external web requests. Models invoked through web requests can likewise leverage data inside databases on AIX.


AI Developer Tools

ISVs (Independent Software Vendors) and developers can build AI applications on AIX. IBM ESSL (Engineering and Scientific Subroutine Library) 7.1 provides functions that leverage the IBM Power10 AI accelerator (MMA) without requiring low-level assembly code. The IBM Open XL compilers for AIX 7.3 also provide built-in functions that let developers use MMA functionality from their C/C++ code.


AIX allows model inferencing directly inside an LPAR hosting a database, removing the latency of calls to external AI platforms. AIX 7.3 applications can leverage the MMA AI accelerator on IBM Power10 systems to meet SLAs and support real-time inferencing use cases, and the libraries and compilers available on AIX 7.3 support development of applications that use the accelerator.

CE-AIX: Center for Enterprise AI eXchange at the University of Oregon

This center has been set up at the University of Oregon for students, researchers, and center partners to collaborate on Enterprise AI. Accounts on these systems are available to interested parties, and the systems may be used by center partners (IBM customers, Business Partners, and ISVs) to experiment with and explore Enterprise AI. Researchers at the center can collaborate on ModelOps, AI ethics, robustness, and other contemporary AI challenges.