IBM Z Optimized for TensorFlow

IBM is excited to introduce the open beta program for IBM Z Optimized for TensorFlow. This open beta enables IBM z16 clients to obtain an enhanced TensorFlow build that can leverage the IBM z16 Integrated Accelerator for AI.

 

Accelerated Model Inference with IBM z16

 

With the z16, IBM has brought the Telum processor to market.

First announced in 2021, IBM Telum features a dedicated on-chip AI accelerator focused on delivering high-speed, real-time inferencing at scale. This accelerator is designed to speed up the compute-intensive operations commonly found in deep learning models. You can read more about the Integrated Accelerator for AI in our blog and in our recent z16 announcement letter.

 

While the IBM Integrated Accelerator for AI is clearly an amazing technology, the story would be incomplete without enabling the AI software ecosystem to utilize it.

 

With IBM Z Optimized for TensorFlow, we are enabling one of the most popular open-source frameworks for AI to leverage the IBM z16 Integrated Accelerator for AI. TensorFlow is an end-to-end open AI platform with a rich and robust ecosystem. We have observed strong adoption of TensorFlow among our enterprise clients due to the wide range of capabilities it provides, in particular its focus on the deployment of AI assets.

 

On IBM zSystems and LinuxONE, TensorFlow has the same ‘look and feel’ as on any other platform. You can continue to build and train your TensorFlow models on the platform of your choice – whether x86, cloud, or IBM zSystems. TensorFlow models trained on other platforms are portable to zSystems.
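
For illustration, here is a minimal sketch of the export step. The model, shapes, and the path name fraud_model are hypothetical; the point is that a model trained on any platform and saved in TensorFlow's standard SavedModel format can be copied to IBM zSystems as-is.

    import tensorflow as tf

    # Hypothetical example: build and train a small Keras model on x86 or in
    # the cloud, then export it in the standard SavedModel format.
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    # model.fit(x_train, y_train, epochs=5)   # training data omitted here

    # Writes a SavedModel directory; that directory is what gets copied to IBM Z.
    model.save("fraud_model")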

 

With IBM Z Optimized for TensorFlow, you can now bring TensorFlow models to IBM z16 and exploit the Integrated Accelerator for AI - with no model changes. IBM Z Optimized for TensorFlow will detect the operations in your model that are supported by the Integrated Accelerator for AI and transparently target them to the device.
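
As a sketch of what this looks like in practice (assuming the hypothetical fraud_model SavedModel from the previous example has been copied into the IBM Z Optimized for TensorFlow container), the inference code is ordinary TensorFlow and nothing in it references the accelerator:

    import numpy as np
    import tensorflow as tf

    # Load the SavedModel that was exported on another platform.
    model = tf.keras.models.load_model("fraud_model")

    # Run inference as usual. On IBM z16, supported compute-intensive operations
    # in the graph are routed to the Integrated Accelerator for AI during
    # execution; the Python code does not change.
    batch = np.random.rand(32, 20).astype("float32")   # placeholder input batch
    scores = model.predict(batch)
    print(scores.shape)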

 

As we work to further optimize TensorFlow to get the best possible benefit out of the Integrated Accelerator for AI, our goal is to focus on models and use cases that are important to our customers. In making this support available as an open beta, we are enabling our z16 customers to test the support and provide feedback to help drive our optimization efforts.

 

  

The Road to General Availability

 

Over the coming months, we expect to continue to deliver enhancements to this support as we move towards completing our first development phase. As part of this work, we plan to expand our optimization efforts to cover a broader set of models. Additionally, we intend to ensure that TensorFlow Serving can be used with IBM Z Optimized for TensorFlow.

Ongoing updates and additional information on progress toward these goals will be posted here.


 

Model Usage


IBM Z Optimized for TensorFlow follows IBM's train anywhere and deploy on IBM Z strategy. We’ve provided detailed documentation covering deployment, model validation, execution on the Integrated Accelerator for AI, modifying default execution paths, and more.

 

Details are in the documentation, but the general procedure is as follows:

  • Build and train the TensorFlow model on the platform of your choice.
  • Deploy the model in the IBM Z Optimized for TensorFlow container.
  • On an IBM z16 system, TensorFlow graph execution transparently targets the Integrated Accelerator for AI for several compute-intensive operations during inferencing, with no changes needed to your TensorFlow models (a way to observe this placement is sketched below).
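
As a quick way to observe where operations are placed during a run, TensorFlow's generic device-placement logging can be enabled. This sketch uses only standard TensorFlow APIs and the hypothetical fraud_model from earlier; the documentation linked above covers the beta's own controls for modifying default execution paths.

    import numpy as np
    import tensorflow as tf

    # Print the device each operation executes on. This is a standard TensorFlow
    # debugging facility for confirming which parts of the graph run where.
    tf.debugging.set_log_device_placement(True)

    model = tf.keras.models.load_model("fraud_model")
    batch = np.random.rand(32, 20).astype("float32")
    model.predict(batch)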

 

We’ve provided sample scripts and a detailed tutorial that includes download and setup instructions, as well as steps to help you run the samples inside the container. For samples and tutorials, please visit the GitHub repository here.


 

Downloading the open beta


The IBM Z and LinuxONE Container Registry (ICR) includes open-source software in container images that are often used as the foundation for new composite workloads, and it provides a secure and trustworthy content source. The IBM Z Optimized for TensorFlow image is freely available on the IBM Z and LinuxONE Container Registry. The image runs both on Linux on IBM Z and in the z/OS Container Extensions (zCX) environment.

Information on downloading the IBM Z Optimized for TensorFlow container image is outlined here.

 

 

Becoming a sponsor user

Sponsor users are active participants who work alongside our product teams to help refine IBM products by providing feedback, ideas, and domain expertise. As a sponsor user, you will participate in our research studies, helping our team understand your current approaches to AI so that we can build future products that benefit you.

 

If you'd like to be a sponsor user, please email us at aionz@us.ibm.com.

 

Note: In addition to TensorFlow, other AI capabilities are available to leverage the Integrated Accelerator for AI. More information can be found here: https://ibm.github.io/ai-on-z-101/