IBM Z and LinuxONE - IBM LinuxONE Ecosystem


Journey inside the IBM LinuxONE 5

By Elizabeth K. Joseph posted Tue May 06, 2025 12:01 AM

  

Today we're excited to announce the IBM LinuxONE 5. As the name suggests, this is the fifth iteration of the LinuxONE system, and I'll let you in on a little secret: LinuxONE is what got me excited about mainframes. Each release since I joined IBM in 2019 has built upon that excitement as the hardware gets more advanced, and I'm continually surprised at how the researchers and engineers at IBM have been able to forecast the needs of LinuxONE clients and the broader industry years in advance. Before the world was introduced to consumer-friendly AI, chip designers at IBM were already working on the silicon to develop the next AI chip. So what is this hardware I'm so hyped about? Let's dive in.

A couple of weeks ago I wrote A Tour Inside the IBM z17. A key thing to know is that LinuxONE benefits from much of the innovation that goes into IBM Z, so when you look at specifications and the overall chassis, there is a considerable amount of overlap. In fact, the Redbooks I referenced at the end of my z17 tour contain deeply technical information that applies equally to LinuxONE. So instead of repeating everything I wrote in that blog post, I'll go into a bit of what makes this release really special for Linux.

First off, you have the IBM Telum II for IBM LinuxONE. In this iteration, it's presented as an Integrated Facility for Linux (IFL) processor, which means it is dedicated to running Linux. Just like with its predecessor, once you're up and running, familiar tooling like OpenSSH will tap into the CP Assist for Cryptographic Functions (CPACF), and gzip will automatically leverage the Nest Accelerator Unit (NXU) for compression and decompression. The new Telum II processor is a 5nm microprocessor (down from 7nm) with eight high-performance cores running at 5.5GHz and a 40% increase in on-chip cache capacity over Telum. It's a remarkable processor. And the drawers your processors live in are also packed with memory, adding up to 64 TB in a fully loaded, four-frame system across four CPC drawers.
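If you're curious whether a given Linux instance actually sees these facilities, the kernel exposes them as feature flags in /proc/cpuinfo on s390x: "msa" for CPACF, "dflt" for the NXU's deflate engine, and "nnpa" for the on-chip AI accelerator. Here's a minimal sketch of parsing those flags, with a hard-coded sample line so it runs on any machine:

```python
def accelerator_flags(cpuinfo_text):
    """Return which s390x facility flags appear in a /proc/cpuinfo dump.

    "msa"  -> CP Assist for Cryptographic Functions (CPACF)
    "dflt" -> Nest Accelerator Unit (NXU) deflate engine
    "nnpa" -> Integrated Accelerator for AI
    """
    present = set()
    for line in cpuinfo_text.splitlines():
        if line.startswith("features"):
            # e.g. "features : esan3 zarch stfle msa ldisp dflt nnpa"
            present.update(line.split(":", 1)[1].split())
    return {flag: (flag in present) for flag in ("msa", "dflt", "nnpa")}

# On a real LinuxONE system you'd read the live file:
#   flags = accelerator_flags(open("/proc/cpuinfo").read())
sample = "features : esan3 zarch stfle msa ldisp dflt nnpa\n"
print(accelerator_flags(sample))
```

On hardware without these facilities (say, your x86 laptop), the flags simply never appear and everything reports False, which is exactly why tooling like OpenSSH and gzip can probe and fall back transparently.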

Close up of a Telum II wafer

The coolest thing about this processor is what I alluded to earlier: the new AI accelerator. It builds upon what was designed for the Telum, but does it even better, and it opened the door for the second big announcement made alongside Telum II: the IBM Spyre Accelerator. I know we've been hearing a lot about AI lately, and I sympathize with those who wish to distance themselves from the hype, but now that I've been working with enterprise environments for a few years I've realized that there is some really important work to be done (and secured!) that LinuxONE has been built to tackle. The 32-core Spyre Accelerator takes the technology from Telum II and builds it out, supporting up to 48 Spyre cards in a fully loaded system. For a lot of organizations, this is a game-changer.

Fancy rendered image of the IBM Spyre card, in orange (because it's in a LinuxONE)!

Where does LinuxONE stand in this AI work that will use this great new hardware? I quite like the AI on IBM LinuxONE page, but the real hidden gem that I always refer to is the AI on IBM Z 101 page over on GitHub that a bunch of my IBM AI geek friends maintain. Not only does it delve into some of the IBM tooling available, but it also mentions open source tooling you can use like TensorFlow and PyTorch. When I reached out to Andrew Sica for the latest, he shared that there is PyTorch support for the Telum II on-chip accelerator. Cool.

For models that are trained and ready for deployment, the ONNX and IBM Deep Learning Compiler documentation explains the process for LinuxONE, which includes using the IBM Deep Learning Compiler (DLC), which has been enhanced to target the Integrated Accelerator for AI. From there, you can call the compiled models from your own Python and Java applications, and it was great to see a whole repository of examples from IBM here: Using the IBM Z Deep Learning Compiler Container Images. You can also use the Triton Inference Server, for which the team has developed tooling that allows the use of ONNX-MLIR or zDLC compiled models. All of this software has been helpfully put together in the AI Toolkit for IBM Z and LinuxONE.
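One nice property of fronting your models with Triton is that clients speak the standard KServe v2 REST protocol over plain HTTP, regardless of how the model behind it was compiled. Here's a minimal sketch using only the Python standard library; the server URL, model name, and input name are hypothetical placeholders for illustration, not values from IBM's docs:

```python
import json
import urllib.request

def build_infer_request(input_name, shape, data):
    """Build a KServe v2 inference request body (the protocol Triton serves)."""
    return {
        "inputs": [
            {
                "name": input_name,
                "shape": shape,
                "datatype": "FP32",
                "data": data,
            }
        ]
    }

def infer(server_url, model_name, body):
    """POST the request to the server's v2 infer endpoint and parse the reply."""
    req = urllib.request.Request(
        f"{server_url}/v2/models/{model_name}/infer",
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Example payload for a hypothetical model taking a 1x4 FP32 tensor:
body = build_infer_request("input_1", [1, 4], [0.1, 0.2, 0.3, 0.4])
# result = infer("http://triton.example.com:8000", "my_zdlc_model", body)
```

The payload shape is the same whether the backend runs a zDLC-compiled model on a Telum II accelerator or something else entirely, which is what makes this deployment path so portable.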

Thanks to Sunny Anand and team for this diagram and the latest updates about the various components

And more? There's a lot more, but the final thing I wanted to share is that the IBM Z Datathon team has worked with thousands of students around the world to walk them through fully open source AI/ML demonstrations deployed via Jupyter Lab and Python machine learning libraries, which you can try out today with our jupyter-lab-ml repository on GitHub via virtual machines on the no-cost IBM LinuxONE Community Cloud.
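To give a tiny, self-contained flavor of the kind of classic exercise those demos walk through, here's a 1-nearest-neighbor classifier in pure Python (the toy data is invented for illustration; the actual Datathon notebooks use real datasets and full ML libraries):

```python
import math

def nearest_neighbor(train, query):
    """Classify `query` by the label of the closest training point."""
    label, _ = min(
        ((lbl, math.dist(pt, query)) for pt, lbl in train),  # Euclidean distance
        key=lambda pair: pair[1],
    )
    return label

# Toy 2-D points: two clusters labeled "a" and "b".
train = [((0.0, 0.0), "a"), ((0.1, 0.2), "a"),
         ((5.0, 5.0), "b"), ((5.2, 4.8), "b")]

print(nearest_neighbor(train, (0.2, 0.1)))  # → a
print(nearest_neighbor(train, (4.9, 5.1)))  # → b
```

It's a long way from that to training real models, but the notebooks build up the same way: data in, distance or loss function, prediction out.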

So, what else is inside these systems? There are a lot of PCIe cards, which sometimes puzzles people. Sure, maybe they're loaded up with those 48 Spyre cards, but what else? Hardware-wise, the LinuxONE itself is basically a beast of processing power, with various specialized cards to do specialized processing, from that Spyre card to the IBM Crypto Express HSM. And you need networking! And a connection to storage! Those other cards handle all of this for you and more, so when you peer into the back of your new LinuxONE 5, you may be greeted by a dizzying array of PCIe cards doing all kinds of things.
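If you ever want to see that array from the Linux side rather than the back of the frame, PCI functions show up under sysfs just like on any other Linux box. A small sketch (the path is the standard kernel sysfs location; on systems or containers without PCI visibility it simply returns an empty list):

```python
from pathlib import Path

def list_pci_functions(sysfs="/sys/bus/pci/devices"):
    """Return the PCI addresses (e.g. '0000:00:00.0') visible under sysfs."""
    root = Path(sysfs)
    if not root.is_dir():
        return []  # no PCI visibility (e.g. some containers)
    return sorted(entry.name for entry in root.iterdir())

print(list_pci_functions())
```

On a LinuxONE, s390-tools utilities like lszdev give a far richer, device-type-aware view, but it's reassuring that plain old sysfs works the same as everywhere else.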

 

Thanks to PJ Catalano for grabbing me a bunch of photos to share (I just chose two)!

For instance, while going through product photos (as you do) I learned that the card below is a converged adapter, which I asked Kenny Stine about last week. He shared: "For LinuxONE this will allow Linux to use the Networking Express adapters with NETH definitions to connect to a network."

Hopefully I've piqued your interest in this amazing hardware so you'll know what to do with it the next time you encounter one in the vast rows of your favorite data center ("Lyz, people don't have favorite data centers." "They don't? I do. And I definitely have a LEAST favorite.").
