IBM, in collaboration with NVIDIA, is announcing the completion of testing, integration, and client deployment of NVIDIA DGX SuperPOD with IBM Storage. NVIDIA DGX SuperPOD Solution for Enterprise is a turnkey solution built on IBM ESS 3200 storage. The all-NVMe IBM ESS 3200 delivers the performance, scalability, and flexibility of IBM Spectrum Scale, now integrated into the NVIDIA SuperPOD experience. The design delivers leadership-class AI infrastructure for organizations that want to focus on insights rather than platform complexity. NVIDIA DGX SuperPOD with IBM Storage can be deployed in scalable units of 20 to 140 nodes, with systems, networks, software frameworks, white-glove integration, and management through NVIDIA Base Command Manager.
As announced on June 28, IBM and NVIDIA developed the reference architecture solution at scale, running synthetic and real workloads. The hands-on approach to testing, integration, tuning, and optimization delivers tangible benefits when it comes to deploying in the field. Leveraging the experience, expertise, and services of IBM and NVIDIA, an NVIDIA DGX SuperPOD was deployed at a customer site in weeks.
“AI leaders around the world rely on NVIDIA DGX SuperPOD infrastructure to securely develop applications that boost manufacturing efficiency, support new discoveries in science and healthcare, and enhance customer experiences,” said Charlie Boyle, vice president and general manager of NVIDIA DGX systems. “The qualification of IBM Spectrum Scale for NVIDIA DGX SuperPOD and NVIDIA DGX systems provides customers with a storage option that aligns with the high performance required for advanced AI computing.”
IBM Spectrum Scale provides a parallel file system that can be deployed in scalable units, adding performance and capacity as needed for demanding workloads and large data sets. Each ESS 3200 delivers 80 GB/s of read bandwidth and supports NVIDIA Magnum IO GPUDirect Storage (GDS). IBM Spectrum Scale software-defined storage reads and writes in parallel across multiple systems, scaling read and write bandwidth linearly. The ESS 3200 is simple to deploy and expand, delivering the recommended throughput for workloads such as image classification and natural language processing. Advanced caching makes re-reads of data efficient, or data can be read directly into GPU memory with low-latency, high-throughput GDS.
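To illustrate the linear scale-out described above, here is a minimal sketch in Python. It assumes the 80 GB/s per-system read figure from this post and an idealized linear model; real aggregate throughput will also depend on the network fabric, client count, and workload mix, and the function name is purely illustrative.

```python
# Sketch: idealized linear read-bandwidth scaling for ESS 3200 building blocks.
# The 80 GB/s per-system figure comes from the announcement above; actual
# delivered throughput depends on fabric, clients, and workload.

ESS3200_READ_GBPS = 80  # per-system read bandwidth (GB/s)

def aggregate_read_bandwidth(num_systems: int) -> int:
    """Estimated aggregate read bandwidth, assuming linear scale-out."""
    if num_systems < 1:
        raise ValueError("need at least one ESS 3200 building block")
    return num_systems * ESS3200_READ_GBPS

# Example: four building blocks scale to 320 GB/s of aggregate read bandwidth.
print(aggregate_read_bandwidth(4))  # 320
```

The point of the model is the one Spectrum Scale makes in practice: because reads and writes are striped in parallel across systems, capacity and bandwidth grow together as building blocks are added.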
IBM Storage also provides other documented and tested options for NVIDIA-accelerated computing. IBM Spectrum Storage reference architectures for two to eight NVIDIA DGX systems deploy rapidly and can start an organization on a path to faster AI insights. NVIDIA GPUs are also available in IBM Spectrum Fusion HCI. Because all of these are built on the flexibility of IBM Spectrum Scale, a unified global data platform can span edge, core, and cloud.
To learn more about optimizing data and matching storage to GPU-accelerated computing, please join us for a GTC session featuring Craig Tierney from NVIDIA and John Lewars from IBM. This 45-minute presentation covers best practices for data management and storage, from users’ first projects all the way to DGX SuperPOD:
Faster GPU Performance with Better Data Management (Presented by IBM) [A31713]
In addition, please join me and the IBM Spectrum Scale team at SC21, where we will have both in-person and virtual meetings and events.
Leave your comments and questions below.