OpenShift AI version 2.19 now available on IBM Cloud

By Sugandha Agrawal posted yesterday

  

We are excited to announce that the latest stable release of Red Hat OpenShift AI, version 2.19, is now available on IBM Cloud. This version can be installed as either an Add-on or a Deployable Architecture.

Red Hat OpenShift AI on IBM Cloud

Red Hat OpenShift AI is a flexible, scalable artificial intelligence (AI) and machine learning (ML) developer-focused platform that enables creation and delivery of AI-enabled applications at scale. Built using open-source technologies, Red Hat OpenShift AI provides trusted, operationally consistent capabilities for teams to experiment, serve models, and deliver innovative applications.

Red Hat OpenShift AI on IBM Cloud accelerates AI/ML development with pre-built tools and frameworks such as TensorFlow and PyTorch, and also supports bring-your-own (BYO) images, helping both new and experienced AI developers get started on their AI journey.

Red Hat OpenShift AI enables data acquisition and preparation, model training and fine-tuning, model serving and model monitoring, and hardware acceleration. 


Version Details

Component                                      Version
Red Hat OpenShift AI                           2.19
Red Hat OpenShift AI on IBM Cloud (Add-on)     416
Red Hat OpenShift on IBM Cloud                 4.16 and 4.17

This version unlocks new features and enhancements to support Gen AI capabilities for our customers. 

Red Hat OpenShift AI Features

The following features are introduced in 2.19:

  1. GenAI capabilities - the Guardrails Orchestrator framework (TrustyAI service) is now available to add safety and policy checks to LLMs, allowing users to define detectors that filter LLM input and output.
  2. Users can view the list of all installed components and their versions from the Help > About menu in the Red Hat OpenShift AI dashboard.
  3. Distributed PyTorch training across multiple nodes and GPUs using the Kubeflow Training Operator (KFTO) is now supported (a minimal submission sketch follows this list).
  4. NVIDIA GPUDirect RDMA, which uses Remote Direct Memory Access (RDMA) to provide direct GPU interconnect, is now supported for distributed model training with KFTO.
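To make the distributed training feature more concrete, here is a minimal sketch of submitting a two-worker PyTorchJob through the Kubernetes Python client. It is not taken from the release notes: the namespace, container image, and training command are placeholders, and it assumes the Kubeflow Training Operator shipped with OpenShift AI is already serving the kubeflow.org/v1 PyTorchJob custom resource.

```python
# Minimal sketch: submit a distributed PyTorchJob via the Kubernetes API.
# Assumes the Kubeflow Training Operator (part of OpenShift AI) is installed;
# namespace, image, and command below are placeholders.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when running in-cluster

NAMESPACE = "my-data-science-project"  # placeholder project/namespace

pytorch_job = {
    "apiVersion": "kubeflow.org/v1",
    "kind": "PyTorchJob",
    "metadata": {"name": "demo-ddp-job", "namespace": NAMESPACE},
    "spec": {
        "pytorchReplicaSpecs": {
            "Master": {
                "replicas": 1,
                "restartPolicy": "OnFailure",
                "template": {"spec": {"containers": [{
                    "name": "pytorch",  # container must be named "pytorch" for the operator
                    "image": "quay.io/example/train:latest",  # placeholder image
                    "command": ["python", "train.py"],        # placeholder training script
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }]}},
            },
            "Worker": {
                "replicas": 2,
                "restartPolicy": "OnFailure",
                "template": {"spec": {"containers": [{
                    "name": "pytorch",
                    "image": "quay.io/example/train:latest",
                    "command": ["python", "train.py"],
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }]}},
            },
        }
    },
}

# Create the custom resource; the Training Operator then schedules the pods and
# wires up the torch.distributed environment (MASTER_ADDR, WORLD_SIZE, RANK).
client.CustomObjectsApi().create_namespaced_custom_object(
    group="kubeflow.org",
    version="v1",
    namespace=NAMESPACE,
    plural="pytorchjobs",
    body=pytorch_job,
)
```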

The detailed feature list can be found here.

Red Hat OpenShift AI Enhancements

The following enhancements are introduced in 2.19:

  1. Users can deploy models in either advanced or standard deployment mode. Standard deployment mode uses KServe RawDeployment mode and does not require the Red Hat OpenShift Serverless Operator, Red Hat OpenShift Service Mesh, or Authorino.
  2. The OpenVINO Model Server has been upgraded to version 2025.0. For information on the changes and enhancements, see OpenVINO™ Model Server 2025.0.
  3. Data Science Pipelines have been upgraded to Kubeflow Pipelines (KFP) version 2.4.0 (a small compile sketch follows this list). For more information, see the Kubeflow Pipelines documentation.
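As a rough illustration of the KFP 2.x SDK mentioned in enhancement 3 (not taken from the release notes), the sketch below defines a tiny two-step pipeline and compiles it to the YAML package that a Data Science Pipelines server can import. The component logic, pipeline name, and output filename are placeholders; it assumes the kfp 2.x package is installed in your workbench.

```python
# Minimal sketch of a KFP v2 pipeline; component body and output path are placeholders.
from kfp import dsl, compiler


@dsl.component(base_image="python:3.11")
def add(a: float, b: float) -> float:
    """Toy component: add two numbers."""
    return a + b


@dsl.pipeline(name="demo-addition-pipeline")
def demo_pipeline(x: float = 1.0, y: float = 2.0):
    # Chain two component tasks; the second consumes the first task's output.
    first = add(a=x, b=y)
    add(a=first.output, b=y)


if __name__ == "__main__":
    # Compile to the IR YAML package that the pipelines server can import and run.
    compiler.Compiler().compile(demo_pipeline, "demo_pipeline.yaml")
```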

The detailed enhancement list can be found here.

Check out our RHOAI on IBM Cloud install video here, and try out Red Hat OpenShift on IBM Cloud to get started by creating a cluster with the OpenShift AI add-on. The minimum cluster requirements are documented here.
