IBM Z and LinuxONE IBM Z

 View Only

 IBM Cloud Pak running on OpenShift based on z/VM

Mohammed Ibrahem posted Fri August 15, 2025 12:05 PM

Dear all,

Can you help estimate the number of IFLs needed to run the products below under z/VM?

1. OpenShift

2. Red Hat ODF

3. IBM Cloud Pak for Integration

The Red Hat documentation gives a minimum IFL count to run OpenShift and ODF, but there is no information about Cloud Pak.

thanks

Mohammed Ibrahem

Natalie Carrillo

Hi Mohammed,

Your technical sales contact will be able to assist you.

Kurt Acker IBM Champion

The answer really depends on the workloads you plan to run in each environment (the size/amount of data, and how active they are).

Unless, of course, you are just asking about the standalone overhead required to run the base of these products.

  • That might be answered in part by the number of IFLs they give you to run these platforms.
  • Please, of course, pay attention to your TCO with these extra engines...

Now, if you want to see what each container is really doing/using, Velocity Software can measure OpenShift in the most efficient way possible. We use less than 0.1% of a processor no matter how large your environment is, and we do not require extra licenses for operating systems (RHEL, SUSE, Alma, etc.), no SMAPI, no Java, and no additional databases, all while using only a fraction of the memory needed by other solutions in this space. With our Virtual Resource Manager (zVRM), we can even automagically tune virtual CPUs and memory for workloads that will tolerate it. We also offer a full cloud front end to z/VM with our zPRO offering to help reduce its complexities, while providing the Linux teams with the visualization they need to help move everyone forward. Take a test drive by selecting Demo from our main site, or of course feel free to contact us directly at any time:

https://velocitysoftware.com/ 

Abhishek Anand

If the workload is new and we don’t know its size, the best approach is to start with a safe baseline and then scale as we learn. Typically, we begin with:

  • 3 master nodes (small vCPU allocations, just for the control plane)

  • A couple of infra nodes for logging/monitoring

  • A few worker nodes (4–6 CPUs each) to host applications

  • Storage (ODF) can start co-located, and move to 3 dedicated nodes if data or I/O grows

From this, we estimate the total vCPUs, add a little extra (10–15% for z/VM overhead and 20% for growth), and then divide by the overcommit ratio (e.g., 6:1) to get the IFL count.

In practice, this usually means starting with 4–6 IFLs for a pilot, 12–16 IFLs for small production, and 20+ IFLs for larger environments.
Once it’s running, we monitor CPU, memory, and storage usage for a couple of weeks, and then adjust up or down as needed.
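The sizing arithmetic above can be sketched in a few lines of Python. Note this is only an illustration of the rule of thumb described in this thread: the node counts and vCPU figures in the example are hypothetical assumptions, not IBM or Red Hat sizing guidance, and the overhead/growth/overcommit values are the ones suggested above.

```python
import math

def estimate_ifls(total_vcpus, zvm_overhead=0.15, growth=0.20, overcommit=6):
    """Rule-of-thumb IFL estimate: pad the total vCPU count with z/VM
    overhead (10-15%) and growth headroom (20%), then divide by the
    vCPU-to-IFL overcommit ratio (e.g., 6:1) and round up."""
    adjusted = total_vcpus * (1 + zvm_overhead) * (1 + growth)
    return math.ceil(adjusted / overcommit)

# Hypothetical pilot layout: 3 masters x 2 vCPUs, 2 infra x 2 vCPUs,
# 3 workers x 4 vCPUs (co-located ODF) = 22 vCPUs total.
pilot_vcpus = 3 * 2 + 2 * 2 + 3 * 4
print(estimate_ifls(pilot_vcpus))  # -> 6, within the 4-6 IFL pilot range
```

A larger layout fed through the same function (for example, around 60 vCPUs across dedicated worker and ODF nodes) lands in the 12-16 IFL range quoted for small production, so the formula and the rough tiers above are consistent.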