Maximo

  • 1.  Storage options for MAS - 9.0 ODF storage system

    Posted Sun September 14, 2025 12:47 AM

    Hi all,

    We're setting up Maximo Application Suite (MAS) on OpenShift and running into some questions around storage design. Our cluster currently has 3 master nodes and 2 worker nodes.

    From what I understand, OCS/ODF (Ceph-based) generally requires at least 3 worker nodes to support internal storage deployment (for quorum and high availability). Since we only have 2 worker nodes, I'm not sure if it's possible to proceed with OCS in a supported way.

    I'd like to get feedback from the community on:

    • Has anyone successfully deployed OCS/ODF on a 3 master + 2 worker setup?

    • If OCS is not an option, what alternative storage backends would you recommend for MAS in this scenario?

      • NFS provisioner?

      • Other CSI drivers that work well with MAS?

    • Are there any known workarounds (even if not supported for production) for running OCS with 2 workers for testing / dev environments?

    Any real-world experiences, recommendations, or pointers to best practices would be really helpful.

    Thanks in advance!



    ------------------------------
    Hariprasad R
    ------------------------------


  • 2.  RE: Storage options for MAS - 9.0 ODF storage system

    Posted Mon September 15, 2025 07:13 AM

    Yes, you are correct. OCS/ODF is a production-grade storage system; it requires at least 3 additional 'infra' nodes to establish cluster quorum with replication.

    NFS is generally not suitable for production.

    Depending on your underlying platform, there are several other storage options. 



    ------------------------------
    Arif Ali
    ------------------------------



  • 3.  RE: Storage options for MAS - 9.0 ODF storage system

    Posted Tue September 16, 2025 03:57 AM

    Hi,

    You will not be able to install ODF on two nodes without serious modifications to the product configuration, and those modifications would make it completely unsupported.

    If you absolutely need to deploy software-defined storage (SDS) in OpenShift on two (or even one) node, then Rook Ceph is a solid candidate. It is the upstream open-source project that ODF is built on, but it exposes far more configuration options. Support is community-only, though.
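    For illustration, a minimal Rook Ceph `CephCluster` resource for a small lab cluster might look like the sketch below (the image tag and settings are assumptions for a dev/test setup; a single monitor means no HA, so this is explicitly not for production):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: quay.io/ceph/ceph:v18   # assumed Ceph release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 1                       # quorum of one: no HA, lab use only
    allowMultiplePerNode: true
  storage:
    useAllNodes: true
    useAllDevices: true            # consumes any empty raw disks it finds
```

    With settings like these, Rook will happily form a "cluster" on one or two nodes, which is exactly the kind of configuration ODF does not allow in a supported way.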

    As Arif mentioned, NFS is not the best option for production. It is fine for things like Manage attachments, but if you try to run DB2WH and MAS Monitor on NFS, it is going to be quite slow. In addition, NFS is usually a single point of failure (SPOF): if the NFS server (or the network path to it) goes down, your cluster has no storage. For MongoDB running in OCP that means no data, and no data in MongoDB means no MAS user can log in, so MAS is effectively unavailable.



    ------------------------------
    Witold Wierzchowski
    Solution Architect
    Cohesive Poland
    ------------------------------



  • 4.  RE: Storage options for MAS - 9.0 ODF storage system

    Posted Tue September 16, 2025 05:36 PM
    Hello,
     
    Coincidentally, I replied to a separate thread, also started by Hariprasad, asking approximately the same question.
     
    All the comments above from Arif and Witold are on point: the recommendation is to go with 3 dedicated "infra"/storage nodes and to not reuse your worker nodes as storage/ODF nodes.
     
    Repeating my previous post in the other thread below for convenience:
     
    As per https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/infrastructure-requirements_rhodf#minimum-deployment-resource-requirements (note that I'm referencing RH OCP v4.16, assuming that is near your version since you didn't mention one; ODF requirements are similar between recent versions), the recommended minimum is 3 storage nodes, each with a secondary disk to hold the volume that is going to be created.
     
    Also see https://cloud.ibm.com/docs/openshift?topic=openshift-ocs-storage-prep, which states that the replication factor determines the multiples of nodes allowed. It can be set to 2 for a new pool, but only for a non-default RBD pool, and it is certainly NOT recommended. See https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html-single/managing_and_allocating_storage_resources/index#creating-storage-classes-and-pools_rhodf and https://access.redhat.com/articles/6976064. I personally have no experience with this configuration, and all ODF clusters we have created in our company have a minimum of 3 ODF nodes, each with an identically sized secondary disk.
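    As a sketch of what that non-default replica-2 RBD pool looks like (the pool name is hypothetical, and this assumes an ODF/Rook-managed cluster in the `openshift-storage` namespace; again, not recommended for production):

```yaml
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: replica2-pool            # hypothetical pool name
  namespace: openshift-storage
spec:
  failureDomain: host
  replicated:
    size: 2                      # two copies instead of the default three
```

    A dedicated StorageClass would then reference this pool; the default ODF pools and classes keep their 3-way replication either way.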
     
    Local Storage (ideally additional disks) should be fine to create the ODF volumes. Ensure the disk has no partitions on it before allowing ODF to use it for the volume.
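    For example, with the Local Storage Operator the secondary disks can be exposed as block PVs for ODF roughly like this (the device path is an assumption; the disk must be raw, with no partitions or filesystem on it):

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block              # hypothetical name
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block          # ODF consumes raw block devices
      devicePaths:
        - /dev/sdb               # assumed secondary disk; must be empty
```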
     
    Also, check the networking requirements: the recommended network interface speed between the ODF nodes is 10 Gbps, and there are vCPU and RAM requirements per storage node. Also check whether you want to label the nodes as "infra" nodes, as that is better for licensing than labelling them as plain "worker" nodes. In any case, it is recommended to not allow any user workloads to run on the ODF nodes, so they should be effectively dedicated to ODF work only. Wanted to mention all that in case it helps.
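    The labelling and workload isolation described above can be sketched with standard `oc` commands (node names are placeholders for your own storage nodes):

```shell
# Mark the storage node as infra rather than plain worker (better for licensing)
oc label node storage-node-1 node-role.kubernetes.io/infra=""

# ODF label so the operator schedules its components on this node
oc label node storage-node-1 cluster.ocs.openshift.io/openshift-storage=""

# Taint the node so ordinary user workloads stay off it
oc adm taint node storage-node-1 node.ocs.openshift.io/storage="true":NoSchedule
```

    The ODF operator pods tolerate that taint, so the node ends up effectively dedicated to storage work.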
     
    Thanks,


    ------------------------------
    Julio Perera
    Senior Maximo Technical Consultant
    Interloc Solutions Inc., US.
    ------------------------------