Hello,
 
Coincidentally, I replied to a separate thread, also started by Hariprasad, asking roughly the same question.
 
All the comments above from Arif and Witold are on point. The recommendation is to go with three dedicated "infra"/storage nodes, and not to reuse your worker nodes as storage/ODF nodes.
 
Repeating my previous post from the other thread below for convenience:
 
 
 
Local storage (ideally additional, dedicated disks) should be fine for creating the ODF volumes. Ensure each disk is raw, with no partitions on it, before allowing ODF to use it for a volume.
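For reference, the Local Storage Operator typically exposes such raw disks to ODF through a LocalVolume resource along these lines. This is only a sketch: the resource name, storage class name, and the /dev/sdb device path are placeholders you would adjust to your environment.

```yaml
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: odf-local-disks            # placeholder name
  namespace: openshift-local-storage
spec:
  nodeSelector:                    # only nodes labelled for ODF storage
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: Exists
  storageClassDevices:
    - storageClassName: localblock # block-mode class consumed by ODF
      volumeMode: Block
      devicePaths:
        - /dev/sdb                 # raw, unpartitioned disk on each node
```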
 
Also, check the networking requirements: there is a recommended network interface speed of 10 Gbps between the ODF nodes, as well as vCPU and RAM requirements per storage node. Consider labelling the nodes as "infra" nodes, which is better for licensing than labelling them as plain "worker" nodes. In any case, it is recommended not to allow any user workloads on the ODF nodes, so they should be effectively dedicated to ODF work only. Wanted to mention all that just in case it helps.
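As a rough illustration of that labelling, a dedicated storage node usually ends up with metadata like the sketch below (the node name is a placeholder). The cluster.ocs.openshift.io/openshift-storage label is what ODF uses to select storage nodes, the infra role label supports the licensing point, and the taint keeps user workloads off the node while ODF's own pods tolerate it.

```yaml
apiVersion: v1
kind: Node
metadata:
  name: storage-node-1                              # placeholder node name
  labels:
    node-role.kubernetes.io/infra: ""               # infra role for licensing
    cluster.ocs.openshift.io/openshift-storage: ""  # marks the node for ODF
spec:
  taints:
    - key: node.ocs.openshift.io/storage            # repels user workloads
      value: "true"
      effect: NoSchedule
```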
 
Thanks,
------------------------------
Julio Perera
Senior Maximo Technical Consultant
Interloc Solutions Inc., US.
------------------------------
Original Message:
Sent: Tue September 16, 2025 03:56 AM
From: Witold Wierzchowski
Subject:  Storage options for MAS - 9.0 ODF storage system
Hi,
You will not be able to install ODF on two nodes without some serious modifications to the product config, and of course those modifications would leave it completely unsupported.
If you absolutely need to deploy an SDS on OpenShift with two (or even one) nodes, then Rook Ceph is a solid candidate. It is the upstream open-source project that ODF is built on, and it exposes far more configuration options; the only support available, though, is from the community.
As Arif mentioned, NFS is not the best option for production. It is fine for, say, Manage attachments, but if you try to run DB2WH and MAS Monitor on NFS, it is going to be quite slow. Another issue is that NFS is usually a single point of failure (SPOF): if the NFS server (or the network to it) goes down, your cluster has no storage. For MongoDB (running in OCP) that would mean no data, and no data in MongoDB means no MAS user can log in, so MAS is effectively unavailable.
------------------------------
Witold Wierzchowski
Solution Architect
Cohesive Poland
Original Message:
Sent: Sun September 14, 2025 12:46 AM
From: Hariprasad R
Subject: Storage options for MAS - 9.0 ODF storage system
Hi all,
We're setting up Maximo Application Suite (MAS) on OpenShift and running into some questions around storage design. Our cluster currently has 3 master nodes and 2 worker nodes.
From what I understand, OCS/ODF (Ceph-based) generally requires at least 3 worker nodes for an internal storage deployment (for quorum and high availability). Since we only have 2 worker nodes, I'm not sure it's possible to proceed with OCS in a supported way.
I'd like to get feedback from the community on:
- Has anyone successfully deployed OCS/ODF on a 3 master + 2 worker setup? 
- If OCS is not an option, what alternative storage backends would you recommend for MAS in this scenario? 
- Are there any known workarounds (even if not supported for production) for running OCS with 2 workers for testing / dev environments? 
Any real-world experiences, recommendations, or pointers to best practices would be really helpful.
Thanks in advance!
------------------------------
Hariprasad R
------------------------------