Maximo

  • 1.  ODF StorageSystem Creation Stuck at Capacity and Nodes Stage with LocalVolume SC

    Posted Sat September 13, 2025 02:08 AM

    Issue:
    While creating a StorageSystem using ODF in the MAS cluster, the wizard is stuck at "Capacity and nodes" stage for over 1.5 hours. We selected an existing localvolume storage class created from Local Storage Operator.

    Environment:

    3 master nodes, 2 worker nodes

    LocalVolume created and available

    ODF operators installed (odf-operator, rook-ceph-operator, ocs-operator running)

    Request:
    Please confirm:

    Whether ODF can be configured with only 2 worker nodes.

    Whether localvolume SC from Local Storage Operator is supported as backing storage.

    What additional configuration is required to proceed.



    ------------------------------
    Hariprasad R
    ------------------------------


  • 2.  RE: ODF StorageSystem Creation Stuck at Capacity and Nodes Stage with LocalVolume SC

    Posted Tue September 16, 2025 05:27 PM

    Hi Hariprasad:

    As per https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/infrastructure-requirements_rhodf#minimum-deployment-resource-requirements (note I'm using RH OCP v4.16, assuming that is near your version since you didn't mention a specific one; however, ODF requirements are similar across recent versions), the recommended minimum is 3 storage nodes, each with a secondary disk to hold the volume that is going to be created.
    Also see https://cloud.ibm.com/docs/openshift?topic=openshift-ocs-storage-prep, which states that the replication factor determines the multiples of nodes allowed. It can be set to 2 for a new pool, but only for a non-default RBD pool, and it is certainly NOT recommended. See https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.14/html-single/managing_and_allocating_storage_resources/index#creating-storage-classes-and-pools_rhodf and https://access.redhat.com/articles/6976064. I personally have no experience with that configuration; all ODF clusters we have created at our company have a minimum of 3 ODF nodes, each with an identically sized secondary disk.
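    For illustration only, a replica-2 pool along the lines those links describe would be a separate CephBlockPool plus a StorageClass pointing at it. This is a rough sketch, not a recommendation; the pool and class names are my own placeholders, and you should verify the parameters against the Red Hat docs for your ODF version:

    ```yaml
    apiVersion: ceph.rook.io/v1
    kind: CephBlockPool
    metadata:
      name: replica2-pool            # placeholder name
      namespace: openshift-storage
    spec:
      failureDomain: host
      replicated:
        size: 2                      # 2-way replication; only for a non-default pool, NOT recommended
    ---
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: ocs-replica2-rbd         # placeholder name
    provisioner: openshift-storage.rbd.csi.ceph.com
    parameters:
      clusterID: openshift-storage
      pool: replica2-pool            # must match the CephBlockPool name above
      imageFeatures: layering
      csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
      csi.storage.k8s.io/provisioner-secret-namespace: openshift-storage
      csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
      csi.storage.k8s.io/node-stage-secret-namespace: openshift-storage
    reclaimPolicy: Delete
    ```

    Again, we have never run this in production; 3 nodes with the default 3-way replication is the safe path.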
    Local Storage (ideally additional disks) should be fine for creating the ODF volumes. Ensure the disk has no partitions or filesystem signatures on it before allowing ODF to use it for the volume.
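    On each node, a quick way to confirm the candidate disk is raw is something like the following (the device name /dev/sdb is just an example; substitute your actual secondary disk):

    ```shell
    # List the disk with filesystem info; it should show no child partitions
    # and an empty FSTYPE column
    lsblk -f /dev/sdb

    # If there are leftover signatures from a previous use, wipe them
    # (DESTRUCTIVE: triple-check you have the right device first)
    wipefs --all /dev/sdb
    ```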
    Also, check the networking requirements: there is a recommended network interface speed of 10 Gbps between the ODF nodes, as well as vCPU and RAM requirements per storage node. Also consider labelling the nodes as "infra" nodes, which is better for licensing than labelling them as plain "worker" nodes. In any case, it is recommended not to allow any user workloads to run on the ODF nodes, so they are effectively dedicated to ODF work only. Wanted to mention all that just in case it helps.
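    Assuming a node name like worker-1 (a placeholder), the labelling and tainting would look roughly like this; the storage label is what ODF uses to select its nodes, and the infra role plus taint cover the licensing/isolation part, but do verify the exact labels against the ODF docs for your version:

    ```shell
    # Label the node so ODF selects it as a storage node
    oc label node worker-1 cluster.ocs.openshift.io/openshift-storage=""

    # Optionally give it the infra role instead of plain worker (licensing)
    oc label node worker-1 node-role.kubernetes.io/infra=""

    # Keep user workloads off the node; ODF components tolerate this taint
    oc adm taint node worker-1 node.ocs.openshift.io/storage="true":NoSchedule
    ```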

    Thanks



    ------------------------------
    Julio Perera
    Senior Maximo Technical Consultant
    Interloc Solutions Inc., US.
    ------------------------------