As Witold stated, some MAS applications use additional storage and some do not. The required dependencies also need additional storage; it is not requested by the MAS application directly but by the dependency, so the requirement is indirect. Those dependencies are typically MongoDB, Kafka (if needed), the OpenShift image registry, DRO, any Db2 databases and especially CP4D (if used or needed).
Regarding the MAS applications themselves, and based on our demo systems (main applications only; the list is not meant to be exhaustive):
1. MAS Core does not typically need additional shared storage
2. MAS Manage is configurable, but shared storage is typically needed for doclinks, logs, integration files, etc. (depending on the use case)
3. MAS Assist uses CouchDB and Redis, both of which require shared storage
4. MAS IoT and Monitor do not typically need additional shared storage
5. MAS Predict does not typically need additional shared storage
6. MAS Health (typically bundled with the Manage deployment) does not usually need additional shared storage
7. MAS AI Broker requires shared storage for the KM controller, plus MariaDB and MinIO/S3 (if not reusing existing instances).
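Where one of the applications above does need shared storage, it is normally requested through a ReadWriteMany (RWX) PersistentVolumeClaim. A minimal sketch of such a claim, along the lines of what Manage might use for doclinks (the name, namespace, size and storage class are illustrative assumptions, not the actual defaults):

```yaml
# Hypothetical RWX claim such as Manage might use for doclinks.
# metadata, size and storageClassName are illustrative only.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: manage-doclinks
  namespace: mas-inst1-manage
spec:
  accessModes:
    - ReadWriteMany          # shared by pods on different worker nodes
  resources:
    requests:
      storage: 50Gi
  storageClassName: filestore-csi   # any RWX-capable StorageClass
```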
As mentioned above, Cloud Pak for Data is a world unto itself, with specific storage requirements for performance and compatibility, so I am leaving it out of scope in this response. See https://www.ibm.com/docs/en/software-hub/5.2.x?topic=requirements-storage for the requirements of the latest version.
Hope the above helps.
Interloc Solutions Inc., US.
Original Message:
Sent: Fri July 25, 2025 03:15 AM
From: Abhinav Bhuria
Subject: Query regarding MAS installation on GCP cloud
Hi @Witold Wierzchowski,
Again, thanks for taking the time to explain this. I now have a clear idea of ephemeral storage versus persistent storage. My follow-up question would be this, though: is a Maximo MAS installation fully capable of running on the cluster's ephemeral storage, without requiring a PV to be bound? The only time we would use a PV would be for doclinks (unless we use cloud storage), file-based storage for JMS queues, and other components as and when required. But is MAS/Manage on its own fully capable of running entirely on ephemeral storage, with no PV needing to be bound during installation?
Apologies if this seems like a silly or repetitive question - I've already asked quite a few, but I just want to be sure I'm not missing anything.
------------------------------
Abhinav Bhuria
Original Message:
Sent: Fri July 25, 2025 02:55 AM
From: Witold Wierzchowski
Subject: Query regarding MAS installation on GCP cloud
Hi Abhinav,
I think there is some confusion here about how storage is handled by OpenShift :) Note that my explanation below includes some simplifications and assumes that OpenShift is deployed in a public cloud with cloud provider API integration, but the principle holds.
When you deploy a cluster (or add nodes), you assign a disk to each node. That disk is used to install CoreOS (the operating system underlying OpenShift), to store downloaded container images, and for the pods' ephemeral storage. You do not have to allocate disk for that manually; it is all handled by OpenShift.
Persistent storage, on the other hand, does NOT use the same disk as CoreOS. When you deploy an app that requires persistent storage, it creates a PVC (or sometimes you have to create one manually). If you have a StorageClass defined with dynamic provisioning, then, based on the PVC, OpenShift creates a PV and reaches out to the cloud provider API to create another disk. That disk then backs the PVC and provides the persistent storage. This is all done automatically by OpenShift.
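The dynamic provisioning flow described above can be sketched in two manifests. This is an illustrative example only: the StorageClass name, disk type and claim are assumptions, though the `pd.csi.storage.gke.io` provisioner is the standard GCP persistent-disk CSI driver:

```yaml
# Sketch: a StorageClass tells OpenShift which CSI driver to call
# and with what parameters (names and disk type are illustrative).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard-csi
provisioner: pd.csi.storage.gke.io    # GCP persistent-disk CSI driver
parameters:
  type: pd-ssd
volumeBindingMode: WaitForFirstConsumer
---
# When an app (or you) creates this claim, OpenShift provisions a new
# GCP disk via the cloud API and binds it as a PV - no manual PV needed.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: standard-csi
```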
------------------------------
Witold Wierzchowski
Solution Architect
Cohesive Poland
Original Message:
Sent: Thu July 17, 2025 03:10 AM
From: Abhinav Bhuria
Subject: Query regarding MAS installation on GCP cloud
@Witold Wierzchowski Thanks again
Just a follow-up question: do the components MongoDB, DRO (which replaced UDS some time ago), Grafana and Prometheus require persistent storage to be allocated for them (separate from the ephemeral storage allocated to the master and worker nodes)? Also, is there a list of components in MAS that require persistent storage to be allocated (separate from ephemeral storage)?
------------------------------
Abhinav Bhuria
Original Message:
Sent: Thu July 03, 2025 03:14 PM
From: Witold Wierzchowski
Subject: Query regarding MAS installation on GCP cloud
Hi,
MongoDB, DRO (which replaced UDS some time ago), Grafana and Prometheus are fine using RWO storage, even in non-SNO clusters. SLS does not need a storage class at all.
Cheers,
------------------------------
Witold Wierzchowski
Solution Architect
Cohesive Poland
Original Message:
Sent: Thu July 03, 2025 02:48 PM
From: Abhinav Bhuria
Subject: Query regarding MAS installation on GCP cloud
Ivan and Witold thank you so much for your inputs.
@Witold Wierzchowski You mentioned that since my DB will reside outside the cluster and attachments will be handled in GCS, I may not require RWX-capable storage except for the JMS-related filestore or anything else that needs to be shared by pods across worker nodes. However, I was reading the article below:
https://cloud.redhat.com/experts/gcp/mas/
which covers MAS installation on the managed OpenShift service on GCP, where they seem to have provisioned a Filestore storage class and used it for UDS_STORAGE_CLASS, DRO_STORAGE_CLASS and a couple of other internal MAS components. This had me worried, because I was not expecting RWX storage to be a prerequisite for MAS installation. You mentioned that in my case I most probably don't need it, but the article above seems to say otherwise.
Maybe I have drawn the wrong conclusions from it; I would really like to know your thoughts on this.
Thanks again for your time and inputs.
Abhinav
------------------------------
Abhinav Bhuria
Original Message:
Sent: Thu July 03, 2025 02:09 AM
From: Witold Wierzchowski
Subject: Query regarding MAS installation on GCP cloud
Hi Abhinav,
It seems like you are confusing the primary disk (with ephemeral storage and CoreOS) with persistent storage (used by pods). Primary disks are fine using any kind of block storage, so standard-pd should be fine. Just keep in mind that master nodes require very fast disks to operate properly. See: https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/scalability_and_performance/recommended-performance-and-scalability-practices-2#recommended-etcd-practices
As for persistent storage: GCP has a native Filestore driver that you can use for RWX-capable storage. If you use the CSI driver with a StorageClass, a PVC will create a share in GCP (via the Filestore driver) and bind it to the PVC dynamically; there is no need to create those disks upfront. See the table here: https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/storage/understanding-persistent-storage#pv-access-modes_understanding-persistent-storage
In your case, with the DB outside the cluster and attachments in object storage, you might not need RWX-capable storage, but that depends on the actual configuration of the MAS apps. For example, any PVC used by Manage (e.g. for Migration Manager or a JMS backing store) should be RWX (in a non-SNO setup), as it can be mounted by all ServerBundle pods and the maxinst pod, which may run on different nodes.
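An RWX-capable class backed by the Filestore driver might look like the sketch below. The class name, tier and network are illustrative assumptions; `filestore.csi.storage.gke.io` is the standard provisioner name for the GCP Filestore CSI driver:

```yaml
# Sketch: an RWX-capable StorageClass backed by the GCP Filestore
# CSI driver (name, tier and network are illustrative).
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: filestore-csi
provisioner: filestore.csi.storage.gke.io
parameters:
  tier: standard
  network: default
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
```

Any PVC that requests `accessModes: [ReadWriteMany]` with this class would then get a Filestore share provisioned and bound automatically.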
------------------------------
Witold Wierzchowski
Solution Architect
Cohesive Poland
Original Message:
Sent: Wed July 02, 2025 06:33 PM
From: Abhinav Bhuria
Subject: Query regarding MAS installation on GCP cloud
Hi All, we are planning to install a self-managed OCP cluster on GCP. I have a couple of questions:
1) Does Manage support GCP Cloud Storage for doclinks? GCP offers an S3-compatible API; I just wanted to confirm whether someone has configured it for doclinks using the system properties mxe.cosaccesskey, mxe.cossecretkey, etc.
2) Also, while sizing the infrastructure, the calculator recommended 3 master and 3 worker nodes, where each worker is suggested to have 400 GB of storage. We are currently planning to go with standard-pd (an SSD) mounted to each worker node. Checking online, we saw that RWX (ReadWriteMany) storage is required for a MAS installation in a cluster, and standard-pd does not support RWX. Can anyone confirm whether the worker node storage itself needs to be changed to an RWX-compatible type, or whether the nodes need additional RWX storage to be provisioned? Just to add: the database will reside outside the cluster and doclinks will preferably be handled in GCP object storage, so I am not sure which components in the cluster itself require RWX storage.
@Ivan Lagunov would appreciate your inputs here
Looking forward to your insights and recommendations, especially around the doclinks configuration and storage setup. Any guidance based on prior implementations or best practices would be highly appreciated. Thanks in advance
------------------------------
Abhinav Bhuria
------------------------------