Overview
IBM Cloud Pak for Business Automation (CP4BA) is an OpenShift-based platform that enables organizations to digitize and automate business operations, improve productivity, and scale operations efficiently. It integrates core capabilities across content, workflow, decisions, and data capture, all deployed as containerized components managed through OpenShift. Because a single OpenShift cluster often supports multiple organizations and environments—spanning development, testing, and sometimes even production—careful control over the placement of CP4BA components becomes critical. Leveraging the proper placement techniques can help customers optimize resource utilization, manage high availability and workload isolation requirements, and even ensure access to specialized hardware such as GPUs when necessary. This article examines the use of OpenShift placement mechanisms—including node selectors, taints, and tolerations—and demonstrates how they can be applied to control where pods are scheduled within a cluster to meet both placement and isolation requirements for CP4BA components.
For clarity, we will use the deployment scenario illustrated in Figure 1 as the reference throughout this article. In this setup, a single OpenShift cluster is shared by multiple organizations. Two worker nodes are allocated to the Development organization, three worker nodes are dedicated to the Test organization, and one additional node is available for general-purpose workloads. The Development organization must deploy CP4BA across its two assigned nodes, but these nodes must also accommodate other non-CP4BA applications, resulting in a mixed workload environment. In contrast, the Test organization must deploy CP4BA on its three dedicated nodes but requires those nodes to remain isolated, ensuring they host only CP4BA pods and no other applications that could consume resources. In this article we will explore how to achieve both: how to ensure the placement of CP4BA pods on specific worker nodes, as required by the Development organization, and how to ensure both the placement and the isolation of CP4BA deployments from other workloads, as required by the Test organization.
Figure 1: A single OpenShift cluster shared by the Development organization (two worker nodes, mixed workloads), the Test organization (three dedicated, isolated worker nodes), and one general-purpose worker node.
Before diving into the implementation of the scenario described earlier, it is important to review three core OpenShift concepts that enable pod placement and isolation: node selectors, taints, and tolerations.
Node Selectors
Node selectors are the simplest mechanism for controlling pod placement in OpenShift. They allow cluster administrators to constrain the scheduler so that pods run only on specific worker nodes. Using node selectors is a two-step process:
    1.    Apply one or more labels to the target worker nodes.
    2.    Define a nodeSelector on the appropriate OpenShift resource (e.g., pod, deployment) that matches those labels.
During scheduling, the scheduler performs an exact logical AND match against all labels specified in the nodeSelector. A pod is scheduled only if the target node contains every required label. If no nodes match the criteria, the pod remains in the Pending state until an appropriate node becomes available.
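As a quick illustration of these two steps, the following minimal sketch labels a hypothetical node and pins a pod to it; the node name, label, pod name, and image are placeholders, not part of the scenario in this article:
 
 oc label nodes <node-name> disktype=ssd
 
 apiVersion: v1
 kind: Pod
 metadata:
   name: example-pod
 spec:
   nodeSelector:
     disktype: ssd    # pod is scheduled only on nodes carrying this label
   containers:
   - name: app
     image: registry.example.com/app:latest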
Although OpenShift also provides more advanced mechanisms (e.g., node affinity) for managing pod placement, this article will focus on node selectors as the most straightforward option.
Taints
Taints address the complementary problem of pod isolation by repelling pods from certain nodes. A taint is applied at the node level and prevents pods from being scheduled unless they include a matching toleration. A taint consists of a key, value, and effect, which define the conditions under which pods are repelled.
In this article, we will focus on the NoExecute effect, the most restrictive option. It not only blocks new pods from being scheduled on the tainted node but also evicts any existing pods that do not have the necessary toleration. Taints take effect immediately once applied to a node.
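For reference, a taint is applied with oc adm taint, and removed by appending a trailing hyphen to the same specification; the node name below is a placeholder:
 
 oc adm taint nodes <node-name> purpose=cp4ba-only:NoExecute
 oc adm taint nodes <node-name> purpose=cp4ba-only:NoExecute-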
Tolerations
Tolerations are the counterpart to taints. They allow administrators to define exceptions to a taint’s repellent behavior so that specific pods can still be scheduled on tainted nodes. Tolerations are applied to the appropriate OpenShift resource (e.g., pod, deployment).
By themselves, tolerations do not influence pod scheduling. Instead, they simply prevent the scheduler from excluding a pod due to a taint. This distinction is important: unlike node selectors, tolerations do not guarantee that a pod will be placed on a specific node — they only permit scheduling on a tainted node if the scheduler selects it.
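As a sketch, a pod that tolerates the purpose=cp4ba-only taint used later in this article would carry the following spec fragment; the pod name and image are placeholders:
 
 apiVersion: v1
 kind: Pod
 metadata:
   name: example-pod
 spec:
   tolerations:
   - key: "purpose"        # must match the taint's key
     operator: "Equal"
     value: "cp4ba-only"   # must match the taint's value
     effect: "NoExecute"   # must match the taint's effect
   containers:
   - name: app
     image: registry.example.com/app:latest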
Namespace-Scoped Placement and Isolation 
Placement and isolation controls in OpenShift are often discussed in the context of pods or higher-level resources such as Deployments and StatefulSets. In the case of Cloud Pak for Business Automation (CP4BA), however, a different approach is more practical. A CP4BA installation typically includes hundreds of pods and dozens of Deployments, StatefulSets, and Jobs. These resources are created and managed by various operators configured during the installation process.
While some operators expose customization options that allow specific CP4BA components to be configured with attributes like node selectors or tolerations, not all do. As a result, it is not possible to define placement and isolation policies consistently across every pod by configuring individual components.
To address this, we can instead leverage namespace-level configuration. When node selectors and tolerations are defined at the namespace level, they are automatically inherited by all pods created within that namespace without the need to explicitly set these values at each pod definition. This provides a simple and effective mechanism for applying placement and isolation policies to every pod in a CP4BA installation, without requiring per-component customization.
However, this approach has limitations. Namespace-scoped settings apply uniformly to all resources in the namespace, meaning you cannot fine-tune placement and isolation for individual components of CP4BA. If component-level granularity is required, you must use the native placement configuration options provided within CP4BA itself. We will revisit this later in the article.
Using Node Selectors to Deploy CP4BA on Targeted Nodes
As described in the scenario, the Development organization wants to ensure that CP4BA pods are deployed only on their assigned nodes. Specifically, they want all pods from the installation in the cp4ba-dev namespace to run on worker-1-dev and worker-2-dev. They do not require exclusive use of these nodes, meaning they are comfortable with other workloads being scheduled there as well. In this case, node selectors are sufficient to achieve the desired placement.
Steps to configure this:
- Label the target nodes 
 Assign a label to the nodes where the Development organization’s pods should run. In this example, we use the key org with the value dev.
 
 oc label nodes worker-1-dev org=dev
 oc label nodes worker-2-dev org=dev
 
 If your OpenShift nodes are managed by a MachineSet, you can instead add the label at the MachineSet level, which ensures that any new nodes created by that MachineSet inherit the label:
 
 oc patch MachineSet <dev-machine-set-name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/metadata/labels","value":{"org":"dev"}}]' -n openshift-machine-api
 
 Once modified, the MachineSet applies the label to any new nodes it provisions; existing nodes are not relabeled retroactively, which is why the nodes above were also labeled directly.
 
 Note: You can add multiple labels if needed. For simplicity, we use a single label (org=dev) to identify Development organization nodes.
 
 Verify that the label has been applied correctly:
 oc get nodes -l org=dev
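 If the labels were applied correctly, only the two Development nodes are listed, with output similar to the following (ages and versions are illustrative):
 
 NAME           STATUS   ROLES    AGE   VERSION
 worker-1-dev   Ready    worker   45d   v1.27.6
 worker-2-dev   Ready    worker   45d   v1.27.6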
 
 
- Annotate the namespace with the node selector
 Add a node selector to the cp4ba-dev namespace so that all pods in the namespace are scheduled on nodes with the org=dev label.
 
 oc patch namespace cp4ba-dev -p '{"metadata":{"annotations":{"openshift.io/node-selector":"org=dev"}}}' --type=merge
 
 For individual Deployments, Pods, or other resources, node selectors are specified using the nodeSelector field. At the namespace level, OpenShift uses the openshift.io/node-selector annotation instead.
 
 If you want to specify multiple key-value pairs, use a comma-separated list. For example:
 "org=dev,region=us-east"
After labeling the nodes and annotating the namespace, newly created pods in the cp4ba-dev namespace will be scheduled only on nodes that meet the node selector requirements.
Important: Existing pods are not automatically moved. To apply the new placement rules, you must either delete the running pods or scale down and then scale up the Deployments so that pods are rescheduled onto the correct nodes.
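For example, assuming every workload in the namespace is controller-managed and will be recreated automatically, one straightforward (if disruptive) way to force rescheduling is to delete all pods in the namespace:
 
 oc delete pods --all -n cp4ba-dev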
Using Taints and Tolerations to Deploy CP4BA on Isolated Nodes
Continuing with our scenario, the Test organization wants to ensure that CP4BA pods are deployed only to the nodes assigned to them: worker-4-test, worker-5-test, and worker-6-test. They also want to prevent workloads from other deployments (such as those in the other-apps-ns namespace) from being scheduled on these nodes.
To achieve this, we need to combine three mechanisms:
    •    Node selectors, to ensure pods are scheduled only on the intended nodes.
    •    Taints, to repel unwanted pods from these nodes.
    •    Tolerations, to allow exceptions for CP4BA pods under the cp4ba-test namespace.
It’s important to note that taints and tolerations control admission (repelling or permitting pods), while node selectors control placement (directing pods to the right nodes). Both are required to achieve proper isolation.
Steps to configure this:
- Label the target nodes
 Assign a label to the nodes where the Test organization’s pods should run. In this example, we use the key org with the value test.
 oc label nodes worker-4-test org=test
 oc label nodes worker-5-test org=test
 oc label nodes worker-6-test org=test
 
 
- Annotate the namespace with the node selector
 Add a node selector to the cp4ba-test namespace so that all pods in the namespace are scheduled on nodes with the org=test label.
 
 oc patch namespace cp4ba-test -p '{"metadata":{"annotations":{"openshift.io/node-selector":"org=test"}}}' --type=merge
 
 
- Annotate the namespace with tolerations
 Add a toleration to the cp4ba-test namespace so that pods in this namespace can run on tainted nodes reserved for CP4BA. In this example, we use purpose=cp4ba-only with the NoExecute effect.
 
 oc patch namespace cp4ba-test -p '{"metadata":{"annotations":{"scheduler.alpha.kubernetes.io/defaultTolerations":"[{\"operator\":\"Equal\",\"key\":\"purpose\",\"value\":\"cp4ba-only\",\"effect\":\"NoExecute\"}]"}}}'
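 Before adding the taints, you can confirm that both the node-selector and toleration annotations are present on the namespace:
 
 oc get namespace cp4ba-test -o yaml
 
 Both annotations should appear under metadata.annotations.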
 
 
- Add taints to the target nodes
 Apply a taint to the Test organization’s nodes to prevent scheduling of pods unless they have the matching toleration.
 
 oc adm taint nodes worker-4-test purpose=cp4ba-only:NoExecute
 oc adm taint nodes worker-5-test purpose=cp4ba-only:NoExecute
 oc adm taint nodes worker-6-test purpose=cp4ba-only:NoExecute
 
 If your OpenShift nodes are managed by a MachineSet, you can instead add the taint at the MachineSet level. This ensures that any new nodes created by that MachineSet inherit the taint automatically:
 
 oc patch machineset <machineset-name> --type='json' -p='[{"op":"add","path":"/spec/template/spec/taints","value":[{"effect":"NoExecute","key":"purpose","value":"cp4ba-only"}]}]' -n openshift-machine-api
 
 Note: The taint applied to the nodes must exactly match the toleration configured in the namespace, including the effect.
Important: Once the NoExecute taint is applied to a node, all pods without the matching toleration will be evicted. This is the strictest taint effect, as it both prevents new pods from being scheduled and removes existing non-tolerating pods.
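To spot-check the isolation, you can list every pod running on one of the tainted nodes. Once eviction completes, only pods from cp4ba-test should remain, along with any system pods whose tolerations match the taint (for example, several OpenShift infrastructure daemon sets tolerate all taints):
 
 oc get pods --all-namespaces --field-selector spec.nodeName=worker-4-test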
CP4BA Support for Placement and Isolation
CP4BA does not currently support per-component placement and isolation settings across all components. However, some components do support tolerations, affinity, and node selectors through the custom resource (CR) used for CP4BA installations. This provides the ability to control pod scheduling with more granularity. The following table lists the components that support at least one of these settings, along with links to their configuration details.
Conclusion
It is possible to manage the placement and isolation of CP4BA installations by leveraging namespace-scoped node selectors and tolerations in OpenShift. This approach provides a straightforward and effective way to enforce scheduling requirements without directly modifying operator-managed resources such as pods, and it applies to the installation as a whole. For more granular control, administrators can take advantage of component-specific configuration options exposed through the CP4BA custom resource, where supported.
When implementing these mechanisms, consistency is key. Labels, taints, and tolerations should be planned carefully across namespaces, nodes, and MachineSets to avoid situations where pods become unschedulable or are unintentionally evicted. Administrators should also select taint effects (NoSchedule, PreferNoSchedule, or NoExecute) that best align with the desired isolation policy, balancing strict workload separation with cluster availability.
Finally, if nodes are provisioned through MachineSets, applying labels and taints at the MachineSet level ensures that newly created nodes automatically inherit the intended configuration. This practice reduces administrative overhead and helps maintain isolation and placement policies over time.
By combining namespace-level scheduling controls with component-specific settings where available, organizations can achieve both coarse-grained and fine-grained workload placement for CP4BA, improving resource utilization, workload predictability, and operational resilience.
References
OpenShift Container Platform: Controlling Pod Placement Onto Nodes