The IBM Platform Common Services (referred to in this article simply as the common services) are used within the IBM Cloud Paks to provide consistency across them. This post discusses how to manage the common services on OpenShift clusters where you want to control which nodes they run on, focusing on the IBM Cloud Pak for Integration (CP4I).
This post applies to any CP4I installation that was installed using the `inception` style installer. You can tell if this method was used by checking whether the common services pods (e.g. `auth-idp`) are running in the `kube-system` namespace – if they are, they were installed using the `inception` style installer.
What are the IBM Platform Common Services?
The common services were originally part of the IBM Cloud Private Kubernetes distribution, where they made up much of the value of using IBM Cloud Private over a basic Kubernetes cluster. They provide enterprise-quality, cluster-wide services that have been hardened for security.
The services used by IBM Cloud Pak for Integration include:
- Single Sign-On
- Identity and Access Management (IAM)
- Helm
- Management UI for Helm
- License advisor for tracking product usage
- An NGINX ingress controller for applications
- Cross-cluster logging service (based on the ELK stack)
- Cross-cluster monitoring service (based on Prometheus and Grafana)
How do I run the common services on specific nodes?
Under IBM Cloud Private, the common services run on dedicated nodes, and the same principles are used to run the common services on OpenShift. When installing the Cloud Pak, you can control which nodes are used for the services in the `config.yaml` file:

```yaml
cluster_nodes:
  master:
    - <your-openshift-node-to-deploy-master-components>
  proxy:
    - <your-openshift-node-to-deploy-proxy-components>
  management:
    - <your-openshift-node-to-deploy-management-components>
```
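For example, a highly available layout simply lists more than one node per set. This is a sketch only – the node names below are placeholders, and in practice you would use the names reported by `oc get nodes`:

```yaml
# Hypothetical HA layout: multiple nodes per set.
# All node names here are placeholders, not real cluster nodes.
cluster_nodes:
  master:
    - worker-node-1.example.com
    - worker-node-2.example.com
    - worker-node-3.example.com
  proxy:
    - worker-node-4.example.com
    - worker-node-5.example.com
  management:
    - worker-node-6.example.com
    - worker-node-7.example.com
```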
The services in each set are:
- Master: Single sign-on, IAM, Helm, Helm UI, License Advisor
- Proxy: NGINX ingress controller
- Management: Logging, Monitoring
The common services also run an instance of MongoDB as part of the master set, which is used for data persistence within many of the common services.
You can specify more than one node for each of the common services sets. By specifying more than one node for a set, the services in that set will run in an HA configuration:
- Some services will run a replica on each active node in the set using a daemonset (IAM, Helm UI, Nginx ingress controller)
- Some services will run one replica per node in the set using a deployment (MongoDB, Elasticsearch, Logstash, Platform API)
- Other services will run one replica and failover to an alternate node in the set if the scheduled node fails
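The node pinning described above uses standard Kubernetes node affinity. As a rough illustration only – the label key and value shown here are assumptions for the sketch, not necessarily the labels the installer actually applies – a pod pinned to the master set might carry an affinity like this:

```yaml
# Illustrative sketch: required node affinity pinning a pod to
# nodes carrying a "master=true" label. The label key/value are
# assumptions, not the installer's actual labels.
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: master
          operator: In
          values:
          - "true"
```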
Note: In CP4I 2019.4.1, a few of the common services are missing their node affinities. These will be corrected in a future release. Until then, there is a script on gist to add the node affinities that are missing. You can run this on any common services version installed using the `inception` style installer, and running it more than once is fine too.
How do I make common services nodes dedicated to only common services workload?
The common services all run with a toleration for the `NoSchedule` effect of any taint with the key `dedicated`, regardless of its value, making it simple to stop other workloads from running on a node dedicated to the common services.
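In pod spec terms, a toleration that matches the `dedicated` taint whatever its value uses the standard Kubernetes `operator: Exists` form (the exact manifests shipped with the services may differ in detail):

```yaml
# Tolerates the NoSchedule effect of any taint with key "dedicated",
# regardless of the taint's value, via operator: Exists.
tolerations:
- key: dedicated
  operator: Exists
  effect: NoSchedule
```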
You can apply the taint to the node with `oc` or `kubectl`:

```shell
kubectl taint nodes <node> dedicated=infra:NoSchedule
oc adm taint nodes <node> dedicated=infra:NoSchedule
```
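Either command produces the same taint on the node object. If you inspect the node afterwards (for example with `oc get node <node> -o yaml`), the relevant part of the spec should look like this:

```yaml
# The node object's spec after tainting with dedicated=infra:NoSchedule
spec:
  taints:
  - key: dedicated
    value: infra
    effect: NoSchedule
```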
Note: In CP4I 2019.4.1, a few of the common services are missing their tolerations. These will be corrected in a future release. Until then, there is a script on gist to add the tolerations that are missing. You can run this on any common services version installed using the `inception` style installer, and running it more than once is fine too.