
High Availability across Availability Zones for IBM API Connect Gateways

By SZYMON STUPKIEWICZ posted Fri August 05, 2022 09:58 AM

  

When deploying containerized IBM API Connect Gateways in a cloud like AWS, Azure or GCP, one would like to make sure that the Kubernetes scheduler tries to ensure resiliency for these containers across multiple Availability Zones. The IBM API Connect Operator does not ensure that by default, but it can be instructed to do so.

In this blog post I will explain how to configure IBM API Connect Custom Resources to ensure that IBM API Connect Gateway Pods are spread across all possible Availability Zones.

Availability Zones, Node Affinities and Anti-affinities

Availability Zones (AZs) are logically or physically segregated locations where compute resources are placed. They are engineered to be isolated from failures in other AZs and provide low-latency connectivity between them. They often translate to separate data centers. Hosting applications across multiple AZs enables resiliency against failures local to a single AZ. High Availability (HA) mechanisms in Kubernetes rely on the concept of quorum, so it is desirable to have 3 AZs to host highly resilient applications, and to ensure that Kubernetes or Red Hat OpenShift clusters deployed in a cloud span all possible AZs.

In Kubernetes and Red Hat OpenShift, node affinity and node anti-affinity allow you to influence which nodes your Pods are scheduled on. Affinities and anti-affinities can be either soft or hard.

Hard node affinity (requiredDuringSchedulingIgnoredDuringExecution) prioritizes HA rules over capacity and uptime. In real-life deployments, soft node affinity (preferredDuringSchedulingIgnoredDuringExecution) is usually more desirable.

Soft node affinity prioritizes capacity and uptime over HA rules. To better capture the difference between hard and soft affinities, let's consider the following situation:

Cluster to consider for soft/hard affinities explanation

In the above scenario, when Node 1 fails and the application Pods use hard node affinity, the Kubernetes scheduler will NOT schedule the failed Pods on Node 2 or Node 3, as this would break the HA rules. This effectively prevents running more than one replica of the same application on one node.

However, when using soft node affinity, provided that there are sufficient resources to run the failed Pods on other nodes, the Kubernetes scheduler will do so. The downside of soft affinities is that the placement is not guaranteed: Kubernetes takes other aspects, like available CPU and memory, into consideration when scheduling Pods.
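
To make the difference concrete, here is a generic sketch (not API Connect-specific) showing the two variants side by side; the app: my-app label and the hostname topology key are placeholders:

# Generic sketch: hard vs. soft pod anti-affinity (placeholder labels).
affinity:
  podAntiAffinity:
    # Hard rule: the Pod stays Pending if the rule cannot be honoured.
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app
      topologyKey: kubernetes.io/hostname
    # Soft rule: the scheduler tries to honour it, but may still co-locate Pods.
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            app: my-app
        topologyKey: kubernetes.io/hostname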

To learn more about Node Affinities and Anti-Affinities, please refer to Kubernetes Documentation.

The problem

When deploying IBM API Connect Gateways, the IBM API Connect Operator by default applies soft anti-affinity at the node level, which would most likely result in the following scenario when using as many nodes as AZs:

Simple deployment of IBM API Connect Gateway Service
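
For reference, the node-level soft anti-affinity that the operator applies by default is conceptually similar to the following sketch (illustrative only; the exact labels and weight generated by the operator may differ). Note that the topology key refers to the node hostname rather than the zone:

# Illustrative sketch of a node-level soft anti-affinity (not taken verbatim from the operator).
affinity:
  podAntiAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      podAffinityTerm:
        labelSelector:
          matchLabels:
            crd.apiconnect.ibm.com/instance: gwv6      # labels as used in the sample later in this post
            crd.apiconnect.ibm.com/kind: datapower
        topologyKey: kubernetes.io/hostname
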
However, Kubernetes and Red Hat OpenShift clusters are often configured with more nodes than AZs and host many other Pods besides the ones for IBM API Connect. In this scenario, using the default settings can result in the following:

Undesired Gateway Pod Placement

This obviously introduces a risk to the availability of the IBM API Connect Gateway Pods. During an outage of AZ A, the gateway Pod in AZ B will not have quorum and will therefore run in read-only mode. This means that it will not perform OAuth token management, API rate limiting will not be enforced, and API publication and other configuration changes won't be possible.

To learn more about HA of DataPower gateways, please refer to the IBM API Connect v10 Deployment Whitepaper.

The Recipe

To avoid the behavior described above, and to direct the Kubernetes scheduler to try to schedule Pods across AZs, container overrides need to be applied in the specification of the IBM API Connect Gateway Custom Resource.

First, though, you need to label your nodes with a topology key that the scheduler will use to distribute the Pods. For managed Kubernetes offerings these labels are usually already in place, like topology.kubernetes.io/zone, which is used in the sample below.
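
For illustration, a labeled worker node carries metadata similar to the following excerpt (the node name and zone value are hypothetical; managed offerings typically set this label automatically):

# Illustrative Node excerpt; name and zone value are placeholders.
apiVersion: v1
kind: Node
metadata:
  name: worker-az-a-1
  labels:
    topology.kubernetes.io/zone: eu-west-1a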

Then you can set up container overrides by using the template field in the spec section of your GatewayCluster manifest, in the following fashion:

apiVersion: gateway.apiconnect.ibm.com/v1beta1
kind: GatewayCluster
metadata:
  labels:
    app.kubernetes.io/instance: gateway
    app.kubernetes.io/managed-by: ibm-apiconnect
    app.kubernetes.io/name: gwv6
  name: gwv6
  namespace: gateway-prod
spec:
  ...
  template:
  - name: datapower
    containers:
    - name: datapower
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  crd.apiconnect.ibm.com/instance: gwv6
                  crd.apiconnect.ibm.com/kind: datapower
              topologyKey: topology.kubernetes.io/zone
            weight: 100
  ...

The IBM API Connect and DataPower operators will pass this section to the StatefulSet definition for DataPower. In most cases this results in the desired situation where IBM API Connect Gateway Pods are scheduled across all possible AZs, as depicted below:

Desired situation
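
If you want to confirm that the override has been propagated, inspect the generated DataPower StatefulSet: its Pod template should contain an equivalent affinity stanza under the standard spec.template.spec.affinity path, conceptually like this trimmed excerpt (the StatefulSet name shown is illustrative and may differ in your deployment):

# Trimmed, illustrative excerpt of the generated StatefulSet.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: gwv6-datapower          # illustrative name
  namespace: gateway-prod
spec:
  template:
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  crd.apiconnect.ibm.com/instance: gwv6
                  crd.apiconnect.ibm.com/kind: datapower
              topologyKey: topology.kubernetes.io/zone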


The above example is applicable to a standalone IBM API Connect installation. When IBM API Connect is installed as part of IBM Cloud Pak for Integration, the names of the microservices are different. Please consult the IBM API Connect documentation for more details.

Other IBM API Connect subsystems

The technique described above can also be applied to the other IBM API Connect subsystems: Management, Analytics and Portal. Here is the list of microservices that accept container overrides in the subsystem Custom Resource specification (a sketch for the Portal subsystem follows the list):
Analytics subsystem:
  • ingestion
  • storage
  • director
  • mtls-gw
Management subsystem:
  • apim
  • taskmanager
  • ldap
  • lur
  • ui
  • client-downloads-server
  • analytics-proxy
  • portal-proxy
Portal subsystem:
  • db
  • www
  • nginx
When looking at the list above, please notice that not all microservices in the Management subsystem accept overrides.
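
As an illustration, a sketch of the same override applied to the Portal subsystem's www microservice could look like the following. The resource name, namespace, container name and label selector are placeholders that you should verify against your own deployment; the affinity block simply mirrors the gateway sample above:

apiVersion: portal.apiconnect.ibm.com/v1beta1
kind: PortalCluster
metadata:
  name: portal                     # placeholder name
  namespace: portal-prod           # placeholder namespace
spec:
  ...
  template:
  - name: www
    containers:
    - name: www                    # verify the actual container name in your deployment
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - podAffinityTerm:
              labelSelector:
                matchLabels:
                  app.kubernetes.io/instance: portal   # placeholder selector; match the labels on your Portal Pods
              topologyKey: topology.kubernetes.io/zone
            weight: 100
  ...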

My personal recommendation for deploying IBM API Connect in a cloud is to apply affinity overrides to IBM API Connect Gateway pods and only consider them for Analytics and Portal depending on your HA requirements for these subsystems.

Please note that it is recommended to remove all overrides when upgrading. Please see the IBM API Connect Upgrade Documentation, point 1d.






