Configuring topology spread constraints for App Connect Dashboard and integration runtime pods

By Shanna Xu

Introduction

With the release of the IBM® App Connect Operator version 12.14.0, you can now configure topology spread constraints for your App Connect Dashboard and integration runtime pods.  As a result, you gain better control over how those pods are distributed when you run multiple instances of an application across different topology domains (such as regions, zones, and nodes).
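
Topology spread constraints are a standard Kubernetes scheduling feature.  As a minimal plain-Kubernetes illustration (not App Connect specific; the pod name and label here are hypothetical), a constraint that spreads pods evenly across zones looks like this:

    apiVersion: v1
    kind: Pod
    metadata:
      name: spread-example          # hypothetical pod name
      labels:
        app: spread-example
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                 # allowed difference in pod count between domains
        topologyKey: topology.kubernetes.io/zone   # spread across availability zones
        whenUnsatisfiable: DoNotSchedule           # keep the pod Pending rather than violate the constraint
        labelSelector:
          matchLabels:
            app: spread-example                    # count only pods carrying this label
      containers:
      - name: app
        image: registry.k8s.io/pause:3.9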

This new feature is available on both Red Hat® OpenShift® Container Platform (OCP) and Kubernetes.

This article is a tutorial on configuring topology spread constraints for App Connect Dashboard and integration runtime pods.  It covers two scenarios: scenario 1 creates an App Connect Dashboard with topology spread constraints on IBM Cloud Kubernetes Service (IKS), whilst scenario 2 configures an App Connect integration runtime on IKS.

Prerequisites

  • Install the IBM® App Connect Operator version 12.14.0 or later
  • Use App Connect Dashboard and integration runtime versions 13.0.4.1-r1 or later
  • Kubernetes version 1.25, 1.27, 1.28 or 1.29
  • Configure a test cluster on IKS (alternatively, you can implement the scenarios on OCP or another Kubernetes platform)
  • Install the kubectl command-line tool
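
Before you start, a quick pre-flight check can confirm that the cluster meets these prerequisites.  The following commands are a sketch, assuming that your kubeconfig already points at the IKS test cluster:

    # Check the client and server Kubernetes versions
    kubectl version

    # List the worker nodes; you need more than one node for spreading to be meaningful
    kubectl get nodes -o wide

    # Confirm that the nodes carry the kubernetes.io/hostname label,
    # which is used as the topologyKey later in this article
    kubectl get nodes --show-labels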

Article index

  • Scenario 1: Create App Connect Dashboard with topology spread constraints on IKS
  • Scenario 2: Create App Connect integration runtime with topology spread constraints on IKS
  • Conclusion

Note: In this article, resource names are highlighted in dark red.  Keywords that are displayed on a UI are highlighted in bold.  The keywords project and namespace are used interchangeably.

Scenario 1: Create App Connect Dashboard with topology spread constraints on IKS

  1.  Copy the following YAML template into a file called dashboard_topology.yaml.
    apiVersion: appconnect.ibm.com/v1beta1
    kind: Dashboard
    metadata:
      name: example-topology-dashboard
      labels:
        backup.appconnect.ibm.com/component: dashboard
      namespace: ace
    spec:
      api:
        enabled: true
      authentication:
        integrationKeycloak:
          enabled: false
      authorization:
        integrationKeycloak:
          enabled: false
      displayMode: IntegrationRuntimes
      ingress:
        enabled: true
      license:
        accept: true
        license: L-KPRV-AUG9NC
        use: AppConnectEnterpriseProduction
      pod:
        containers:
          content-server:
            resources:
              limits:
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 50Mi
          control-ui:
            resources:
              limits:
                memory: 512Mi
              requests:
                cpu: 50m
                memory: 125Mi
        topologySpreadConstraint:
        - maxSkew: 1
          topologyKey: kubernetes.io/hostname
          whenUnsatisfiable: ScheduleAnyway
      replicas: 10
      storage:
        class: ibmc-file-gold-gid
        size: 5Gi
        type: persistent-claim
      version: '13.0.4.1-r1'
    1. Notice that spec.replicas is set to 10, which means that ten pods are created for this Dashboard instance.
    2. Notice that spec.pod.topologySpreadConstraint contains a single element with three fields.  These fields are required.  For a full list of topologySpreadConstraint fields, see the Kubernetes documentation.
      • maxSkew describes the degree to which pods may be unevenly distributed.  In this example, spec.replicas is set to 10 and the cluster has 3 nodes, so the 10 pods cannot be spread evenly across the 3 nodes.  Because maxSkew is set to 1, which is the default value, the scheduler puts 4 pods on one node and 3 on each of the remaining nodes.
      • topologyKey specifies the key of the node labels that define the topology domains.
      • whenUnsatisfiable indicates how to handle a pod that doesn't satisfy the spread constraint.  In this case (ScheduleAnyway), the scheduler still schedules the pod even when maxSkew cannot be satisfied.
    3. Notice that spec.version is set to 13.0.4.1-r1, which reconciles the Dashboard instance to the latest operand version for App Connect Operator 12.14.0.
  2. Follow the documentation on Dashboard storage to set storage.class.
  3. Now, create the Dashboard resource with the following command:
    kubectl apply -f dashboard_topology.yaml -n ace
  4. Wait for the Dashboard resource to become ready, and then verify that ten pods are running and that they are distributed according to the topology spread constraints.  You can carry out the following actions (a one-line pods-per-node count is also sketched after this scenario):
    1. Check the status of the Dashboard resource with the following command:
      kubectl get dashboard -n ace
      The output provides details about the current status of the Dashboard resource.  When the instance is ready, you can see that STATUS is set to Ready.  Here is an example output:
      NAME                         RESOLVEDVERSION   REPLICAS   CUSTOMIMAGES   STATUS   UI URL                                                  API URL                                                 KEYCLOAK URL   AGE
      example-topology-dashboard   13.0.4.1-r1       10         false          Ready    https://example-topology-dashboard-ui-ace.example.com   https://example-topology-dashboard-api-ace.example.com                 96m
    2. Verify that there are ten pods running across three nodes in the cluster with the following command:
      kubectl get pod -o wide
      Here is an example output:
      NAME                                              READY   STATUS    RESTARTS   AGE   IP        NODE      NOMINATED NODE   READINESS GATES
      example-topology-dashboard-dash-864665f45-4rmqm   2/2     Running   0          1m    x.x.x.x   x.x.x.1   <none>           <none>
      example-topology-dashboard-dash-864665f45-7k4zn   2/2     Running   0          1m    x.x.x.x   x.x.x.2   <none>           <none>
      example-topology-dashboard-dash-864665f45-9kzkj   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
      example-topology-dashboard-dash-864665f45-c5dm8   2/2     Running   0          1m    x.x.x.x   x.x.x.1   <none>           <none>
      example-topology-dashboard-dash-864665f45-jj9qf   2/2     Running   0          1m    x.x.x.x   x.x.x.2   <none>           <none>
      example-topology-dashboard-dash-864665f45-jtcml   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
      example-topology-dashboard-dash-864665f45-k5kkl   2/2     Running   0          1m    x.x.x.x   x.x.x.2   <none>           <none>
      example-topology-dashboard-dash-864665f45-l5fct   2/2     Running   0          1m    x.x.x.x   x.x.x.1   <none>           <none>
      example-topology-dashboard-dash-864665f45-ldzq9   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
      example-topology-dashboard-dash-864665f45-nszh5   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
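
Rather than scanning the NODE column by eye, you can count the pods per node with a short pipeline.  This is a sketch that assumes the default column layout of kubectl get pod -o wide, in which NODE is the seventh column:

    # Count the Dashboard pods scheduled on each node
    kubectl get pod -n ace -o wide --no-headers | grep example-topology-dashboard | awk '{print $7}' | sort | uniq -c

With spec.replicas set to 10, maxSkew set to 1, and three nodes, you should see counts of 4, 3 and 3.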

Scenario 2: Create App Connect integration runtime with topology spread constraints on IKS

  1.  Copy the following YAML template into a file called ir_topology.yaml.
    apiVersion: appconnect.ibm.com/v1beta1
    kind: IntegrationRuntime
    metadata:
      name: example-topology-runtime
      labels:
        backup.appconnect.ibm.com/component: integrationruntime
      namespace: ace
    spec:
      defaultNetworkPolicy:
        enabled: true
      ingress:
        enabled: true
      license:
        accept: true
        license: L-KPRV-AUG9NC
        use: AppConnectEnterpriseProduction
      logFormat: basic
      metrics:
        disabled: true
      replicas: 10
      template:
        spec:
          containers:
          - name: runtime
            resources:
              requests:
                cpu: 300m
                memory: 368Mi
          topologySpreadConstraint:
          - maxSkew: 1
            topologyKey: kubernetes.io/hostname
            whenUnsatisfiable: ScheduleAnyway
      version: '13.0.4.1-r1'
    1. Notice that spec.replicas is set to 10, which means that ten pods are created for this integration runtime instance.
    2. Notice that spec.template.spec.topologySpreadConstraint contains a single element with three fields.  These fields are required.  For a full list of topologySpreadConstraint fields, see the Kubernetes documentation.
      • maxSkew describes the degree to which pods may be unevenly distributed.  In this example, spec.replicas is set to 10 and the cluster has 3 nodes, so the 10 pods cannot be spread evenly across the 3 nodes.  Because maxSkew is set to 1, which is the default value, the scheduler puts 4 pods on one node and 3 on each of the remaining nodes.
      • topologyKey specifies the key of the node labels that define the topology domains.
      • whenUnsatisfiable indicates how to handle a pod that doesn't satisfy the spread constraint.  In this case (ScheduleAnyway), the scheduler still schedules the pod even when maxSkew cannot be satisfied.  (A stricter DoNotSchedule variant is sketched after this scenario.)
    3. Notice that spec.version is set to 13.0.4.1-r1, which reconciles the integration runtime instance to the latest operand version for App Connect Operator 12.14.0.
  2. Now, create the integration runtime resource with the following command:
    kubectl apply -f ir_topology.yaml -n ace
  3. Wait for the integration runtime resource to become ready, and then verify that ten pods are running and that they are distributed according to the topology spread constraints.  You can carry out the following actions:
    1. Check the status of the integration runtime resource with the following command:
      kubectl get ir -n ace
      The output provides details about the current status of the integration runtime resource.  When the instance is ready, you can see that STATUS is set to Ready.  Here is an example output:
      NAME                       RESOLVEDVERSION   STATUS   REPLICAS   AVAILABLEREPLICAS   URL                                                    AGE     CUSTOMIMAGES
      example-topology-runtime   13.0.4.1-r1       Ready    10         10                  http://example-topology-runtime-http-ace.example.com   1m      false
    2. Verify that there are ten pods running across three nodes in the cluster with the following command:
      kubectl get pod -o wide
      Here is an example output:
      NAME                                          READY   STATUS    RESTARTS   AGE   IP        NODE      NOMINATED NODE   READINESS GATES
      example-topology-runtime-ir-864665f45-4rmqm   2/2     Running   0          1m    x.x.x.x   x.x.x.1   <none>           <none>
      example-topology-runtime-ir-864665f45-7k4zn   2/2     Running   0          1m    x.x.x.x   x.x.x.2   <none>           <none>
      example-topology-runtime-ir-864665f45-9kzkj   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
      example-topology-runtime-ir-864665f45-c5dm8   2/2     Running   0          1m    x.x.x.x   x.x.x.1   <none>           <none>
      example-topology-runtime-ir-864665f45-jj9qf   2/2     Running   0          1m    x.x.x.x   x.x.x.2   <none>           <none>
      example-topology-runtime-ir-864665f45-jtcml   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
      example-topology-runtime-ir-864665f45-k5kkl   2/2     Running   0          1m    x.x.x.x   x.x.x.2   <none>           <none>
      example-topology-runtime-ir-864665f45-l5fct   2/2     Running   0          1m    x.x.x.x   x.x.x.1   <none>           <none>
      example-topology-runtime-ir-864665f45-ldzq9   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
      example-topology-runtime-ir-864665f45-nszh5   2/2     Running   0          1m    x.x.x.x   x.x.x.3   <none>           <none>
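
Both scenarios use whenUnsatisfiable: ScheduleAnyway, which treats the constraint as a soft preference.  If you would rather have pods stay in the Pending state than violate the constraint, you can use DoNotSchedule instead.  The following sketch spreads the integration runtime pods across zones rather than individual nodes; it assumes a multi-zone IKS cluster whose nodes carry the standard topology.kubernetes.io/zone label:

      template:
        spec:
          topologySpreadConstraint:
          - maxSkew: 1
            topologyKey: topology.kubernetes.io/zone   # spread across zones instead of individual nodes
            whenUnsatisfiable: DoNotSchedule           # hard constraint: pods stay Pending if it cannot be met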

Conclusion

With the release of the IBM® App Connect Operator version 12.14.0, you can configure topology spread constraints for App Connect Dashboard and integration runtime pods, giving you better resource utilisation in your clusters and higher availability for your applications.
