Dynamic CPU Allocation for Faster Startup in IntegrationRuntimes

By Rob Convery posted Thu October 30, 2025 05:20 AM


Dynamic CPU Allocation for Faster Startup in App Connect Operator 12.17.0

In App Connect Operator version 12.17.0, we’ve introduced a powerful new capability that optimizes container startup performance without increasing licensing costs. This feature allows you to allocate higher CPU resources during the container startup phase and then dynamically reduce them to lower values for the running (licensed) phase.

Why This Matters

Starting an IntegrationRuntime involves several CPU-intensive tasks:

  • Processing configuration data.
  • Launching the App Connect Enterprise (ACE) process.
  • Initializing multiple runtimes (C, Java, NodeJS).
  • Starting flows.

These operations demand significant CPU power during startup. However, once the runtime stabilizes, CPU usage drops dramatically. This mismatch creates a challenge:

  • High CPU allocation → Fast startup, but higher VPC license costs for unused resources.
  • Low CPU allocation → Lower license costs, but slow startup (sometimes several minutes).

The Solution: Dynamic CPU Scaling

This new capability leverages Kubernetes In-Place Pod Resize, a feature that graduated to BETA in Kubernetes 1.33.0 and is enabled by default on most Kubernetes distributions, including major SaaS providers such as IBM Cloud, Azure, and AWS. A BETA feature in Kubernetes is typically considered ready for production use but is not yet guaranteed to remain unchanged; if the feature does change, we will update the operator accordingly. BETA features receive bug fixes and improvements much like GA features. For OpenShift, the feature is available starting from 4.20 (currently in the RC phase).

Check if In-Place Pod Resize is Enabled

Run:

kubectl get --raw /metrics | grep kubernetes_feature_enabled | grep -i "InPlacePodVerticalScaling"

If the output ends with 1, the feature is enabled and you can use the new startupResources property.
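
As a reference point, a matching line from the API server metrics looks something like the example below (the exact stage label depends on your Kubernetes version); a trailing 1 means the feature gate is enabled, while 0 means it is disabled:

kubernetes_feature_enabled{name="InPlacePodVerticalScaling",stage="BETA"} 1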

How It Works

In the IntegrationRuntime Custom Resource (CR), you can now specify CPU resources for the startup phase:

spec:
  startupResources:
    limits:
      cpu: 2000m
    requests:
      cpu: 2000m

This configuration allocates 2000m CPU during container creation. Once ACE processes are initialized, the App Connect Operator automatically scales down the pod to the normal CPU values (either defaults or those specified in spec.template.spec.containers). 
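
To show where these fields sit in context, here is a fuller sketch of an IntegrationRuntime CR that combines startupResources with the normal running resources. The apiVersion and the metadata values are assumptions based on recent operator releases, and other required fields (such as license acceptance) are omitted for brevity:

apiVersion: appconnect.ibm.com/v1beta1   # assumed API version; check your installed CRD
kind: IntegrationRuntime
metadata:
  name: sample-runtime                   # illustrative name
  namespace: ace                         # illustrative namespace
spec:
  # Higher CPU only while the container starts up
  startupResources:
    requests:
      cpu: 2000m
    limits:
      cpu: 2000m
  # Values the operator scales the pod down to once ACE is initialized
  template:
    spec:
      containers:
        - name: runtime
          resources:
            requests:
              cpu: 300m
            limits:
              cpu: 300m

Note that this pairing keeps requests equal to limits both before and after the resize, which matters for the restriction described later in this post.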

Startup Flow Overview


  1. Initialization
    Go code initializes and starts the ACE runtime. Port 7600 opens for admin requests (used by the startupProbe).

  2. Pod Ready
    Kubernetes updates ContainersReady status to true.

  3. CPU Resize
    Operator detects ContainersReady and patches the pod to reduce CPU limits.
    The file /sys/fs/cgroup/cpu.max inside the runtime container is updated with the new limit.

  4. ACE Acknowledges
    The runtime container's Go code detects the new CPU limit and updates the pod's CPUResizeComplete status to true.

  5. Traffic Routing
    Kubernetes sees both ContainersReady and CPUResizeComplete as true, marks the pod as Ready, and routes traffic.
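
If you want to watch this sequence on a live pod, the commands below are one way to do it (the pod name is a placeholder; the container name runtime matches the CR examples later in this post). The first lists the pod conditions referenced in steps 2-5, and the second reads the cgroup file mentioned in step 3:

# List the pod conditions (ContainersReady, CPUResizeComplete, Ready, ...)
kubectl get pod <pod-name> -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'

# Read the effective CPU limit inside the runtime container (cgroup v2 format is "<quota> <period>", e.g. 200000 100000 for 2 CPUs)
kubectl exec <pod-name> -c runtime -- cat /sys/fs/cgroup/cpu.max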

Important: During startup, the IBM License Service does not count the container as running, so no VPC usage occurs until the pod is fully ready.

Note that if startupResources is not specified, the startup process is exactly the same as before and is unaffected by these changes; it does not use the CPUResizeComplete status.

Restrictions

There is one small restriction with the In-Place Pod Resize capability: you cannot resize from different request and limit values during the startup phase to equal values for normal running, because Kubernetes does not allow an in-place resize to change the pod's QoS class. For example, the following CR is valid because requests and limits are equal both before and after the resize.

spec:
  startupResources:
    requests:
      cpu: 2000m
    limits:
      cpu: 2000m
  template:
    spec:
      containers:
        - name: runtime
          resources:
            limits:
              cpu: 300m
            requests:
              cpu: 300m

The following CR is also valid. In this scenario, the user wanted different request and limit values during startup: the request covers the worst-case scenario where the whole cluster is restarting, while the higher limit allows extra headroom when just a single pod is restarting. To work around the restriction, the user set the running values fractionally apart (299m versus 300m), which is not enough to affect function.
 
spec:
  startupResources:
    requests:
      cpu: 500m
    limits:
      cpu: 2000m
  template:
    spec:
      containers:
        - name: runtime
          resources:
            limits:
              cpu: 300m
            requests:
              cpu: 299m

The CR below would not be valid because the startup request and limit values differ while the normal running values are the same.

spec:
  startupResources:
    requests:
      cpu: 500m
    limits:
      cpu: 2000m
  template:
    spec:
      containers:
        - name: runtime
          resources:
            limits:
              cpu: 300m
            requests:
              cpu: 300m

Summary

With App Connect Operator 12.17.0, you can now:

  • Speed up container startup by temporarily allocating more CPU, without restarting the container.
  • Automatically reduce CPU after initialization to minimize licensing costs.
  • Take advantage of Kubernetes’ In-Place Pod Resize for seamless scaling.