
Horizontal Pod Autoscaler Controller

By Gerald Hosch posted Tue August 16, 2022 09:17 AM

  
Written by Nishant Chauhan

In the world of Red Hat® OpenShift® Container Platform, the Horizontal Pod Autoscaler (HPA) controller is a key component, providing elasticity to workloads running inside a Deployment, StatefulSet, and similar objects. HPA increases or decreases the number of pods based on the resource utilization of the containers, comparing the current CPU and memory metrics against the desired targets. Since HPA is a core component, you do not need to install it separately.

HPA can be configured through the web console and the CLI; the easiest way is this command:

oc autoscale deployment node-example --cpu-percent=70 --min=2 --max=10
This command means: HPA will try to keep the average CPU utilization of the deployment's pods at 70%. If utilization rises above 70%, it adds pods, up to a maximum of 10; if utilization falls below 70%, it removes pods, but never below 2.
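
Under the hood, the HPA controller derives the desired replica count from a simple ratio, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). The snapshot numbers below are hypothetical, chosen only to illustrate the arithmetic:

```shell
# HPA's core formula: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
current_replicas=4
current_cpu=140   # hypothetical observed average CPU utilization, in percent of requests
target_cpu=70     # the --cpu-percent target
# Integer ceiling division: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "$desired"   # prints 8
```

So with 4 replicas running at twice the target utilization, the controller recommends doubling to 8 replicas (subject to the configured minimum and maximum).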

The HPA controller offers two more options to configure metrics, which provide more flexibility in how the workload behaves in a particular situation: the Scaling Policy and the Stabilization Window.

  • Scaling Policy: controls how fast the number of pods may increase or decrease
  • Stabilization Window: keeps pod scaling stable when metrics are fluctuating
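
The stabilization window works by keeping a rolling history of recent replica recommendations and, for scale-down, acting on the highest one, so a brief dip in load does not immediately remove pods. A minimal sketch of that selection, using made-up recommendation values:

```shell
# Hypothetical replica recommendations computed during the last stabilization window
recommendations="3 5 4"
# For scale-down, the controller uses the maximum recommendation in the window,
# so replicas only drop once every recent recommendation agrees they should.
stabilized=0
for r in $recommendations; do
  [ "$r" -gt "$stabilized" ] && stabilized=$r
done
echo "$stabilized"   # prints 5
```

Here a momentary dip to 3 is ignored: the workload stays at 5 replicas until all recommendations in the window fall below it.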

You can use these options to configure more complex HPA objects by creating a YAML file.
An example of an HPA object that uses both the Scaling Policy and the Stabilization Window is shown below:

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: node-example
  namespace: project-hpa
spec:
  maxReplicas: 50
  minReplicas: 2
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: node-example
  metrics:
  - type: Resource
    resource:
      name: memory
      target:
        type: AverageValue
        averageValue: 10Mi
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Pods
        value: 5
        periodSeconds: 1
      selectPolicy: Min

In this example:

  • A memory metric is used to enable HPA with custom values for the Scaling Policy and the Stabilization Window.
  • The spec section defines the maximum and minimum number of pod replicas for the node-example deployment and selects memory as the metric, with a target average value of 10Mi. The specification means: whenever the application consumes >= 10Mi on average, HPA increases the number of pods until the average value drops below 10Mi or the replica count reaches the maximum of 50.
  • The behavior section defines the scaleUp policy, which allows pods to increase at a rate of up to 5 pods per second (periodSeconds: 1); selectPolicy: Min instructs the autoscaler to choose the policy that affects the smallest number of pods.
  • Since no scaleDown policy is configured, the defaults apply, as shown below:

    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 100
        periodSeconds: 15
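
With the default Percent policy above, up to 100% of the current replicas may be removed every 15 seconds, so once the 300-second stabilization window allows it, the workload can in principle drop straight to minReplicas in a single step. A quick sketch of the per-period limit, assuming a hypothetical current replica count:

```shell
current_replicas=50
percent=100       # default scaleDown policy value
# Maximum pods that may be removed in one 15-second period under a Percent policy:
max_removed=$(( current_replicas * percent / 100 ))
echo "$max_removed"   # prints 50
```

Setting a smaller percent value (for example 10) would limit the removal rate and make scale-down more gradual.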


Example to showcase how HPA works:

1. Create a sample application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: node-example
spec:
  replicas: 1
  selector:
    matchLabels:
      app: node-example
  template:
    metadata:
      labels:
        app: node-example
    spec:
      containers:
      - name: node-example
        image: sys-loz-test-team-docker-local.artifactory.swg-devops.com/hpa-example
        imagePullPolicy: Always
        ports:
        - containerPort: 3000
        resources:
            limits:
              cpu: "0.5"
            requests:
              cpu: "0.25"
---
apiVersion: v1
kind: Service
metadata:
  name: node-example
  labels:
    app: node-example
spec:
  ports:
  - port: 3000
  selector:
    app: node-example

Note: to access the above container image you need to create a dockerconfigjson file with your credentials for the JFrog repository. If credentials are not available, you can use any containerized application instead, for example one from Docker Hub.

oc create secret generic secret-jfrog --from-file=.dockerconfigjson=</path/to/dockerconfigjson> --type=kubernetes.io/dockerconfigjson

oc secrets link default secret-jfrog --for=pull

2. Expose deployment

oc expose svc node-example

3. Create a simple HPA object

oc autoscale deployment node-example --cpu-percent=20 --min=2 --max=5
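
Plugging hypothetical load numbers into the HPA formula shows why this object tops out at 5 pods. For example, if the 2 initial replicas average 60% CPU utilization against the 20% target:

```shell
current_replicas=2
current_cpu=60    # hypothetical observed utilization after applying load
target_cpu=20     # target from the oc autoscale command above
max_replicas=5
# desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
# The recommendation is capped at maxReplicas:
[ "$desired" -gt "$max_replicas" ] && desired=$max_replicas
echo "$desired"   # prints 5
```

The raw recommendation would be 6 replicas, but the --max=5 cap limits the deployment to 5 pods.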

4. Put some load on the deployment; keep repeating the command until the CPU utilization rises above the 20% target

ab -c 5 -n 1000 -t 100000 http://<obtain from step no. 2>/

5. Monitor the HPA values and the pod replicas: open two terminals and run one of the commands below in each, in parallel with step 4
watch oc get hpa node-example
watch oc get pods

 

Last but not least

  • Do not use HPA together with VPA on CPU and memory metrics
  • HPA cannot be used with DaemonSets


BTW, Red Hat OpenShift Container Platform 4.11 is available, see the release notes for
Red Hat OpenShift 4.11 on IBM Z and IBM LinuxONE.

 
