Red Hat OpenShift Serverless on IBM Power

By Pratham Murkute

What is serverless?

Serverless is a deployment model that allows applications to be built and run without detailed knowledge of the infrastructure they are deployed on. Because the platform handles the infrastructure, it is simple to use and accessible to everyone, and developers can concentrate on writing software rather than worrying about servers.

Introducing Red Hat OpenShift Serverless

Red Hat OpenShift Serverless streamlines the process of delivering code from development to production by eliminating the need for developers to understand the underlying architecture or manage back-end hardware configurations to run their software.

Serverless computing is a form of cloud computing that removes the need to install and provision servers or to handle scaling. Because the platform abstracts these routine tasks away, developers can push code to production more rapidly than in traditional models. With OpenShift Serverless, developers can deploy applications and container workloads using Kubernetes-native APIs and their familiar languages and frameworks.

OpenShift Serverless on OpenShift Container Platform enables stateless, serverless workloads to run on a single, multicloud container platform with automated operations. Developers can use a single platform for hosting their microservices, legacy, and serverless applications. For more information, see OpenShift Serverless.

OpenShift Serverless is built on the open source Knative project, which provides portability and consistency across hybrid and multicloud environments through an enterprise-grade serverless platform. OpenShift Serverless 1.14.0 and later versions support multiple architectures, including IBM Power Little Endian (IBM Power LE). Following are the steps to effectively use OpenShift Serverless on Power LE:

  1. Installing OpenShift Serverless on an IBM Power LE based OpenShift Container Platform
  2. Deploying a sample application
  3. Autoscaling Knative-serving applications
  4. Splitting traffic between revisions of an application

Installing OpenShift Serverless on an IBM Power LE based OpenShift Container Platform

To install and use OpenShift Serverless, the Red Hat OpenShift Container Platform cluster must be sized correctly: OpenShift Serverless requires a cluster with at least 10 CPUs and 40 GB of memory, and the total capacity needed depends on the applications deployed. By default, each pod requests around 400 millicores of CPU, and the minimum requirements are based on this value. With that default, an application can scale up to 10 replicas; lowering an application's CPU request increases the number of possible replicas. For more information, see Installing OpenShift Serverless on OpenShift Container Platform.
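As a rough sketch of the Operator-based installation (the channel name and API versions below vary across OpenShift Serverless releases, so treat them as placeholders), installing the operator from the command line amounts to creating the openshift-serverless namespace, an OperatorGroup, and a Subscription:

Example: Installing the OpenShift Serverless Operator (illustrative)

apiVersion: v1
kind: Namespace
metadata:
  name: openshift-serverless
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: serverless-operators
  namespace: openshift-serverless
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: serverless-operator
  namespace: openshift-serverless
spec:
  channel: stable            # choose the channel that matches your Serverless release
  name: serverless-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace

After the operator is installed, you create a KnativeServing resource in the knative-serving namespace to deploy the Knative Serving components that the examples below rely on.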

Deploying a sample application

To deploy a serverless application using OpenShift Serverless, you create a Knative service: a Kubernetes-native resource that is defined by a route and a configuration and is described in a YAML file.

The following example creates a sample "Hello World" application that can be accessed remotely and demonstrates basic Serverless features.

Example: Deploying the sample application

apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
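        # Multi-arch sample image that also runs on IBM Power (ppc64le)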
        - image: quay.io/multi-arch/knative-samples-helloworld-go:latest
          env:
            - name: TARGET
              value: "knative sample application"

When you deploy this service (for example, with the oc apply command), Knative creates an immutable revision for this version of the application. Knative also creates a route, ingress, service, and load balancer for the application, and it automatically scales the pods based on traffic, including scaling idle pods down to zero. For more information, see Deploying a sample application.
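As an illustrative check (the URL below is hypothetical; the actual value depends on your cluster's domain), the status section of the deployed Knative service reports the generated URL and the latest ready revision:

Example: Service status after deployment (illustrative)

# Hypothetical excerpt from: oc get ksvc helloworld-go -o yaml
status:
  url: http://helloworld-go-default.apps.example.com
  latestReadyRevisionName: helloworld-go-00001

Requests to this URL are routed to the application, and the revision name is what you later reference when splitting traffic.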

Autoscaling Knative-serving applications

The OpenShift Serverless platform supports automatic pod scaling, including the ability to reduce the number of inactive pods to zero. To enable autoscaling for Knative Serving, you configure concurrency and scale bounds in the revision template by adding the target annotation or the containerConcurrency field.

Example: Autoscaling YAML

spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "2"
        autoscaling.knative.dev/maxScale: "10"

The minScale and maxScale annotations set the minimum and maximum number of pods that can serve the application. For more information, see Autoscaling.
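For the concurrency settings mentioned above, here is a minimal sketch (the value 50 is an arbitrary illustration): the target annotation sets a soft target that the autoscaler aims for, while the containerConcurrency field sets a hard limit on in-flight requests per pod.

Example: Concurrency settings (illustrative)

spec:
  template:
    metadata:
      annotations:
        # Soft target: the autoscaler aims for about 50 concurrent requests per pod
        autoscaling.knative.dev/target: "50"
    spec:
      # Hard limit: a pod handles at most 50 requests at the same time
      containerConcurrency: 50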

Splitting traffic between revisions of an application

With each update to the configuration of a service, a new revision of the service is created. By default, the service route points all traffic to the latest ready revision. You can change this behavior by defining which revisions receive a share of the traffic, as shown in the following example.

Example: Traffic splitting

spec:
  traffic:
  # Pin 30% of the traffic to a specific, older revision
  - latestRevision: false
    percent: 30
    revisionName: sample-00001
    tag: v1
  # Send the remaining 70% to whatever revision is latest
  # (revisionName must not be set when latestRevision is true)
  - latestRevision: true
    percent: 70
    tag: v2

Knative services allow traffic mapping, enabling each revision of a service to be assigned a specific share of the traffic. Traffic mapping also offers the option of creating unique URLs for accessing individual revisions: each tag gets its own dedicated URL, prefixed with the tag name. Read more about Traffic management.
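One common use of tags, sketched below with a hypothetical revision name, is to give a new revision a dedicated test URL while keeping it at 0 percent of the main traffic until it has been verified:

Example: Tagging a revision for testing (illustrative)

spec:
  traffic:
  # All regular traffic continues to flow to the latest ready revision
  - latestRevision: true
    percent: 100
  # The candidate revision receives no main traffic but remains reachable
  # through its tag-specific URL for testing
  - latestRevision: false
    percent: 0
    revisionName: sample-00003
    tag: candidate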

You can manage the traffic between the revisions of a service by splitting it and routing it to different revisions as required. The following figures depict how traffic is split between multiple revisions of an application: Figure 1 shows a single revision of the webserver handling 100% of the traffic, and Figure 2 shows two revisions of the webserver with the traffic split in a 70:30 ratio. For more information, see Traffic splitting.

Figure 1. A single revision of the webserver that handles 100% of the traffic

Figure 2. Two revisions of the webserver where the traffic is split in the ratio 70:30

Conclusion

OpenShift Serverless also supports legacy applications in any cloud, on-premises, or hybrid environment. If OpenShift is installed on the respective infrastructure, legacy applications can be containerized and deployed through the serverless model, which helps in building new products and customer experiences. The OpenShift Serverless platform provides a simplified developer experience for deploying applications in containers while also making life easier for operations teams.

OpenShift Serverless reduces development time from hours to minutes and from minutes to seconds. Developers get the microservices support they want, when they want it. Using this platform, enterprises benefit from agility, rapid development, and improved resource utilization, which eliminates overprovisioning and the cost of idle resources. In addition, OpenShift Serverless helps avoid under-provisioning and the revenue losses caused by poor service quality.

Check out the following resources to find information about OpenShift Serverless on IBM Power:
