Red Hat OpenShift Serverless with Apache Kafka

By Kumar Abhishek posted 3 days ago

Red Hat OpenShift Serverless is a serverless platform, built on the open-source Knative project, for deploying and managing applications. It can be integrated with Apache Kafka by leveraging Knative Eventing for event-driven architectures: using the Knative broker implementation for Apache Kafka, you can use Kafka for eventing and routing.

This blog explains how to create a simple serverless application and configure a Kafka cluster to send messages to it. The sample application was deployed using Red Hat OpenShift Container Platform 4.18.9 on IBM Power architecture, with the OpenShift Serverless 1.35.1 and Streams for Apache Kafka 2.9.0 operators installed.

Introduction to Knative

Knative is an open-source project that helps to deploy and manage modern serverless workloads on Kubernetes. Red Hat OpenShift Serverless is an enterprise-grade serverless offering based on Knative that provides developers with a complete set of tools to build, deploy, and manage serverless applications on OpenShift.

Knative has three primary components:

  • Serving: Enables rapid deployment and automatic scaling of containers through a request-driven model that serves workloads based on demand.
  • Eventing: An infrastructure for consuming and producing events that trigger applications. An application can be triggered by events from the application itself, from cloud services, or from a Red Hat AMQ stream.
  • Functions: A flexible approach for building source code into containers.

Steps to deploy a Serverless Kafka application on OpenShift Container Platform

Now that you're familiar with the core components of OpenShift Serverless—Serving, Eventing, and Functions—let’s walk through a practical example. In this section, we will demonstrate how to build a simple serverless application that reacts to events from an Apache Kafka cluster.

Figure 1 illustrates the architecture and flow of the application, showing how messages from a producer are routed through Kafka and processed by a Knative service.

Application flow

Figure 1: Application flow

Step 1: Create a namespace for a sample application.

Use the following commands to create a namespace for the application:

oc apply -f - << EOD
apiVersion: v1
kind: Namespace
metadata:
  name: serverless-demo
EOD

Step 2: Install Streams for Apache Kafka and OpenShift Serverless operators.

  1. Open the Red Hat OpenShift console.
  2. Click Operators > OperatorHub. On the All Items page, type Streams for Apache Kafka in the search field and follow the onscreen instructions to install the application.
    OperatorHub

    Figure 2: OperatorHub

  3. Repeat the previous step to install the Red Hat OpenShift Serverless operator.

    OperatorHub

    Figure 3: OperatorHub

Step 3: Set up a Kafka cluster in the serverless-demo namespace.

Use the following steps to set up a Kafka cluster:

  1. Create a Kafka cluster

    Create a Kafka cluster named my-cluster in the serverless-demo namespace, with three Kafka replicas, three ZooKeeper replicas, and an ephemeral storage type, using the following commands:

    oc apply -f - << EOD
    kind: Kafka
    apiVersion: kafka.strimzi.io/v1beta2
    metadata:
      name: my-cluster
      namespace: serverless-demo
    spec:
      kafka:
        version: 3.9.0
        replicas: 3
        listeners:
          - name: plain
            port: 9092
            type: internal
            tls: false
          - name: tls
            port: 9093
            type: internal
            tls: true
        config:
          offsets.topic.replication.factor: 3
          transaction.state.log.replication.factor: 3
          transaction.state.log.min.isr: 2
          default.replication.factor: 3
          min.insync.replicas: 2
          inter.broker.protocol.version: '3.9'
        storage:
          type: ephemeral
      zookeeper:
        replicas: 3
        storage:
          type: ephemeral
      entityOperator:
        topicOperator: {}
        userOperator: {}
    EOD
  2. Create a Kafka topic, named my-topic, on my-cluster.

    Run the following commands to create a Kafka topic, called my-topic, on my-cluster:

    oc apply -f - << EOD
    apiVersion: kafka.strimzi.io/v1beta2
    kind: KafkaTopic
    metadata:
      name: my-topic
      namespace: serverless-demo
      labels:
        strimzi.io/cluster: my-cluster
    spec:
      config:
        retention.ms: 604800000
        segment.bytes: 1073741824
      partitions: 10
      replicas: 1
    EOD
  3. Create a Kafka Bridge connector.

    Create the Kafka Bridge, named my-bridge, to allow the application to interact with the Kafka cluster using the following commands:

    oc apply -f - << EOD
    kind: KafkaBridge
    apiVersion: kafka.strimzi.io/v1beta2
    metadata:
      name: my-bridge
      namespace: serverless-demo
    spec:
      replicas: 1
      bootstrapServers: 'my-cluster-kafka-bootstrap:9092'
      http:
        port: 8080
    EOD

    Deploying the Kafka Bridge also creates a service named my-bridge-bridge-service.

    my-bridge-bridge-service

    Figure 4: my-bridge-bridge-service
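Before moving on, it is worth sanity-checking the numbers used in the cluster and topic definitions above. With acks=all, a producer write succeeds as long as min.insync.replicas replicas acknowledge it, so the gap between the replication factor and min.insync.replicas is the number of broker failures the cluster tolerates; the retention and segment values also translate into round units. A quick illustrative check (not part of the deployment):

```python
# Sanity checks on the Kafka settings used above (illustrative only).

def tolerated_broker_failures(replication_factor: int, min_insync_replicas: int) -> int:
    """Brokers that can fail while producers with acks=all still succeed."""
    return replication_factor - min_insync_replicas

# Cluster config: default.replication.factor=3, min.insync.replicas=2
assert tolerated_broker_failures(3, 2) == 1  # one broker can be lost safely

# Topic config: retention.ms=604800000 and segment.bytes=1073741824
retention_days = 604_800_000 / (1000 * 60 * 60 * 24)
segment_gib = 1_073_741_824 / 2**30
print(retention_days, segment_gib)  # 7.0 1.0 -> 7 days retention, 1 GiB segments
```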

Step 4: Create Knative Serving, Knative Eventing, and Knative Kafka instances

Create Knative Serving, Knative Eventing, and Knative Kafka instances using the following commands. Use the bootstrap server address (bootstrapServers) of the Kafka cluster when creating these instances.

oc apply -f - << EOD
apiVersion: operator.knative.dev/v1beta1
kind: KnativeServing
metadata:
  name: knative-serving
  namespace: knative-serving
---
apiVersion: operator.knative.dev/v1beta1
kind: KnativeEventing
metadata:
  name: knative-eventing
  namespace: knative-eventing
---
apiVersion: operator.serverless.openshift.io/v1alpha1
kind: KnativeKafka
metadata:
  name: knative-kafka
  namespace: knative-eventing
spec:
  broker:
    defaultConfig:
      authSecretName: ''
      bootstrapServers: 'my-cluster-kafka-bootstrap.serverless-demo.svc:9092'
      numPartitions: 10
      replicationFactor: 3
    enabled: true
  channel:
    authSecretName: ''
    authSecretNamespace: ''
    bootstrapServers: 'my-cluster-kafka-bootstrap.serverless-demo.svc:9092'
    enabled: true
  high-availability:
    replicas: 1
  logging:
    level: INFO
  sink:
    enabled: true
  source:
    enabled: true
EOD

Step 5: Verify the Knative instances.

Run the following commands to verify the Knative instances. All Knative Serving and Knative Eventing component pods should be in Running or Completed status.

oc get po -n knative-serving
oc get po -n knative-eventing
knative serving and knative eventing

Figure 5: knative-serving and knative-eventing

Step 6: Create the Knative service.

Create a Knative service with Knative Serving to run your business logic in response to the events generated by Kafka.

In the sample application, we created an application named event-display in the serverless-demo project using the multi-arch image gcr.io/knative-releases/knative.dev/eventing/cmd/event_display. The application is configured to scale from 0 to 10 pods based on incoming traffic and to handle one request at a time per pod.

oc apply -f - << EOD
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: event-display
  namespace: serverless-demo
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/target: "1"
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
        - image: gcr.io/knative-releases/knative.dev/eventing/cmd/event_display
EOD
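The annotations above drive the Knative Pod Autoscaler: target is the per-pod concurrency target, while minScale and maxScale bound the replica count. As a simplified model (the real autoscaler also uses stable/panic windows and averaging, so treat this as a sketch), the desired replica count is roughly the in-flight request count divided by the per-pod target, clamped to the scale bounds:

```python
import math

def desired_replicas(in_flight: int, target: int = 1,
                     min_scale: int = 0, max_scale: int = 10) -> int:
    """Simplified Knative autoscaler model: ceil(load / target), clamped."""
    if in_flight <= 0:
        return min_scale
    return max(min_scale, min(max_scale, math.ceil(in_flight / target)))

# With target=1, minScale=0, maxScale=10, as annotated above:
print(desired_replicas(0))   # 0  -> scales to zero when idle
print(desired_replicas(4))   # 4  -> one pod per concurrent request
print(desired_replicas(25))  # 10 -> capped at maxScale
```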

Step 7: Verify the Knative Serving instances.

Applying the application service configuration given in the previous step creates a route, service, revision, and other resources as part of Knative Serving. Verify these Knative Serving components using the following command:

oc get serving -n serverless-demo
knative serving instance

Figure 6: Knative Serving instance

Step 8: Create a Knative Source.

Create a KafkaSource instance named kafka-source in the serverless-demo namespace. It listens for messages on the Kafka topic my-topic and allows Knative Eventing to react to those messages, triggering the Knative Serving service as a result.

oc apply -f - << EOD
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source
  namespace: serverless-demo
  labels:
    app: kafka-source
    app.kubernetes.io/instance: kafka-source
    app.kubernetes.io/component: kafka-source
    app.kubernetes.io/name: kafka-source
    app.kubernetes.io/part-of: strimzi-my-bridge
  annotations:
    openshift.io/generated-by: OpenShiftWebConsole
spec:
  bootstrapServers: 
    - my-cluster-kafka-bootstrap.serverless-demo.svc:9092
  topics:
    - my-topic
  consumerGroup: my-kafka-group
  net:
    sasl:
      user: {}
      password: {}
    tls:
      caCert: {}
      cert: {}
      key: {}
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display
    uri: ''
EOD
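The event-display service simply prints the CloudEvents it receives. In binary content mode, each Kafka record is delivered to the sink as an HTTP POST whose ce-* headers carry the CloudEvent context attributes. A minimal sketch of extracting those attributes (a hypothetical helper with sample header values, not the actual event-display code):

```python
def parse_cloudevent_headers(headers: dict) -> dict:
    """Extract CloudEvent context attributes from binary-mode HTTP headers.

    Binary content mode maps each attribute to a "ce-"-prefixed header;
    this is an illustrative parser, not the event-display implementation.
    """
    return {
        name[len("ce-"):]: value
        for name, value in headers.items()
        if name.lower().startswith("ce-")
    }

# Sample headers resembling a KafkaSource-delivered event (values assumed):
headers = {
    "ce-specversion": "1.0",
    "ce-id": "partition:0/offset:42",
    "ce-source": "/apis/v1/namespaces/serverless-demo/kafkasources/kafka-source#my-topic",
    "ce-type": "dev.knative.kafka.event",
    "content-type": "application/json",
}
print(parse_cloudevent_headers(headers)["type"])  # dev.knative.kafka.event
```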

Step 9: Send a message to the application.

Use the following command to expose the my-bridge-bridge-service Kafka Bridge service as a route:

oc expose svc my-bridge-bridge-service -n serverless-demo

Extract the Kafka Bridge route and initialize the ROUTE variable for the producer script. The script uses this route to send a Hello World!! message every 0.5 seconds through the Kafka cluster to the consumer application backed by the Knative Serving instance.

ROUTE=$(oc get route my-bridge-bridge-service -n serverless-demo -ojson | jq -r '.spec.host')

i=0
while :; do
curl -X POST $ROUTE/topics/my-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{"records": [{"value": "'"$i"' Hello World!!"}]}'
echo $i
((i=i+1))
sleep 0.5
done
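The same producer loop can be sketched in Python using only the standard library. The payload matches the bridge's application/vnd.kafka.json.v2+json records format used by the curl command above; the BRIDGE_ROUTE environment variable is an assumption standing in for the extracted route host:

```python
import json
import os
import time
import urllib.request

def build_records_payload(value: str) -> bytes:
    """Build a JSON body for the Kafka Bridge /topics/<name> records API."""
    return json.dumps({"records": [{"value": value}]}).encode()

def send(route: str, topic: str, body: bytes) -> None:
    """POST one batch of records to the Kafka Bridge over its exposed route."""
    req = urllib.request.Request(
        f"http://{route}/topics/{topic}",
        data=body,
        headers={"Content-Type": "application/vnd.kafka.json.v2+json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors

if __name__ == "__main__":
    # BRIDGE_ROUTE is assumed to hold the my-bridge-bridge-service route host.
    route = os.environ.get("BRIDGE_ROUTE")
    if route:
        for i in range(10):
            send(route, "my-topic", build_records_payload(f"{i} Hello World!!"))
            time.sleep(0.5)
```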

Visualize the sample application

The Topology view of the serverless-demo project can be inspected in the OpenShift console before sending messages.

serverless-demo

Figure 7: serverless-demo

As shown in Figure 7, note that there are no pods of the event-display application. Pods will scale up when the script starts running.

serverless-demo

Figure 8: serverless-demo

Monitor the logs generated by the producer script and observe one of the pods processing a CloudEvent message.

CloudEvent message

Figure 9: CloudEvent message

Conclusion

In this blog, we explained how Red Hat OpenShift Serverless, built on the Knative project, can be effectively integrated with Apache Kafka to build scalable, event-driven applications. By leveraging Knative Serving and Knative Eventing along with Kafka sources and bridges, developers can create responsive serverless applications that react to real-time events with minimal infrastructure overhead.

The sample application, deployed on IBM Power architecture using OpenShift Container Platform, highlights the simplicity and power of combining OpenShift Serverless with Streams for Apache Kafka. It illustrates how serverless applications can dynamically scale based on incoming Kafka messages, enabling efficient resource utilization and seamless event processing.

As organizations move toward cloud-native and event-driven architectures, this integration provides a robust foundation for building modern, reactive systems with enterprise-grade scalability and reliability.
