
How to Monitor Apache Kafka Clusters in Real-Time on Red Hat OpenShift for IBM Z & LinuxONE

By Parameshwaran Krishnasamy posted yesterday

  
Authors: Parameshwaran Krishnasamy (Parameshwaran.K@ibm.com), Santosh Vasisht (Santosh.Vasisht@ibm.com), Arya Jena (arya.jena4@ibm.com), Basavaraju G (basavarg@in.ibm.com), Rishika Kedia (rishika.kedia@in.ibm.com)


Enterprises running mission-critical workloads increasingly rely on Apache Kafka for real-time data streaming and event-driven architectures. However, monitoring and managing large-scale Kafka clusters—especially in cloud-native environments like Red Hat OpenShift—can be complex, error-prone, and time-consuming. Challenges such as limited visibility into Kafka internals, lack of centralized monitoring, and the need for specialized operational expertise often hinder performance and scalability.

To address these challenges, organizations can leverage the Red Hat Streams for Apache Kafka Console Operator. This solution provides a unified, intuitive, real-time interface for monitoring and managing Kafka clusters, enabling improved resilience, scalability, and performance.

Real-World Challenge 

Managing Apache Kafka clusters in enterprise environments—especially on platforms like Red Hat OpenShift—presents several operational challenges: 
  • Monitoring performance across distributed nodes and partitions. 
  • Diagnosing real-time issues such as under-replicated partitions and consumer lag. 
  • Executing scaling operations like adding or removing brokers or partitions. 
  • Maintaining cluster health during traffic spikes or workload surges. 
  • Limited visual observability into Kafka’s internal state and metrics. 
These challenges hinder the ability to ensure high availability, reduce incident resolution time, and optimize Kafka performance at scale. 
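One of these challenges, consumer lag, is straightforward to quantify: per partition, lag is the log-end offset minus the consumer group's current offset. The sketch below sums lag from captured `kafka-consumer-groups.sh --describe` output (the group name, topic, and offsets are illustrative; in a live cluster you would pipe the real command's output into the same filter):

```shell
# Lag per partition = LOG-END-OFFSET - CURRENT-OFFSET.
# Live equivalent (run from the Kafka installation directory):
#   bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
#     --describe --group my-group
# Here we parse a captured sample of that output (values are illustrative):
sample='GROUP     TOPIC     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG
my-group  orders    0          120             150             30
my-group  orders    1          200             200             0'

# Sum the LAG column (field 6) to get total lag for the group
echo "$sample" | awk 'NR > 1 { total += $6 } END { print "total lag:", total }'
```

A total lag that keeps growing over successive samples indicates consumers are falling behind producers, which is exactly the condition the console surfaces visually.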

Root Cause 

  • Lack of centralized monitoring: Kafka environments often rely on multiple tools and manual setups, limiting full visibility. 
  • High operational complexity: The distributed architecture makes it challenging to track issues across topics, partitions, and consumer groups. 
  • Delayed incident response: Without real-time monitoring, issue detection and resolution tend to be reactive. 
  • Platform silos: Tools not optimized for specific infrastructures can cause performance bottlenecks and inefficiencies.

 

The Need for a Monitoring Tool 

To operate Kafka clusters effectively, organizations need: 
  • A centralized and real-time monitoring dashboard. 
  • Visual representation of Kafka architecture and health metrics. 
  • Detailed insights into Kafka nodes, topics, consumer groups, and partitions. 
  • Alerts and diagnostic tools to detect and resolve issues quickly. 
  • Seamless integration with IBM Z and LinuxONE environments.  

Using the Red Hat Streams for Apache Kafka Console Operator 

The Streams for Apache Kafka Console Operator deploys a web console that provides a unified, real-time view of connected Kafka clusters: their nodes, topics, partitions, and consumer groups, along with key health metrics.

Install the Console Operator 

Prerequisites:

1. An OpenShift 4.14 to 4.19 cluster. 

2. The oc command-line tool is installed and configured to connect to the OpenShift cluster. 

3. Access to the OpenShift cluster using an account with cluster-admin permissions, such as system:admin. 
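A quick sanity check for these prerequisites can be scripted before starting; this is only a convenience sketch, not part of the official procedure:

```shell
# Check that the oc CLI is on the PATH before proceeding
if command -v oc >/dev/null 2>&1; then
  echo "oc CLI found"
  # With a configured connection, cluster-admin access can be verified with:
  #   oc auth can-i '*' '*' --all-namespaces
else
  echo "oc CLI not found: install the OpenShift CLI before proceeding"
fi
```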

Procedure:

1. Navigate in the OpenShift web console to the Home > Projects page and create a project (namespace) for the installation. 

2. Navigate to the Operators > OperatorHub page. 

3. Scroll or type a keyword into the Filter by keyword box to find the Streams for Apache Kafka Console operator. 

The operator is located in the Streaming & Messaging category. 

 

4. Click Streams for Apache Kafka Console to display the operator information. 

 

5. Read the information about the operator and click Install. 

6. On the Install Operator page, choose from the following installation and update options: 

  1. Update Channel: Choose the update channel for the operator. 
    1. The (default) alpha channel contains all the latest updates and releases, including major, minor, and micro releases, which are assumed to be well tested and stable.
    2. An amq-streams-X.x channel contains the minor and micro release updates for a major release, where X is the major release version number. 
    3. An amq-streams-X.Y.x channel contains the micro release updates for a minor release, where X is the major release version number and Y is the minor release version number. 
  2. Installation Mode: Install the operator to all namespaces in the OpenShift cluster. A single instance of the operator will watch and manage consoles created throughout the OpenShift cluster. 
  3. Update approval: By default, the Streams for Apache Kafka Console operator is automatically upgraded to the latest console version by the Operator Lifecycle Manager (OLM). Optionally, select Manual if you want to manually approve future upgrades.  
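The same installation choices can also be expressed declaratively as an OLM Subscription resource, for teams that prefer the CLI over the web console. A minimal sketch, in which the Subscription name is hypothetical and the package name and catalog source must be verified against what OperatorHub shows on your cluster:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: streams-kafka-console        # hypothetical Subscription name
  namespace: openshift-operators     # watch-all-namespaces install mode
spec:
  channel: alpha                     # default update channel (see step 6.1)
  name: <package-name>               # look up the exact package name in OperatorHub
  source: redhat-operators           # assumed catalog source; verify on your cluster
  sourceNamespace: openshift-marketplace
  installPlanApproval: Automatic     # or Manual, matching the update-approval choice
```

Applying this with `oc apply -f` has the same effect as steps 6 and 7 in the web console.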

 

7. Click Install to install the operator to your selected namespace. 

8. After the operator is ready for use, navigate to Operators > Installed Operators to verify that the operator has installed to the selected namespace. 

The status will show as Succeeded. 

 

 

Deploying a Console Instance 

Prerequisites: 

  • An OpenShift 4.14 to 4.19 cluster. 

  • The oc command-line tool is installed and configured to connect to the OpenShift cluster. 

  • Access to the OpenShift cluster using an account with cluster-admin permissions, such as system:admin. 

    • If you use your own Streams for Apache Kafka deployment, verify the configuration by comparing it with the example deployment files provided with the console. 

    • If you already have Streams for Apache Kafka installed but want to create a new Kafka cluster for use with the console, example deployment resources are available to help you get started. 

    • NOTE: We have already created a few Kafka topics and loaded some messages. Users can also create sample topics and use them for validation. 
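If you want to create such a sample topic yourself, Strimzi manages topics through the KafkaTopic custom resource. A minimal sketch, where the topic name is hypothetical and the cluster label and namespace assume the example deployment used later in this post:

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: sample-topic                  # hypothetical topic name
  namespace: kafka
  labels:
    strimzi.io/cluster: console-kafka # must match the Kafka cluster's name
spec:
  partitions: 3
  replicas: 3
```

Apply it with `oc apply -f sample-topic.yaml -n kafka`; the topic then appears on the console's Topics page.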

Procedure: 

  1. Download and extract the console installation artifacts.

          The artifacts are included with installation and example files available from the release page. 

          The artifacts provide the deployment YAML files to install the Kafka cluster. Use the sample installation files located in examples/console/resources/kafka. 

          NOTE: We used the provided sample resources for metrics monitoring. Users can use the same files for validation. 

  2. Set environment variables to update the installation files: 

               export NAMESPACE=kafka

               export LISTENER_TYPE=route 

               export CLUSTER_DOMAIN=<domain-name>

  3. Install the Kafka cluster. 

          Run the following command to apply the YAML files and deploy the Kafka cluster to the defined namespace: 

                cat examples/console/resources/kafka/*.yaml | envsubst | oc apply -n ${NAMESPACE} -f - 

         This command reads the YAML files, replaces the namespace environment variables, and applies the resulting configuration to the specified OpenShift namespace. 
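The substitution step can be illustrated without a cluster. The sketch below mimics what envsubst does using plain shell (envsubst itself, from GNU gettext, performs the same ${VAR} replacement over the YAML stream):

```shell
NAMESPACE=kafka

# A minimal stand-in for one of the installation YAML files:
template='metadata:
  namespace: ${NAMESPACE}'

# Expand ${NAMESPACE} the way `envsubst` would (pure shell, via a heredoc):
rendered=$(eval "cat <<EOF
$template
EOF
")
echo "$rendered"
```

After substitution, the resulting manifest carries the concrete namespace, which is what `oc apply -f -` receives on stdin.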

  4. Check the status of the deployment: 

              oc get pods -n kafka 

          Output shows the operators and cluster readiness 

               NAME                               READY   STATUS    RESTARTS 

               strimzi-cluster-operator           1/1     Running   0 

               console-kafka-console-nodepool-0   1/1     Running   0 

               console-kafka-console-nodepool-1   1/1     Running   0 

               console-kafka-console-nodepool-2   1/1     Running   0 
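For scripting, readiness can be confirmed by counting pods that are not yet Running. The sketch below parses captured `oc get pods` output; in practice you would pipe the live command into the same awk filter:

```shell
# Captured sample output; live equivalent: oc get pods -n kafka | awk ...
pods='NAME                               READY   STATUS    RESTARTS
strimzi-cluster-operator               1/1     Running   0
console-kafka-console-nodepool-0       1/1     Running   0
console-kafka-console-nodepool-1       1/1     Running   0
console-kafka-console-nodepool-2       1/1     Running   0'

# Count pods whose STATUS column (field 3) is not "Running"
not_ready=$(echo "$pods" | awk 'NR > 1 && $3 != "Running"' | wc -l)
echo "pods not running: $not_ready"
```

A result of zero means the Kafka cluster is ready for the console deployment in the next section.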

Deploying and connecting the console to a Kafka cluster    

  1. Create a Console custom resource in the desired namespace. 

       If you deployed the example Kafka cluster using the steps above, you can use the configuration in the examples/console/resources/console/010-Console-example.yaml file unchanged. 

          Otherwise, configure the resource to connect to your Kafka cluster. 

          Example console configuration 

               apiVersion: console.streamshub.github.com/v1alpha1 
               kind: Console 
               metadata: 
                 name: my-console 
               spec: 
                 hostname: my-console.<cluster_domain> 
                 kafkaClusters: 
                   - name: console-kafka 
                     namespace: kafka 
                     listener: secure 
                     properties: 
                       values: [] 
                       valuesFrom: [] 
                     credentials: 
                       kafkaUser: 
                         name: console-kafka-user1 

  2. Apply the Console configuration to install the console. 

          In this example, the console is deployed to the console-namespace namespace: 

              oc apply -f examples/console/resources/console/010-Console-example.yaml -n console-namespace

  3. Check the status of the deployment:  

               oc get pods -n console-namespace

          Output shows the deployment name and readiness 

                NAME            READY   STATUS    RESTARTS 

                console-kafka   1/1     Running   0 

  4. Access the console. 

          When the console is running, use the hostname specified in the Console resource (spec.hostname) to access the user interface. 
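The hostname can also be read back from the Console resource itself with a jsonpath query; a sketch (the resource kind is qualified with its API group to avoid clashing with OpenShift's built-in Console resource, and the placeholder value stands in for a live lookup):

```shell
# Live lookup (requires a cluster):
#   HOST=$(oc get consoles.console.streamshub.github.com my-console \
#     -n console-namespace -o jsonpath='{.spec.hostname}')
HOST="my-console.apps.example.com"   # placeholder for the value returned above
echo "Console URL: https://${HOST}"
```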

With the installation and deployment complete, we can now explore the console. 

Navigating the Streams for Apache Kafka Console 

When we open the Streams for Apache Kafka Console, the homepage displays a list of connected Kafka clusters. Click a cluster name to view its details from the following pages: 

Cluster overview 

Displays high-level information about the Kafka cluster, including its status, key metrics, and resource utilization.  

 

 

Nodes 

Provides details on broker and controller nodes, including their roles, operational status, and partition distribution.  

 

 

Topics 

Lists topics and their configurations, including partition-specific information and connected consumer groups. 

 

 

 

 

 

 

Consumer groups 

Displays consumer group activity, including offsets, lag metrics, and partition assignments. 

 

 

Conclusion 

With the Red Hat Streams for Apache Kafka Console Operator, enterprises can finally simplify Kafka management. It delivers real-time visibility, faster troubleshooting, and seamless scalability, all from a single, intuitive console. The result: stronger resilience, higher performance, and less operational complexity, empowering teams to focus on innovation instead of infrastructure. 

