
Simplify Kafka Management with Strimzi on Kubernetes

By Archana Chinnaiah posted Thu October 19, 2023 12:36 AM


Apache Kafka has become the go-to streaming platform for real-time data processing in modern architectures. Its ability to handle massive amounts of data, provide fault tolerance, and ensure high throughput has made Kafka a popular choice among developers. However, managing and scaling Kafka clusters can be complex and time-consuming. That's where Strimzi comes in. Strimzi is an open-source project that simplifies Apache Kafka management on Kubernetes, making it easier to deploy, scale, and monitor Kafka clusters efficiently.

Introduction

Strimzi is a CNCF (Cloud Native Computing Foundation) sandbox project that provides a way to run Apache Kafka on Kubernetes in an optimized and scalable manner. It leverages Kubernetes operators to automate the deployment, configuration, and monitoring of Kafka clusters.

Key Features

Easy Installation

Strimzi provides a simple way to install Kafka on Kubernetes. It comes with a set of Kubernetes manifests that can be easily deployed using standard Kubernetes tools like kubectl. Once deployed, the Kafka clusters are automatically managed by Strimzi operators.

Scalability and Elasticity

With Strimzi, scaling Kafka clusters becomes hassle-free. It allows you to dynamically scale your Kafka clusters up or down based on the workload. Being built on Kubernetes, it also leverages Kubernetes' scaling capabilities, enabling you to handle even the highest loads without any disruptions.

Integration with Kubernetes Ecosystem

Strimzi seamlessly integrates with various Kubernetes tools and features. It supports integration with Kubernetes' Service Discovery for discovering Kafka brokers, Kubernetes' Storage Classes for dynamic provisioning of persistent volumes, and Kubernetes' Ingress Controllers for easy access to Kafka from outside the cluster.

Monitoring and Alerting

Strimzi provides built-in monitoring and alerting capabilities. It exposes Kafka metrics via Prometheus, enabling you to easily monitor the health and performance of your Kafka clusters. It also integrates with popular alerting tools like Grafana and Alertmanager, allowing you to set up alerts and notifications for any critical events or anomalies.

Enhanced Security

Security is a critical aspect of any data infrastructure. Strimzi provides support for authentication and authorization mechanisms in Kafka clusters running on Kubernetes. It allows you to configure secure communication between Kafka components using SSL/TLS encryption and enables integration with external authentication systems like OAuth2 and LDAP.

To start with, ensure that a cluster is deployed in your Kubernetes environment and the Strimzi operator is installed. Also make sure that a topic has been created.

Configuring and accessing Strimzi

When deploying Strimzi on Kubernetes, you can access Kafka both internally within the Kubernetes cluster and externally from outside the cluster. Here's how you can access Strimzi internally and externally:

External Access

To access Kafka externally from outside the Kubernetes cluster, you have a few options depending on your network setup. Configure an external listener for the Strimzi Kafka cluster to make it accessible to clients beyond the Kubernetes environment, and define the connection type in the external listener's configuration. The connection type can be LoadBalancer, NodePort, Route, or Ingress. In this article we will configure the cluster listener with type "route".

Step 1 :

Modify the cluster YAML file to add an external listener with type route. The route will use port 9094. TLS is mandatory for access via routes, which is why it is set to true. Save the file once you make the below changes.
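A minimal sketch of such a listener, based on the Strimzi Kafka custom resource (the cluster name `my-cluster` and listener name `external1` are illustrative placeholders):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    listeners:
      # External listener exposed via an OpenShift Route;
      # TLS is mandatory for route-type listeners
      - name: external1
        port: 9094
        type: route
        tls: true
```

Note that route-type listeners require an OpenShift environment; on vanilla Kubernetes you would use one of the other connection types instead.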

Step 2 :

Once the configuration is saved, check the result with "kubectl get svc"; you should see a service whose NAME ends with "-bootstrap".

Step 3:

The next step is to get the bootstrap server address. Execute "kubectl get kafka CLUSTER_NAME -o=yaml" and check the "status" section for the listener status. Replace CLUSTER_NAME with the actual cluster name. You should see the listener name, followed by other details including the certificate.

Step 4:

Execute the below command to get the CA certificate and store it on the local device. Replace $CLUSTER_NAME with the actual cluster name.
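One common way to do this, assuming the Strimzi-generated cluster CA secret follows the standard `<cluster-name>-cluster-ca-cert` naming:

```shell
# Extract the cluster CA certificate from the Strimzi-managed secret
# and decode it into a local PEM file
kubectl get secret $CLUSTER_NAME-cluster-ca-cert \
  -o jsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
```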

Step 5:

The next step is to add the certificate to the Java trust store using the below command.
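A sketch of the import, assuming the `ca.crt` file from the previous step; the alias `strimzi-kafka-ca`, keystore file name, and store password are placeholders you should adapt:

```shell
# Import the cluster CA certificate into a Java trust store
keytool -importcert -trustcacerts -alias strimzi-kafka-ca \
  -file ca.crt -keystore truststore.jks \
  -storepass changeit -noprompt
```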

Check whether the certificate was added to the trust store. The output should list the certificate under the alias name we specified.
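For example (assuming the keystore file and password used during the import):

```shell
# List the trust store contents to verify the CA was imported
keytool -list -keystore truststore.jks -storepass changeit
```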

Step 6:

Get the bootstrap address by executing the following command. Replace <LISTENERNAME> with the actual listener name, and CLUSTER_NAME with the actual cluster name.
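One way to query this from the Kafka resource status, assuming a recent Strimzi version where listener status entries carry a `name` and `bootstrapServers` field (`<LISTENER_NAME>` and `CLUSTER_NAME` are placeholders):

```shell
# Read the external bootstrap address from the Kafka resource status
kubectl get kafka CLUSTER_NAME \
  -o=jsonpath='{.status.listeners[?(@.name=="<LISTENER_NAME>")].bootstrapServers}'
```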

The result of the above command for cluster name my-cluster and listener name external1 is shown below. Notice that the port number is 443 when accessing it externally via a route.

Step 7:

To test from outside the cluster, download the Kafka package and extract it. Create a file (strm.properties) on the local drive with the below mentioned content. Replace BOOTSTRAP_IP and TRUST_STORE_LOCATION with actual values.
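A minimal sketch of the client properties for a TLS route listener; the trust store password is a placeholder matching whatever you used when creating the trust store:

```properties
# TLS connection settings for the external route listener
security.protocol=SSL
ssl.truststore.location=TRUST_STORE_LOCATION
ssl.truststore.password=changeit
```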

Step 8:

Execute the below commands on the command line, replacing BOOTSTRAP_IP and TOPIC_NAME, to test the producer and consumer.

Producer
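A sketch of the console producer invocation, run from the extracted Kafka directory (port 443 as noted above for route access):

```shell
# Start a console producer against the external bootstrap address
bin/kafka-console-producer.sh \
  --bootstrap-server BOOTSTRAP_IP:443 \
  --topic TOPIC_NAME \
  --producer.config strm.properties
```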

Consumer
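A matching console consumer invocation, also run from the extracted Kafka directory:

```shell
# Consume messages from the beginning of the topic
bin/kafka-console-consumer.sh \
  --bootstrap-server BOOTSTRAP_IP:443 \
  --topic TOPIC_NAME \
  --from-beginning \
  --consumer.config strm.properties
```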

Results of producing and consuming messages

Internal Access

To access Kafka internally within the Kubernetes cluster, you typically use a Kafka image or tool running in the same Kubernetes namespace as the Kafka cluster. The cluster is configured with authentication and authorization (this configuration does not require certificates). Here are the steps to access Strimzi internally:

Step 1 :

Modify the cluster YAML file to add a listener with type internal. Internal access will use port 9092 and have SCRAM-SHA-512 authentication enabled. Authorization is set to simple. Save the file once you make the below changes.
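A minimal sketch of the relevant parts of the Kafka resource (listener name and `tls: false` are illustrative assumptions; authorization sits at the `spec.kafka` level):

```yaml
spec:
  kafka:
    listeners:
      # Plain internal listener with SCRAM-SHA-512 authentication
      - name: internal
        port: 9092
        type: internal
        tls: false
        authentication:
          type: scram-sha-512
    # Simple (ACL-based) authorization for the whole cluster
    authorization:
      type: simple
```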

Step 2 :

Create a Kafka user with authentication type scram-sha-512.
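A sketch of the KafkaUser resource (the cluster label value `my-cluster` is a placeholder for your cluster name):

```yaml
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: scramuser
  labels:
    # Must match the name of the Kafka cluster this user belongs to
    strimzi.io/cluster: my-cluster
spec:
  authentication:
    type: scram-sha-512
```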

Step 3:

Once the user is created successfully, a secret named after the user (scramuser) will be created, containing the password and sasl.jaas.config. Make a note of sasl.jaas.config.
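One way to read the generated value, assuming the Strimzi convention of naming the secret after the KafkaUser:

```shell
# Decode the SASL JAAS configuration from the user's secret
kubectl get secret scramuser \
  -o jsonpath='{.data.sasl\.jaas\.config}' | base64 -d
```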

Step 4:

Get the bootstrap address by executing the following command. Replace <LISTENERNAME> with the actual listener name, and CLUSTER_NAME with the actual cluster name.
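As in the external case, this can be read from the Kafka resource status (assuming a Strimzi version whose listener status entries expose `name` and `bootstrapServers`):

```shell
# Read the internal bootstrap address from the Kafka resource status;
# for an internal listener this resolves to the cluster's bootstrap service
kubectl get kafka CLUSTER_NAME \
  -o=jsonpath='{.status.listeners[?(@.name=="<LISTENER_NAME>")].bootstrapServers}'
```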

The result of the above command for cluster name my-cluster and the internal listener is shown below. Notice that the port number is 9092 for internal access.


Step 5:

To test within the cluster, create a file (cu.properties) on the local drive with the below mentioned content. Replace BOOTSTRAP_IP and the SASL credentials with actual values.
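A minimal sketch of the client properties for the SCRAM listener; `SASL_PLAINTEXT` assumes the internal listener was created without TLS, and the jaas line should be the value taken from the user's secret in Step 3:

```properties
# SCRAM-SHA-512 authentication over a plain internal listener
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="scramuser" password="<PASSWORD>";
```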

Step 6:

Create a pod named <PRODUCER_POD_NAME> using a Kafka image.
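One way to do this with the Strimzi Kafka image (the image tag is an assumption; pick one matching your Strimzi version):

```shell
# Start a long-running pod we can exec into for the producer
kubectl run <PRODUCER_POD_NAME> --restart=Never \
  --image=quay.io/strimzi/kafka:latest-kafka-3.5.1 \
  --command -- sleep infinity
```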

Step 7:

Copy the file created in Step 5 to the pod using the below command.
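For example, copying into the pod's /tmp directory (the target path is an illustrative choice):

```shell
# Copy the client properties file into the producer pod
kubectl cp cu.properties <PRODUCER_POD_NAME>:/tmp/cu.properties
```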

Step 8:

Create a pod named <CONSUMER_POD_NAME> using a Kafka image.
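The same approach as for the producer pod (again, the image tag is an assumption):

```shell
# Start a long-running pod we can exec into for the consumer
kubectl run <CONSUMER_POD_NAME> --restart=Never \
  --image=quay.io/strimzi/kafka:latest-kafka-3.5.1 \
  --command -- sleep infinity
```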

Step 9:

Copy the file created in Step 5 to the pod using the below command.
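As before, copying into the pod's /tmp directory:

```shell
# Copy the client properties file into the consumer pod
kubectl cp cu.properties <CONSUMER_POD_NAME>:/tmp/cu.properties
```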

Step 10:

Execute the below commands in the pods created above, replacing BOOTSTRAP_IP and TOPIC_NAME, to test the producer and consumer.

Producer
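A sketch of the producer invocation from inside the pod; the /opt/kafka path assumes the Strimzi Kafka image layout:

```shell
# Run a console producer inside the producer pod
kubectl exec -it <PRODUCER_POD_NAME> -- \
  /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server BOOTSTRAP_IP:9092 \
  --topic TOPIC_NAME \
  --producer.config /tmp/cu.properties
```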

Consumer
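And the matching consumer invocation (same path assumption):

```shell
# Run a console consumer inside the consumer pod
kubectl exec -it <CONSUMER_POD_NAME> -- \
  /opt/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server BOOTSTRAP_IP:9092 \
  --topic TOPIC_NAME \
  --from-beginning \
  --consumer.config /tmp/cu.properties
```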

Results of producing and consuming messages

Wrap Up

In this blog post we learned how to access Kafka through Strimzi from both inside and outside the cluster. Java and Python clients can also be used to test the above configuration for internal and external access. Hope you enjoyed the read!

Reference - Strimzi Documentation

Acknowledgement - Sandeep Patil, Ramakrishna Vadla, Ranjith Rajagopalan Nair
