Using KRaft Kafka for Development and Kubernetes Deployment

Simplify Kafka with KRaft—ditch ZooKeeper, streamline configs for Docker and Kubernetes, and integrate easily with Spring Boot for development and deployment.

By Sven Loesekann · Mar. 25, 25 · Tutorial

With KRaft, Kafka no longer needs ZooKeeper. KRaft is a consensus protocol that elects a leader among several server instances, which makes the Kafka setup much easier.

The new configuration is shown using the MovieManager project as an example.

Using Kafka for Development

For development, Kafka can be used with a simple Docker setup. That can be found in the runKafka.sh script:

Shell
 
#!/bin/sh
# network config for KRaft
docker network create app-tier --driver bridge
# Kafka with KRaft
docker run -d \
    -p 9092:9092 \
    --name kafka-server \
    --hostname kafka-server \
    --network app-tier \
    -e KAFKA_CFG_NODE_ID=0 \
    -e KAFKA_CFG_PROCESS_ROLES=controller,broker \
    -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
    -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
    -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-server:9093 \
    -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
    bitnami/kafka:latest
# start the container again after it has been stopped
docker start kafka-server


First, the Docker bridge network app-tier is created to enable the KRaft communication of the Kafka instances among each other. Then, the Kafka instance is started with the docker run command. Port 9092 needs to be published. After the run command has been executed, the kafka-server container can be stopped and started again with docker stop kafka-server and docker start kafka-server.

To run a Spring Boot application with Kafka in development, a kafka profile can be used. That enables running the application both with and without Kafka. An example configuration can look like the application-kafka.properties file:

Properties files
 
kafka.server.name=${KAFKA_SERVER_NAME:kafka-server}
spring.kafka.bootstrap-servers=localhost:9092
spring.kafka.producer.compression-type=gzip
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.properties.enable.idempotence=true
spring.kafka.consumer.group-id=group_id
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.isolation-level=read_committed
spring.kafka.consumer.transaction-id-prefix=tx-


The important lines are the first two. In the first line, the kafka.server.name is set to kafka-server, and in the second line, spring.kafka.bootstrap-servers is set to localhost:9092. That instructs the application to connect to the dockerized Kafka instance on localhost.
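
With the kafka profile active, the Kafka-specific beans can be guarded so that the application also runs without Kafka. A minimal sketch of a profile-guarded, transactional sender that matches the transaction-id-prefix and read_committed settings above could look like this (the class and topic names are illustrative assumptions, not taken from the MovieManager project):

Java
 
import org.springframework.context.annotation.Profile;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Illustrative sketch: this sender bean is only created when the "kafka" profile is active.
@Service
@Profile("kafka")
public class MovieEventSender {
  private final KafkaTemplate<String, String> kafkaTemplate;

  public MovieEventSender(KafkaTemplate<String, String> kafkaTemplate) {
    this.kafkaTemplate = kafkaTemplate;
  }

  public void send(String movieJson) {
    // Runs in a Kafka transaction (transaction-id-prefix is configured), so
    // consumers with isolation-level=read_committed only see committed records.
    this.kafkaTemplate.executeInTransaction(operations ->
      operations.send("movie-topic", movieJson));
  }
}
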

To connect to Kafka locally, the DefaultHostResolver has to be patched:

Java
 
// This class shadows the Kafka client's own DefaultHostResolver, so it has to be
// placed in the org.apache.kafka.clients package to replace it on the classpath.
package org.apache.kafka.clients;

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DefaultHostResolver implements HostResolver {
  // set at application startup, for example from environment variables
  public static volatile String IP_ADDRESS = "";
  public static volatile String KAFKA_SERVER_NAME = "";
  public static volatile String KAFKA_SERVICE_NAME = "";

  @Override
  public InetAddress[] resolve(String host) throws UnknownHostException {
    if(host.startsWith(KAFKA_SERVER_NAME) && !IP_ADDRESS.isBlank()) {
      // resolve the Kafka hostname to the configured IP address
      InetAddress[] addressArr = new InetAddress[1];
      addressArr[0] = InetAddress.getByAddress(host, 
        InetAddress.getByName(IP_ADDRESS).getAddress());
      return addressArr;
    } else if(host.startsWith(KAFKA_SERVER_NAME) && 
      !KAFKA_SERVICE_NAME.isBlank()) {
      // in Kubernetes, resolve via the Kafka service name instead
      host = KAFKA_SERVICE_NAME;
    }
    return InetAddress.getAllByName(host);
  }
}


The DefaultHostResolver handles the name resolution of the Kafka server hostname. If the hostname starts with KAFKA_SERVER_NAME and an IP_ADDRESS is set, it resolves the hostname to that IP address. That is needed because the local name resolution of kafka-server does not work (unless you put it in the hosts file).
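
The static fields of the resolver have to be set before the first Kafka connection is opened. One way to do that, sketched here with assumed environment variable names (the MovieManager project may wire this differently), is a small startup bean:

Java
 
import org.springframework.beans.factory.InitializingBean;
import org.springframework.stereotype.Component;

// Illustrative sketch: copy the Kafka host settings from the environment at startup.
@Component
public class KafkaHostResolverInitializer implements InitializingBean {
  @Override
  public void afterPropertiesSet() {
    // The environment variable names are assumptions for illustration.
    DefaultHostResolver.KAFKA_SERVER_NAME =
      System.getenv().getOrDefault("KAFKA_SERVER_NAME", "kafka-server");
    DefaultHostResolver.KAFKA_SERVICE_NAME =
      System.getenv().getOrDefault("KAFKA_SERVICE_NAME", "");
    DefaultHostResolver.IP_ADDRESS =
      System.getenv().getOrDefault("KAFKA_SERVER_IP", "127.0.0.1");
  }
}
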

Using Kafka in a Kubernetes Deployment

For a Kubernetes deployment, an updated configuration is needed. The values.yaml changes look like this:

YAML
 
...
kafkaServiceName: kafkaservice
...
secret:
  nameApp: app-env-secret
  nameDb: db-env-secret
  nameKafka: kafka-env-secret

envKafka:
  normal: 
    KAFKA_CFG_NODE_ID: 0
    KAFKA_CFG_PROCESS_ROLES: controller,broker
    KAFKA_CFG_LISTENERS: PLAINTEXT://:9092,CONTROLLER://:9093
    KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP: CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
    KAFKA_CFG_CONTROLLER_QUORUM_VOTERS: 0@kafkaservice:9093
    KAFKA_CFG_CONTROLLER_LISTENER_NAMES: CONTROLLER 


Because ZooKeeper is no longer needed, all of its configuration has been removed. The Kafka configuration is updated to the values that KRaft needs.

In the templates, the ZooKeeper configuration has been removed. The Kafka deployment template has not changed because only its parameters change; they are provided by the values.yaml via the helpers.tpl script. The Kafka service template needs to change to support the KRaft communication between the Kafka instances for leader election:

YAML
 
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.kafkaServiceName }}
  labels:
    app: {{ .Values.kafkaServiceName }}
spec:
  ports:
  - name: tcp-client
    port: 9092
    protocol: TCP
  - name: tcp-interbroker
    port: 9093
    protocol: TCP
    targetPort: 9093
  selector:
    app: {{ .Values.kafkaName }}   


This is a normal service configuration that opens port 9092 internally and works for the Kafka deployment. The tcp-interbroker entry is there for the KRaft leader election. It opens port 9093 internally and sets the targetPort to enable the Kafka instances to send requests to each other.

The application can now be run with the profiles prod and kafka and will start with the application-prod-kafka.properties configuration:

Properties files
 
kafka.server.name=${KAFKA_SERVER_NAME:kafkaapp}
spring.kafka.bootstrap-servers=${KAFKA_SERVICE_NAME}:9092
spring.kafka.producer.compression-type=gzip
spring.kafka.producer.transaction-id-prefix=tx-
spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
spring.kafka.consumer.group-id=group_id
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.isolation-level=read_committed
spring.kafka.consumer.transaction-id-prefix=tx-


The deployment configuration is very similar to the development configuration. The main difference is in the first two lines. The application will not start if the environment variable KAFKA_SERVICE_NAME is not set; that would indicate an error in the deployment configuration of the MovieManager application that has to be fixed.
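
To surface that error early with a clear message instead of a failed connection attempt, the variable can be checked before the Spring context starts. A minimal sketch for the Kubernetes deployment, assuming a main class named MovieManagerApplication (in a mixed setup, the check would need to be limited to the prod and kafka profiles):

Java
 
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Illustrative sketch: fail fast when the Kafka service name is missing.
@SpringBootApplication
public class MovieManagerApplication {
  public static void main(String[] args) {
    String kafkaServiceName = System.getenv("KAFKA_SERVICE_NAME");
    if (kafkaServiceName == null || kafkaServiceName.isBlank()) {
      throw new IllegalStateException(
        "KAFKA_SERVICE_NAME is not set - fix the deployment configuration.");
    }
    SpringApplication.run(MovieManagerApplication.class, args);
  }
}
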

Conclusion

With KRaft, Kafka no longer needs ZooKeeper, which makes the configuration simpler. For development, two Docker commands and a simple Spring Boot configuration are enough to start a Kafka instance to develop against. For a Kubernetes deployment, the Docker configuration of the development setup can be reused to create the Kafka instances, and a second port configuration is needed for KRaft.

The time when Kafka was harder to set up than other messaging solutions is over. It is now easy to develop against and easy to deploy. Kafka should now be used for all the use cases where it fits the requirements.
