
Accessing Apache Kafka From Quarkus


This article describes how to develop microservices with Quarkus which use Apache Kafka running in a Kubernetes cluster.


Quarkus supports MicroProfile Reactive Messaging to interact with Apache Kafka. There is a nice guide, Using Apache Kafka with Reactive Messaging, which explains how to send and receive messages to and from Kafka.
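To give a flavor of that programming model, here is a minimal sketch (not code from the guide) of a bean that receives payloads from one channel and sends the transformed result to another. The channel names 'articles-in' and 'articles-out' are placeholders that would be mapped to Kafka topics in 'application.properties'; the javax imports match the Quarkus versions current when this article was written, newer releases use jakarta.* packages.

package org.acme.kafka;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

@ApplicationScoped
public class ArticleProcessor {

    // Receives each payload from the 'articles-in' channel and sends the
    // result to the 'articles-out' channel; both channel names are placeholders.
    @Incoming("articles-in")
    @Outgoing("articles-out")
    public String process(String article) {
        return article.toUpperCase();
    }
}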

The guide contains instructions on how to run Kafka locally via Docker with docker-compose. While this is probably the easiest way to get started, the next step for microservices developers is often to figure out how to access Apache Kafka in Kubernetes environments or hosted Kafka services.

Option 1: Apache Kafka Running in Kubernetes

The open-source project Strimzi provides container images and operators for running Apache Kafka on Kubernetes and Red Hat OpenShift. There is a nice blog series on Red Hat Developer that describes how to use Strimzi. To access Kafka from applications, there are several options, for example NodePorts, OpenShift routes, load balancers, and Ingress.

Sometimes these options can be overwhelming for developers, especially when all you need is a simple development environment to write some reactive hello world applications to get started. In my case, I wanted to install a simple Kafka server in my Minikube cluster.

There is a quick start guide on how to deploy Strimzi to Minikube. Unfortunately, it doesn’t explain how to access it from applications.

Based on these two articles I wrote a simple script that deploys Kafka to Minikube in less than 5 minutes. The script is part of the cloud-native-starter project. Run these commands to give it a try:

$ git clone https://github.com/IBM/cloud-native-starter.git
$ cd cloud-native-starter/reactive
$ sh scripts/start-minikube.sh
$ sh scripts/deploy-kafka.sh
$ sh scripts/show-urls.sh

The output of the last command prints the URL of the Kafka bootstrap server, which you’ll need in the next step. You can find all resources in the ‘kafka’ namespace.

To access Kafka from Quarkus, the Kafka connector has to be configured. When running the Quarkus application in the same Kubernetes cluster as Kafka, use the following configuration in ‘application.properties’: ‘my-cluster-kafka-external-bootstrap’ is the service name, ‘kafka’ the namespace, and ‘9094’ the port.

kafka.bootstrap.servers=my-cluster-kafka-external-bootstrap.kafka:9094
mp.messaging.incoming.new-article-created.connector=smallrye-kafka
mp.messaging.incoming.new-article-created.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
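
With this configuration in place, a bean can consume from the 'new-article-created' channel. A minimal sketch (class and method names are illustrative, not taken from the project):

package org.acme.kafka;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class NewArticleConsumer {

    // Called for every record that arrives on the 'new-article-created' channel,
    // which the configuration above binds to Kafka via the SmallRye connector.
    @Incoming("new-article-created")
    public void onNewArticleCreated(String articleId) {
        System.out.println("New article created: " + articleId);
    }
}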



When developing the Quarkus application locally, Kafka in Minikube is accessed via a NodePort. In this case, replace the kafka.bootstrap.servers value with the address printed by the following commands:

$ minikubeip=$(minikube ip)
$ nodeport=$(kubectl get svc my-cluster-kafka-external-bootstrap -n kafka --ignore-not-found --output 'jsonpath={.spec.ports[*].nodePort}')
$ echo ${minikubeip}:${nodeport}


Option 2: Kafka as a Service

Most cloud providers also offer managed Kafka services. For example, Event Streams is the managed Kafka service in the IBM Cloud. There is a free lite plan which offers access to a single partition in a multi-tenant Event Streams cluster. All you need is an IBM id, which is free; no credit card is required.

Like most Kafka services in production, Event Streams requires a secure connection. The additional configuration is again defined in ‘application.properties’.

kafka.bootstrap.servers=broker-0-YOUR-ID.kafka.svc01.us-south.eventstreams.cloud.ibm.com:9093,broker-4-YOUR-ID.kafka.svc01.us-south.eventstreams.cloud.ibm.com:9093,...MORE-SERVERS
mp.messaging.incoming.new-article-created.connector=smallrye-kafka
mp.messaging.incoming.new-article-created.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
mp.messaging.incoming.new-article-created.sasl.mechanism=PLAIN
mp.messaging.incoming.new-article-created.security.protocol=SASL_SSL
mp.messaging.incoming.new-article-created.ssl.protocol=TLSv1.2
mp.messaging.incoming.new-article-created.sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required username="token" password="YOUR-PASSWORD";


To enter this information, you need (1) the list of Kafka bootstrap servers and (2) your password for the Event Streams service. You can get both from the web console of the Event Streams service or via the IBM Cloud CLI.

My colleague Harald Uebele has developed a script that creates the service programmatically and returns these two pieces of information.
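
Once you have both values, it can be useful to verify the connection outside of Quarkus before wiring it into ‘application.properties’. The following is a minimal sketch using the plain Apache Kafka client; it assumes the kafka-clients library is on the classpath, and the topic name and placeholder values are illustrative:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EventStreamsCheck {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Same connection settings as in application.properties above
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
            "broker-0-YOUR-ID.kafka.svc01.us-south.eventstreams.cloud.ibm.com:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "connectivity-check");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "PLAIN");
        props.put("ssl.protocol", "TLSv1.2");
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.plain.PlainLoginModule required username=\"token\" password=\"YOUR-PASSWORD\";");

        // Read a few records to confirm the credentials work;
        // 'new-article-created' is used as an example topic name here.
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("new-article-created"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("Received: " + record.value());
            }
        }
    }
}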

Next Steps

The scripts mentioned in this article are part of the cloud-native-starter project, which describes how to develop reactive applications with Quarkus. My previous article describes the project.

Try out the code yourself.

