Introduction to Apache Kafka With Spring
Apache Kafka is a distributed streaming platform with three key capabilities: publish and subscribe to streams of records, store streams of records in a fault-tolerant, durable way, and process streams of records as they occur. Apache Kafka has many success stories in the Java world. This post will cover how to benefit from this powerful tool in the Spring universe.
Apache Kafka Core Concepts
Kafka is run as a cluster on one or more servers that can span multiple data centers. The Kafka cluster stores streams of records in categories called topics, and each record consists of a key, a value, and a timestamp.
From the documentation, Kafka has four core APIs:
- The Producer API allows an application to publish a stream of records to one or more Kafka topics.
- The Consumer API allows an application to subscribe to one or more topics and process the stream of records produced to them.
- The Streams API allows an application to act as a stream processor, consuming an input stream from one or more topics and producing an output stream to one or more output topics, effectively transforming the input streams to output streams.
- The Connector API allows building and running reusable producers or consumers that connect Kafka topics to existing applications or data systems.
A convenient way to run Kafka locally is with Docker. As it requires two images, one for ZooKeeper and one for Apache Kafka, this tutorial will use docker-compose. Follow these instructions:
- Install Docker
- Install Docker-compose
- Create a docker-compose.yml and set it with the configuration below:
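The original configuration is not reproduced here; a minimal sketch, assuming the widely used wurstmeister images and the hostname `kafka` for the broker, could look like this:

```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```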
Then, run the command:
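The command itself is missing from the text; presumably it starts both containers in the background from the directory containing docker-compose.yml:

```shell
docker-compose up -d
```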
To connect as localhost, also map the `kafka` hostname to localhost within Linux by appending the value below to the /etc/hosts file:
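A typical entry, assuming the broker advertises itself under the hostname `kafka`, would be:

```
127.0.0.1 kafka
```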
Application With Spring
To explore Kafka, we'll use the spring-kafka project. In this project, we'll build a simple name counter: based on a request, it will fire an event that updates a simple in-memory counter.
The Spring for Apache Kafka (spring-kafka) project applies core Spring concepts to the development of Kafka-based messaging solutions. It provides a "template" as a high-level abstraction for sending messages. It also provides support for message-driven POJOs with @KafkaListener annotations and a "listener container". These libraries promote the use of dependency injection and declarative programming. In all these cases, you will see similarities to the JMS support.
The first step is to create a Maven-based Spring project, to which we'll add the spring-kafka and spring-boot-starter-web dependencies.
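The dependency section of the pom.xml would then include entries like these (versions managed by the Spring Boot parent are assumed):

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
```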
By default, spring-kafka uses a String serializer and deserializer. We'll override this configuration to use JSON, so we can send Java objects as JSON through Kafka.
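One way to do this in Spring Boot is through application.properties, using the JSON (de)serializers shipped with spring-kafka (the trusted-packages entry lets the consumer deserialize our own classes):

```properties
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.properties.spring.json.trusted.packages=*
```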
KafkaTemplate is a template for executing high-level operations on Apache Kafka. We'll use this class in the name service to fire two events to Kafka: one to increment and another to decrement the counter.
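The service class is not reproduced in the text; a sketch of what it could look like, assuming a topic named `names` and a hypothetical `NameEvent` payload class (both names are illustrative, not from the original):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Hypothetical JSON payload for the counter events.
class NameEvent {
    public enum Action { INCREMENT, DECREMENT }
    private String name;
    private Action action;
    public NameEvent() { }  // no-arg constructor needed for JSON deserialization
    public NameEvent(String name, Action action) { this.name = name; this.action = action; }
    public String getName() { return name; }
    public Action getAction() { return action; }
}

@Service
public class NameService {

    private static final String TOPIC = "names"; // assumed topic name

    private final KafkaTemplate<String, NameEvent> kafkaTemplate;

    public NameService(KafkaTemplate<String, NameEvent> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    // Fires an increment event to Kafka.
    public void increment(String name) {
        kafkaTemplate.send(TOPIC, new NameEvent(name, NameEvent.Action.INCREMENT));
    }

    // Fires a decrement event to Kafka.
    public void decrement(String name) {
        kafkaTemplate.send(TOPIC, new NameEvent(name, NameEvent.Action.DECREMENT));
    }
}
```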
Now that we have covered the producer with the KafkaTemplate, the next step is to define a consumer class. A consumer class listens to Kafka events and executes an operation for each one. In this sample, the NameConsumer will listen to events easily with the @KafkaListener annotation.
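A sketch of such a consumer, assuming the same hypothetical `NameEvent` payload and `names` topic as on the producer side, and keeping the counts in a simple in-memory map:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class NameConsumer {

    // Simple in-memory counter keyed by name.
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();

    // Invoked by the listener container for every record published to the topic.
    @KafkaListener(topics = "names", groupId = "name-counter")
    public void listen(NameEvent event) {
        int delta = event.getAction() == NameEvent.Action.INCREMENT ? 1 : -1;
        counters.computeIfAbsent(event.getName(), k -> new AtomicInteger()).addAndGet(delta);
    }

    public int count(String name) {
        AtomicInteger counter = counters.get(name);
        return counter == null ? 0 : counter.get();
    }
}
```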
To conclude, we have seen the potential of Apache Kafka and why this project has become so popular among Big Data players. This was a simple example of how easy it is to integrate it with Spring.