Deploying Kafka on OpenShift
Bringing Kafka to the cloud.
This article describes an easy way for developers to deploy Kafka on Red Hat OpenShift.
There are multiple ways to use Kafka in the cloud. One is a managed service, such as IBM's Event Streams or Red Hat's OpenShift Streams for Apache Kafka. The big advantage of managed services is that you don't have to manage, operate, and maintain the messaging system yourself. As soon as you deploy services in your own clusters, you are usually responsible for managing them; even if you use operators, which help with day-2 tasks, you will have to do some extra work compared to managed services.
Another approach is to install Kafka in your own clusters. Especially in the early stages of a project, when developers simply want to try things out, this is a pragmatic way to get started. Several Kafka operators are available, which you can find on the OperatorHub page in the OpenShift console, for example:
- Red Hat Integration – AMQ Streams
Strimzi is the open-source upstream project for Red Hat’s AMQ Streams operator. It’s also the same code base used in Red Hat’s new managed Kafka service.
As always, you can install the operators through the OpenShift web console or programmatically.
For my application modernization example, I’ve used the programmatic approach to set up the Strimzi operator.
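A programmatic installation of an operator on OpenShift is typically done by applying an OLM Subscription resource with `oc apply -f`. The sketch below shows what this could look like for Strimzi; the channel (`stable`) and catalog source (`community-operators`) are assumptions and may differ in your cluster:

```yaml
# Sketch: subscribe to the Strimzi operator via Operator Lifecycle Manager.
# Channel and catalog source names are assumptions, not taken from the article.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: strimzi-kafka-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
```

Once the Subscription is reconciled, OLM installs the operator, which then watches for Kafka custom resources.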
The cluster itself is defined in the file kafka-cluster.yaml.
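The contents of kafka-cluster.yaml are not reproduced in the article; a minimal Strimzi Kafka custom resource consistent with the cluster name and external listener used below might look like this (replica counts, storage type, and namespace are assumptions):

```yaml
# Sketch of a minimal Strimzi Kafka cluster definition.
# Replicas, ephemeral storage, and the "kafka" namespace are assumptions.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
  namespace: kafka
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: external   # produces the my-cluster-kafka-external-bootstrap service
        port: 9094
        type: nodeport
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
```

Applying this resource triggers the operator to create the Kafka and ZooKeeper pods along with the associated bootstrap services.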
After this, Kafka will be available at `my-cluster-kafka-external-bootstrap.kafka:9094` for other containers running in the same cluster.
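Topics can also be managed declaratively through the KafkaTopic CRD that the Strimzi topic operator watches; the topic name, partition count, and replication factor below are illustrative:

```yaml
# Sketch: declaratively create a topic in the cluster defined above.
# Topic name and settings are illustrative assumptions.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaTopic
metadata:
  name: test-topic
  namespace: kafka
  labels:
    strimzi.io/cluster: my-cluster   # binds the topic to the "my-cluster" Kafka
spec:
  partitions: 3
  replicas: 1
```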
To learn more about OpenShift deployments and application modernization, check out the Application Modernization – From Java EE in 2010 to Cloud-Native in 2021 project on GitHub.
Published at DZone with permission of Niklas Heidloff, DZone MVB. See the original article here.