Integrating Quarkus With Apicurio Service Registry
Step-by-step tutorial to develop a simple microservice based on Quarkus, integrated with an Apicurio Service Registry as a first step toward an Event-Driven Architecture (EDA).
Most new cloud-native applications and microservices designs are based on event-driven architecture (EDA), responding to real-time information by sending and receiving information about individual events. This kind of architecture relies on asynchronous, non-blocking communication between event producers and consumers through an event-streaming backbone, such as Apache Kafka running on top of Kubernetes. In these scenarios, where a large number of different events are managed, it is very important to define a governance model in which each event can be defined as an API, so that producers and consumers can produce and consume checked and validated events. This is where a Service Registry helps us.
From my field experience with many projects, I've found that the most typical landscape is based on the following well-known components:
- Strimzi to deploy Apache Kafka clusters as a streaming backbone.
- Apicurio Service Registry as a datastore for an events API.
- OpenShift Container Platform to deploy and run the different components.
- Quarkus as a framework to develop client applications.
- Avro as a data serialization system to declare schemas as an events API.
This article describes how easy it is to integrate your Quarkus applications with Apicurio Service Registry.
Apicurio Service Registry
Service Registry is a datastore for sharing standard event schemas and API designs across API and event-driven architectures. Service Registry decouples the structure of your data from your client applications, so you can share and manage your data types and API descriptions at runtime. Decoupling your data structure from your client applications reduces costs by decreasing overall message size and creates efficiencies by increasing consistent reuse of schemas and API designs across your organization.
Some of the most common use cases where Service Registry helps us are:
- Client applications can dynamically push or pull the latest schema updates to or from Service Registry at runtime without needing to redeploy.
- Developer teams can query the registry for existing schemas required for services already deployed in production.
- Developer teams can register new schemas required for new services in development or rolling to production.
- Teams can store the schemas used to serialize and deserialize messages, which client applications can then reference to ensure that the messages they send and receive are compatible with those schemas.
Apicurio is an open source project that provides a Service Registry ready to be involved in this scenario with the following main features:
- Support for multiple payload formats for standard event schemas and API specifications.
- Pluggable storage options including AMQ Streams, embedded Infinispan, or a PostgreSQL database.
- Registry content management using a web console, REST API command, Maven plug-in, or Java client.
- Rules for content validation and version compatibility to govern how registry content evolves over time.
- Full Apache Kafka schema registry support, including integration with Kafka Connect for external systems.
- Client serializer/deserializer (Serdes) to validate Kafka and other message types at runtime.
- Cloud-native Quarkus Java runtime for low memory footprint and fast deployment times.
- Compatibility with existing Confluent schema registry client applications.
- Operator-based installation of Service Registry on OpenShift.
Client Application Workflow
The typical workflow when we introduce a Service Registry in our architecture is:
- Declare the event schema using one of the widely used data formats, such as Apache Avro, JSON Schema, Google Protocol Buffers, OpenAPI, AsyncAPI, GraphQL, Kafka Connect schemas, WSDL, or XML Schema (XSD).
- Register the schema as an artifact in the Service Registry through the Service Registry UI, REST API, Maven Plugin, or Java clients. From there, client applications can use that schema to validate that messages conform to the correct data structure at runtime.
- Kafka Producer applications use a serializer to encode messages that conform to a specific event schema.
- Kafka Consumer applications then use a deserializer to validate that messages have been serialized using the correct schema, based on a specific schema ID.
This workflow ensures consistent schema use and helps to prevent data errors at runtime.
Avro Schemas Into Service Registry
Avro provides a JSON schema specification to declare a large variety of data structures, such as our simple example:
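A minimal sketch of what this schema could look like (the record name Message matches the class used later in this article; the namespace and fields here are illustrative assumptions, not the exact ones from the sample project):

```json
{
  "name": "Message",
  "namespace": "org.acme.kafka.schema.avro",
  "type": "record",
  "doc": "Simple message event.",
  "fields": [
    { "name": "content", "type": "string", "doc": "Content of the message." },
    { "name": "timestamp", "type": "long", "doc": "Creation time (epoch millis)." }
  ]
}
```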
This schema defines a simple message event.
Avro also provides a Maven plugin (avro-maven-plugin) to autogenerate Java classes based on the schema definitions.
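A typical configuration in the pom.xml could be similar to this sketch (the plugin version and folder layout are assumptions):

```xml
<plugin>
  <groupId>org.apache.avro</groupId>
  <artifactId>avro-maven-plugin</artifactId>
  <version>1.10.1</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <goal>schema</goal>
      </goals>
      <configuration>
        <!-- Folder with the *.avsc schema definition files -->
        <sourceDirectory>${project.basedir}/src/main/resources/avro</sourceDirectory>
        <!-- Folder where the Java classes will be generated -->
        <outputDirectory>${project.build.directory}/generated-sources/avro</outputDirectory>
      </configuration>
    </execution>
  </executions>
</plugin>
```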
Now we can publish it to Service Registry so it can be used at runtime by our client applications. The Apicurio Maven Plugin is an easy way to publish the schemas to Service Registry with a simple definition in our pom.xml.
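A sketch of the plugin definition, assuming the 1.x version of the plugin, where each child element of artifacts maps an artifact ID to a schema file (the registry URL, plugin version, and file paths are illustrative):

```xml
<plugin>
  <groupId>io.apicurio</groupId>
  <artifactId>apicurio-registry-maven-plugin</artifactId>
  <version>1.3.2.Final</version>
  <executions>
    <execution>
      <phase>generate-sources</phase>
      <goals>
        <!-- Registers the schemas in the Service Registry -->
        <goal>register</goal>
      </goals>
      <configuration>
        <registryUrl>http://service-registry:8080/api</registryUrl>
        <artifactType>AVRO</artifactType>
        <artifacts>
          <!-- Element name = artifact ID, value = schema file -->
          <messages>${project.basedir}/src/main/resources/avro/message.avsc</messages>
        </artifacts>
      </configuration>
    </execution>
  </executions>
</plugin>
```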
Using the Apicurio Maven Plugin in our Maven lifecycle could help us define or extend our ALM processes (including our CI/CD pipelines) to publish or update the schemas every time we release new versions of them. This is beyond the scope of this article, but it is something you could analyze further.
As soon as we publish our schema into the Service Registry, we can manage it from the UI.
Quarkus, Apache Kafka, and Service Registry
Quarkus provides a set of dependencies that allow our application to produce and consume messages to and from Apache Kafka. It is very straightforward to use once we add the dependency to our pom.xml.
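For example, since this sample uses the plain Kafka client API, the dependency could be the following (if you prefer reactive messaging, quarkus-smallrye-reactive-messaging-kafka is the alternative):

```xml
<dependency>
  <groupId>io.quarkus</groupId>
  <artifactId>quarkus-kafka-client</artifactId>
</dependency>
```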
Connecting your application with the Apache Kafka cluster is as easy as setting the following property in your application.properties file.
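A sketch, assuming a Strimzi-managed cluster (the bootstrap service name is illustrative):

```properties
# Kafka bootstrap servers (host:port list of the cluster brokers)
kafka.bootstrap.servers=my-kafka-cluster-kafka-bootstrap:9092
```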
Apicurio Service Registry provides a serializer/deserializer (SerDe) for Kafka producer and consumer applications. To use it in our application, we must add the following dependency:
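Assuming the Apicurio Registry 1.x SerDe module (the version is illustrative):

```xml
<dependency>
  <groupId>io.apicurio</groupId>
  <artifactId>apicurio-registry-utils-serde</artifactId>
  <version>1.3.2.Final</version>
</dependency>
```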
Producing Messages From Quarkus
Quarkus provides a set of properties and beans to declare Kafka producers that send messages (in our case, Avro schema instances) to Apache Kafka. The most important properties to set up are:
- key.serializer: Identifies the serializer class to serialize the key of the Kafka record.
- value.serializer: Identifies the serializer class to serialize the value of the Kafka record.
Here we have to add some specific values to these properties to enable the serialization process using Avro schemas registered in the Service Registry. Basically, we need to identify the following concepts (see the sketch after this list):
- The serializer class that will use Avro schemas, provided by the Apicurio SerDe module.
- The Apicurio Service Registry endpoint used to validate schemas.
- The Apicurio Service Registry strategy to look up the schema definition.
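A sketch of these values, assuming the Apicurio Registry 1.x SerDe module (class names, property keys, and the endpoint are taken from that module and may differ in other versions):

```properties
# Serializer class provided by the Apicurio SerDe module
value.serializer=io.apicurio.registry.utils.serde.AvroKafkaSerializer
# Service Registry endpoint used to resolve and validate schemas
apicurio.registry.url=http://service-registry:8080/api
# Strategy to look up the schema definition (creates the artifact if missing)
apicurio.registry.global-id=io.apicurio.registry.utils.serde.strategy.GetOrCreateIdStrategy
```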
So a sample definition for a producer bean to send messages could be similar to:
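A minimal sketch, assuming the 1.x SerDe module and the Message class generated by the Avro Maven plugin (the bean name, endpoint, and bootstrap address are illustrative):

```java
import java.util.Properties;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import io.apicurio.registry.utils.serde.AvroKafkaSerializer;

@ApplicationScoped
public class MessageProducerFactory {

    @Produces
    public KafkaProducer<String, Message> createProducer() {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka-cluster-kafka-bootstrap:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // Apicurio serializer that validates records against the registered Avro schema
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AvroKafkaSerializer.class.getName());
        // Service Registry endpoint (property keys per the 1.x SerDe module)
        props.put("apicurio.registry.url", "http://service-registry:8080/api");
        // Strategy to resolve the schema's global ID, creating the artifact if needed
        props.put("apicurio.registry.global-id",
                "io.apicurio.registry.utils.serde.strategy.GetOrCreateIdStrategy");
        return new KafkaProducer<>(props);
    }
}
```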
And you could finally send Message instances (validated against the artifact registered in the Service Registry) to Apache Kafka:
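A sketch of the send call, assuming the topic name matches the artifact ID registered in the Service Registry and the illustrative schema fields shown earlier:

```java
import org.apache.kafka.clients.producer.ProducerRecord;

// Build a record from the Avro-generated Message class
Message message = Message.newBuilder()
        .setContent("Hello Apicurio!")
        .setTimestamp(System.currentTimeMillis())
        .build();

// The topic name ("messages") matches the artifact ID registered in the Service Registry
producer.send(new ProducerRecord<>("messages", message), (metadata, exception) -> {
    if (exception != null) {
        // The record could not be serialized or sent (for example, schema validation failed)
        exception.printStackTrace();
    } else {
        System.out.printf("Record sent to %s-%d@%d%n",
                metadata.topic(), metadata.partition(), metadata.offset());
    }
});
producer.flush();
```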
The message will be serialized with the global ID of the schema used for this record embedded in it. That global ID is how Kafka consumer applications will later resolve the schema when they consume the message.
NOTE: This approach uses the KafkaProducer API; however, Quarkus also includes the Emitter class to send messages more easily. I developed my sample with the first approach to verify that the plain Kafka API is still valid when using Quarkus.
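For reference, a sketch of the Emitter approach (the channel name is illustrative; in older Quarkus versions these types live in io.smallrye.reactive.messaging.annotations rather than the MicroProfile package):

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

import org.eclipse.microprofile.reactive.messaging.Channel;
import org.eclipse.microprofile.reactive.messaging.Emitter;

@ApplicationScoped
public class MessageEmitterBean {

    // Outgoing channel mapped to a Kafka topic via mp.messaging.outgoing.messages.* properties
    @Inject
    @Channel("messages")
    Emitter<Message> emitter;

    public void sendMessage(Message message) {
        emitter.send(message);
    }
}
```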
Consuming Messages From Quarkus
Quarkus also provides a set of properties and beans to declare Kafka consumers that consume messages (in our case, Avro schema instances) from the Apache Kafka cluster. The most important properties to set up are:
- key.deserializer: Identifies the deserializer class to deserialize the key of the Kafka record.
- value.deserializer: Identifies the deserializer class to deserialize the value of the Kafka record.
Here we have to add some specific values to these properties to enable the deserialization process using Avro schemas registered in the Service Registry. Basically, we need to identify the following concepts (see the sketch after this list):
- The deserializer class for the Avro schemas, provided by the Apicurio SerDe module.
- The Apicurio Service Registry endpoint from which to fetch valid schemas.
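Again a sketch, assuming the 1.x SerDe module:

```properties
# Deserializer class provided by the Apicurio SerDe module
value.deserializer=io.apicurio.registry.utils.serde.AvroKafkaDeserializer
# Service Registry endpoint used to fetch schemas by their global ID
apicurio.registry.url=http://service-registry:8080/api
# Deserialize into the generated Message class instead of a GenericRecord
apicurio.registry.use-specific-avro-reader=true
```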
So a sample configuration for a consumer template could be similar to:
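A minimal sketch of the consumer setup, mirroring the producer bean (group ID and addresses are illustrative):

```java
import java.util.Properties;

import javax.enterprise.context.ApplicationScoped;
import javax.enterprise.inject.Produces;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

import io.apicurio.registry.utils.serde.AvroKafkaDeserializer;

@ApplicationScoped
public class MessageConsumerFactory {

    @Produces
    public KafkaConsumer<String, Message> createConsumer() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "my-kafka-cluster-kafka-bootstrap:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "message-consumer-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // Apicurio deserializer that fetches the schema by the global ID in each record
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, AvroKafkaDeserializer.class.getName());
        props.put("apicurio.registry.url", "http://service-registry:8080/api");
        // Map records to the generated Message class instead of GenericRecord
        props.put("apicurio.registry.use-specific-avro-reader", Boolean.TRUE);
        return new KafkaConsumer<>(props);
    }
}
```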
You could declare a listener to consume messages (based on our Message schema) as:
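A sketch of a polling loop (the topic name is illustrative, and getContent() corresponds to the illustrative content field of the schema shown earlier):

```java
import java.time.Duration;
import java.util.Collections;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;

consumer.subscribe(Collections.singletonList("messages"));

while (true) {
    // The deserializer resolves each record's schema from the Service Registry
    ConsumerRecords<String, Message> records = consumer.poll(Duration.ofMillis(500));
    for (ConsumerRecord<String, Message> record : records) {
        System.out.printf("Consumed message: %s%n", record.value().getContent());
    }
}
```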
The deserializer retrieves the schema from the Service Registry using the global ID written into the message being consumed.
Your Quarkus applications integrate easily with Service Registry and Apache Kafka to build your event-driven architecture. Thanks to all these components, you get the following benefits:
- Ensure consistent schema use between your client applications.
- Help to prevent data errors at runtime.
- Define a governance model in your data schemas (versions, rules, validations).
- Easily integrate with client applications and components.
You can explore these components further and adapt or build your event-driven architecture with them, starting with the following reference links:
- Getting started with Service Registry
- First look at the new Apicurio Registry UI and Operator
- How to Use Kafka, Schema Registry and Avro with Quarkus
- Using Avro in a native executable
Show Me the Code
Everything seems great and cool, but if you want to see how it really works, then this GitHub repository is your reference.
Enjoy API eventing!
Quarkus 1.10.5.Final includes the capability to compile the Avro schema classes natively. This command will compile my sample application natively:
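This is the standard Quarkus native build command (the container-build flag is an optional assumption, for building without a local GraalVM installation):

```shell
# Build a native executable; container-build delegates compilation to a builder image
./mvnw package -Pnative -Dquarkus.native.container-build=true
```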
But this is another story for another post!