When I tell someone that I work for Red Hat, they ask me, “Ah! Are you working on Linux?”
No, no, no, and… no! I’m not a Linux guy. I’m not a fanboy; I’m just a daily user.
A lot of people know Red Hat through our enterprise Linux distribution, Red Hat Enterprise Linux (RHEL), but Red Hat is much more than Linux today. Its portfolio is huge: cloud and containers with the OpenShift effort, microservices with Vert.x, WildFly Swarm, and Spring Boot, and the IoT world through our involvement in Eclipse Foundation projects.
The objective of this blog is to showcase and talk about the projects I’ve worked on (or am working on) since I was hired last year, on March 1. They are not “my” projects; they are projects I’m involved in. The entire team works on them. Collaboration. You know.
And this might surprise you, but... there is no Linux! I’m on the messaging and IoT team, so you will see only projects related to messaging and IoT.
AMQP: Apache Spark Connector
Because the messaging team works mainly on projects like ActiveMQ Artemis and the Qpid Dispatch Router, where the main protocol is AMQP 1.0, the idea was to develop a connector for Spark Streaming in order to ingest data through this protocol — so from queues/topics on a broker or through the router in a direct messaging fashion.
You can find the component here and even an IoT demo here, which shows how it’s possible to ingest data through AMQP 1.0 using the EnMasse project (see below) and then execute real-time streaming analytics with Spark Streaming, all running on Kubernetes and OpenShift.
AMQP: Apache Kafka Bridge
Apache Kafka is one of the best technologies available today for ingesting data with high throughput (e.g. in IoT-related scenarios). In this case, too, the idea was to provide a way for AMQP 1.0 clients and JMS clients to push messages to Apache Kafka topics without having to speak Kafka’s custom protocol.
This way, if you have such clients because you are already using a broker, but you need some specific Kafka features (e.g. re-reading streams), you can switch the messaging system from the broker to Kafka and, thanks to the bridge, you don’t need to update or modify the clients. I showed how this is possible at the Red Hat Summit as well, and the related demo is available here.
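To give an idea of what such a bridge has to do, here’s a minimal Python sketch of one plausible addressing convention (hypothetical, and not necessarily the one the actual bridge uses): an AMQP address maps to a Kafka topic, optionally pinning a specific partition.

```python
def amqp_address_to_kafka(address):
    """Map an AMQP address to a (topic, partition) pair.

    Hypothetical convention: "mytopic" targets the whole topic, while
    "mytopic/partitions/2" pins messages to partition 2.
    """
    parts = address.split("/")
    if len(parts) == 3 and parts[1] == "partitions" and parts[2].isdigit():
        return parts[0], int(parts[2])
    return address, None  # no partition given: let Kafka's partitioner decide

print(amqp_address_to_kafka("telemetry"))               # ('telemetry', None)
print(amqp_address_to_kafka("telemetry/partitions/2"))  # ('telemetry', 2)
```

The real bridge has to do a lot more than this (credit handling, offset management, message conversion), but the core idea is exactly this kind of mapping between the two addressing models.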
MQTT on EnMasse
EnMasse is an open source messaging platform with a focus on scalability and performance. It can run on your own infrastructure (on premise) or in the cloud and simplifies the deployment of messaging infrastructures.
It’s based on other open source projects like ActiveMQ Artemis and the Qpid Dispatch Router, and it supports the AMQP 1.0 protocol natively.
In order to support the MQTT protocol, we designed a way to carry “MQTT over AMQP,” thus bringing MQTT features to the AMQP protocol. From the design, we moved on to developing two main components:
- The MQTT gateway, which handles connections with remote MQTT clients, translating all messages from MQTT to AMQP and vice versa.
- The MQTT LWT (Last Will and Testament) service, which provides a way to notify all clients connected to EnMasse that another client has died suddenly, by sending them its “will message.” The great thing about this service is that it works with pure AMQP 1.0 clients, so it brings the LWT feature to AMQP as well. For this reason, the team is thinking of renaming it the AMQP LWT service.
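As a concrete illustration of where the “will message” comes from: an MQTT client hands it to the server at connect time, inside the CONNECT packet. Here’s a minimal stdlib-only Python sketch of an MQTT 3.1.1 CONNECT packet carrying a will topic and message (the function names are mine; a real client delegates all of this to an MQTT library):

```python
import struct

def _mqtt_str(b: bytes) -> bytes:
    # MQTT strings are prefixed with a 2-byte big-endian length
    return struct.pack(">H", len(b)) + b

def connect_with_will(client_id: str, will_topic: str, will_msg: bytes,
                      keepalive: int = 60) -> bytes:
    """Build a minimal MQTT 3.1.1 CONNECT packet carrying a will (LWT)."""
    flags = 0x02 | 0x04  # clean session + will flag (will QoS 0, not retained)
    variable_header = (_mqtt_str(b"MQTT")            # protocol name
                       + bytes([0x04, flags])        # protocol level 4 + flags
                       + struct.pack(">H", keepalive))
    payload = (_mqtt_str(client_id.encode())
               + _mqtt_str(will_topic.encode())      # will topic, then will message
               + _mqtt_str(will_msg))
    body = variable_header + payload
    assert len(body) < 128, "this sketch only encodes a single-byte remaining length"
    return bytes([0x10, len(body)]) + body           # 0x10 = CONNECT packet type

pkt = connect_with_will("sensor-1", "clients/sensor-1/dead", b"offline")
```

The server stores the will topic and message at connect time; if the client then disappears without a clean disconnect, the infrastructure (in EnMasse’s case, the LWT service) publishes the will message on the dead client’s behalf.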
EnMasse is great for IoT scenarios, handling a huge number of connections and ingesting a lot of data using the AMQP and MQTT protocols. I used it in all my IoT demos to show how to integrate it with streaming and analytics frameworks. It’s also the main choice as the cloud messaging infrastructure for the Eclipse Hono project.
Vert.x and the IoT Components
Vert.x is a great toolkit for developing reactive applications running on a JVM.
The Reactive Manifesto fits IoT scenarios really well: responsiveness, resilience, elasticity, and message-driven communication are the pillars of IoT solutions.
While starting work on the MQTT gateway for EnMasse using Vert.x, I decided to develop an MQTT server that just handles communication with remote clients, providing an API for interacting with them. This component was used for bridging MQTT to AMQP (in EnMasse) but can be used in any scenario where some sort of protocol translation or integration is needed (e.g. MQTT to the Vert.x Event Bus, to Kafka, etc.). Be aware, though: it’s not a full broker!
The other component is the Apache Kafka client, mainly developed by Julien Viet (lead on Vert.x) and then handed over to me as maintainer, to improve it and add new features starting from the first release.
Finally, thanks to Google Summer of Code, during the last two months I have been mentoring a student who is developing a Vert.x-native MQTT client.
As you can see, the Vert.x toolkit is really growing from an IoT perspective, in addition to providing a lot of useful components for developing pure microservices-based solutions.
Eclipse Hono
Eclipse Hono is a project under the big Eclipse IoT umbrella in the Eclipse Foundation. It provides a service interface for connecting large numbers of IoT devices to a backend and interacting with them in a uniform way, regardless of the device communication protocol.
It supports scalable and secure ingestion of large volumes of sensor data by means of its Telemetry API. The Command and Control API allows for sending commands (request messages) to devices and receiving a reply to such a command from a device asynchronously in a reliable way.
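The asynchronous request/reply in Command and Control hinges on correlation: the reply must carry an identifier that lets the backend match it to the original command. Here’s a minimal Python sketch of the pattern itself (purely illustrative; this is not Hono’s actual wire format, and the field names are mine):

```python
import uuid

def make_command(name: str, payload: bytes) -> dict:
    # each command request gets a unique correlation id the device must echo back
    return {"subject": name,
            "correlation_id": str(uuid.uuid4()),
            "body": payload}

def make_reply(command: dict, status: int, body: bytes = b"") -> dict:
    # the device's reply echoes the correlation id so the sender can match
    # it to the pending request, even if replies arrive out of order
    return {"correlation_id": command["correlation_id"],
            "status": status,
            "body": body}

cmd = make_command("setBrightness", b"\x42")
reply = make_reply(cmd, 200)
assert reply["correlation_id"] == cmd["correlation_id"]
```

In AMQP 1.0 this pattern maps naturally onto the standard `correlation-id` and `reply-to` message properties, which is one reason AMQP is a good fit as Hono’s underlying protocol.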
This project is mainly developed by Red Hat and Bosch. I helped design all the APIs, in addition to implementing the MQTT adapter, which, in this case too, uses the Vert.x MQTT server component.
Because Eclipse Hono works on top of a messaging infrastructure for exchanging messages, the main choice was ActiveMQ Artemis and the Qpid Dispatch Router, also running them on Kubernetes and OpenShift with EnMasse.
Apache Kafka on OpenShift
Finally, I was involved in developing a POC named “barnabas” (after a messenger character in a Franz Kafka novel) to get Apache Kafka running on OpenShift.
Considering the stateful nature of a project like Kafka, I started at a time when Kubernetes didn’t yet offer the StatefulSets feature, building something similar by myself. Today, the available deployment is based on StatefulSets, and it’s a work in progress that I’ll keep pushing to the next level.
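For context, the core of such a deployment is a StatefulSet, which gives each Kafka broker a stable network identity and its own persistent volume. A minimal sketch of what that looks like (illustrative only, not the actual barnabas manifests; the image name is a placeholder):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kafka
spec:
  serviceName: kafka-headless   # headless Service gives stable kafka-0, kafka-1, ... DNS names
  replicas: 3
  selector:
    matchLabels:
      app: kafka
  template:
    metadata:
      labels:
        app: kafka
    spec:
      containers:
        - name: kafka
          image: "<your-kafka-image>"   # placeholder
          ports:
            - containerPort: 9092
          volumeMounts:
            - name: data
              mountPath: /var/lib/kafka
  volumeClaimTemplates:           # each broker gets its own PersistentVolumeClaim
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

The stable identities and per-broker storage are exactly what I had to emulate by hand before StatefulSets existed.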
Apache Kafka is a really great project that has its own use cases in the messaging world, and today it’s even more powerful thanks to the new Streams API, which lets you execute real-time streaming analytics over topics in your cluster by running simple applications. My next step is to move my EnMasse + Spark demo to an EnMasse + Kafka (and Streams) deployment. I’m also contributing to the Apache Kafka code itself.
The variety and heterogeneity of all the above projects are a lot of fun, especially because I get to collaborate with different people with different backgrounds. I like learning new stuff, and the great thing is that the things to learn are endless!