All About Event Emitting Microservices (EEMs)
It is not enough for services to be atomic and isolated. Sometimes, they need to announce their outcome. This is where EEMs come in.
"Microservices" is the new buzzword, and since it took off, I have been following its patterns and approaches. While it has clear benefits when adopted in enterprise architecture, I have been thinking about a variation of it called Event Emitting Microservices (EEMs). You might see where this is going. Yes, I am talking about microservices that always emit an event after completing their atomic job, regardless of whether they succeeded or failed.
I am not saying that this is a new idea. Many smart minds out there have likely already implemented services in this fashion. But wouldn't it be great if this became one of the common microservices patterns or standards? Event processing is everywhere these days, and emitted events are exactly what feed streaming analytics. This approach would play a role there, too. Beyond that, it is not enough for services to be atomic and isolated. Sometimes, they need to announce their outcome. They have to be expressive in nature, too.
Now, let's see what we can achieve by incorporating the event-emitting nature into microservices.
- We can overcome the lack of transaction management in SOA and RESTful designs.
- Multiple subscribers can consume the same event at the same time and cascade their own event chains.
- We can perform data analysis for predictions, proactiveness, business monitoring, transaction monitoring, etc.
What do I mean by the above three? Let's go into more detail.
Transaction Management
It is a known fact that service calls over HTTP are not transactional in nature. For example, RESTful invocations cannot be wrapped in a typical transaction where we can easily commit or roll back a couple of service calls. That is why SOA and REST designs always carry some transactional gaps, no matter how well the architects design them. To overcome this, giants like IBM and Microsoft, along with OASIS, have come up with web service standards such as WS-TX (Web Services Transaction), WS-ReliableMessaging, and WS-Addressing to save SOA and make it robust.
Unfortunately, not many organizations incorporate these standards in their designs, for many valid reasons. Standards like reliable messaging and addressing require features to be enabled on both the provider and consumer sides. That may mean changing how the providers are implemented, which some departments in an organization will resist because changes to existing web services can disturb the ecosystem. In that case, new web services have to be created to support the need.
To avoid this and proceed with plain service calls, capabilities like resubmission or automatic retry get pulled into the enterprise design. Though they are useful in some scenarios, in my experience they cause more chaos than benefit.
So, how can EEMs overcome this transaction issue?
Basically, this is an event-driven architecture (EDA) approach that guarantees any given job will eventually be processed successfully and continue forward. In other words, if any step fails while running a process, it is retried until it succeeds, and the flow then continues with the subsequent steps. This is not the typical retry behavior we usually see outside of BPM engines. Because job completion is guaranteed, there is no need to roll back the previous steps.
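The retry-until-success idea can be sketched in a few lines. This is a minimal in-process illustration, not a production retry framework; the helper name `run_with_retry` and its parameters are my own, not from any library.

```python
import time

def run_with_retry(step, payload, max_attempts=3, delay=0.0):
    """Retry a step until it succeeds (up to max_attempts), so the
    overall job never needs to roll back earlier steps."""
    last_error = None
    for attempt in range(max_attempts):
        try:
            return step(payload)
        except Exception as exc:  # in practice, catch specific, transient errors
            last_error = exc
            time.sleep(delay)     # back off before the next attempt
    raise RuntimeError(f"step failed after {max_attempts} attempts") from last_error
```

A real deployment would persist the retry state (e.g. in a durable queue) so the job survives process crashes; the in-memory loop above only shows the control flow.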
To make this possible, the microservices should not be invoked from a single composite service, because a retry or resubmission framework would then struggle to resume execution from the point of failure. Instead, they should be invoked in an event-driven fashion. For example, say three service invocations are needed to accomplish a job. We call Service1 directly. If Service1 is an EEM, it emits an event stating its success or failure, and in either case the event carries the relevant details: the success event holds the outcome results where applicable, and the failure event holds the error details.
Subscribers to these events then perform the relevant action. For example, on success, the subscribed service invokes Service2 using the outcome of Service1. And this continues.
This way, there is no need for transaction management, as there is a clear way to continue the flow from any point of failure. The events cascade the service invocations.
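The Service1-to-Service2 cascade described above can be sketched with a tiny in-memory event bus. The `EventBus` class and the event names are hypothetical stand-ins for a real broker and topic scheme; only the pattern (emit on completion, subscribe to continue) comes from the article.

```python
from collections import defaultdict

class EventBus:
    """Tiny in-memory event bus standing in for a real message broker."""
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, handler):
        self._subscribers[event_name].append(handler)

    def emit(self, event_name, payload):
        for handler in self._subscribers[event_name]:
            handler(payload)

bus = EventBus()
results = []

def service1(order_id):
    # Do the atomic job, then always emit an event carrying the outcome.
    bus.emit("service1.success", {"order_id": order_id, "total": 100})

def on_service1_success(event):
    # Subscriber continues the flow using Service1's outcome (Service2 here).
    results.append(("service2", event["order_id"], event["total"]))

bus.subscribe("service1.success", on_service1_success)
service1("ORD-1")
```

In a real system the bus would be a durable broker, so a failed subscriber could reprocess the event from the point of failure instead of restarting the whole job.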
There is one limitation with this approach. It fits most current use cases, but if a use case is strictly interactive, then responses are expected immediately, and they should be prompt. Such cases have to be handled separately. Use cases that must be synchronous in nature have no option but transaction management; they have to adopt one of the popular transaction management approaches for SOA and RESTful services, such as TCC (Try, Confirm, or Cancel). You can get more information on this here.
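For the synchronous cases, the TCC shape mentioned above can be sketched as follows. This is only an illustrative coordinator under my own assumed interface (`try_`, `confirm`, `cancel` methods), not a reference TCC implementation.

```python
class Participant:
    """A TCC participant that records its lifecycle calls in a shared log."""
    def __init__(self, name, fail=False, log=None):
        self.name, self.fail = name, fail
        self.log = log if log is not None else []

    def try_(self):
        if self.fail:
            raise RuntimeError(f"{self.name}: try failed")
        self.log.append((self.name, "try"))

    def confirm(self):
        self.log.append((self.name, "confirm"))

    def cancel(self):
        self.log.append((self.name, "cancel"))

def run_tcc(participants):
    """Try every participant; confirm all on success, otherwise
    cancel (compensate) the ones that were already tried."""
    tried = []
    try:
        for p in participants:
            p.try_()
            tried.append(p)
    except Exception:
        for p in reversed(tried):
            p.cancel()
        return False
    for p in tried:
        p.confirm()
    return True
```

The key contrast with the event-driven flow: TCC must reserve resources up front (the try phase) precisely because the caller is waiting synchronously and cannot rely on eventual completion.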
Concurrency and Multi-Tasking
This is fairly self-explanatory. As you can see, there is an opportunity to have multiple subscribers for an event, so mutually exclusive services or use cases can all subscribe to an event they are interested in and continue their flows in parallel. This introduces concurrency into the design where necessary.
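The fan-out described here is just several independent handlers on one event. A minimal sketch, with hypothetical subscriber names (auditing, notification, metrics) chosen for illustration:

```python
handlers = []

def subscribe(handler):
    """Register a handler; used as a decorator for readability."""
    handlers.append(handler)
    return handler

audit_log, notifications, metrics = [], [], []

@subscribe
def audit(event):
    audit_log.append(event["id"])

@subscribe
def notify(event):
    notifications.append(f"order {event['id']} shipped")

@subscribe
def measure(event):
    metrics.append(1)

def publish(event):
    # Every interested subscriber sees the same event independently.
    for handler in handlers:
        handler(event)

publish({"id": "ORD-7", "type": "order.shipped"})
```

With a real broker each subscriber would consume from its own subscription and run truly in parallel; the loop above only shows that the subscribers are mutually independent.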
Data Analytics and Predictions
We are living in a world of data. Until now, we have seen lots of approaches in development, like Test-Driven Development (TDD) and Domain-Driven Design (DDD). But now, the new trend is data-driven development. Every organization wants to learn from the data it generates.
I read a nice analogy somewhere regarding this: in current IT, data is the oil fed to so-called analytic engines to produce an energy called "information." And what is information? Information is wealth. So data-driven development is about capturing all of the data about what is happening inside and around our systems. EEMs support that, too. All of the emitted events can be fed to a powerful messaging system like Apache Kafka, which can route them to a distributed file system like HDFS. From there, a big data framework takes over.
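Feeding events into such a pipeline mostly means giving each one a consistent, flat shape. The sketch below serializes events as newline-delimited JSON, a common landing format for brokers and data lakes; the record fields and the `StringIO` sink are my own stand-ins, not a Kafka API.

```python
import io
import json
from datetime import datetime, timezone

def to_event_record(service, status, outcome):
    """Shape an emitted event as a flat JSON record, ready to be
    published to a broker such as Kafka and landed in a data lake."""
    return json.dumps({
        "service": service,
        "status": status,          # success or failure, always emitted
        "outcome": outcome,        # results on success, error details on failure
        "emitted_at": datetime.now(timezone.utc).isoformat(),
    })

# Stand-in for the broker/HDFS sink: newline-delimited JSON.
sink = io.StringIO()
sink.write(to_event_record("service1", "success", {"total": 100}) + "\n")
sink.write(to_event_record("service2", "failure", {"error": "timeout"}) + "\n")
```

Because both success and failure events land in the same feed, the downstream analyzers can compute failure rates and transaction health without any extra instrumentation in the services.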
Overall, giving a microservice an event-emitting nature has real benefits, and in my opinion, making it part of the design from the initial stages does only good. Some of the advantages above are largely inherited from EDA, but they add even more value when incorporated into microservices: we get fine-grained control over what to do with the emitted events.
You need not worry about factors like the velocity, volume, and variety of these events. We are in the big data world now, and these "three Vs" can be handled by any popular big data framework. Whether to process an event or ignore it is handled by the filters and analyzers that are part of those frameworks.
I hope this makes some sense. I am open to hearing about any improvements that could make this better, or any limitations that I overlooked.
Published at DZone with permission of Prasad Pokala, DZone MVB. See the original article here.