Event-Driven Architecture: 5 More Myths
Event-driven architecture is surrounded by claims. This article examines EDA myths that are common in the industry when designing microservices and business transactions.
In a previous post, I discussed what event-driven architecture (EDA) is and the common claims associated with it. Since EDA is not a new concept and has been around in the industry for over 20 years now (yes, that's right!), it has accumulated a lot of claims and, over the years, some of these claims have been busted as myths or proven as facts.
In this post, I will discuss 5 more claims about EDA and provide an argument as to whether each is a myth or a fact. Let's get started!
1. Changes in EDA Systems Can Lead to Cascading Failures
Event chaining is very common in EDA systems, whether in a microservices architecture where one microservice triggers an event consumed by multiple other microservices, or in operational use cases like e-commerce solutions, supply chain, or activities in the financial industry. This phenomenon is known as the pinball machine effect, where one event triggers a chain of events with varying side effects. As you can see, the pinball machine effect intensifies drastically as architectural complexity increases. Embarking on a digital transformation journey to become more event-driven and real-time without awareness of this issue will result in some serious technical debt.
Leveraging appropriate tooling and design principles in advance will help alleviate this issue. One way to avoid the pinball machine effect is to do the following:
- Inspect the events in the system at runtime
- Maintain an extensive documentation strategy that breaks down all the events, topic subscriptions and publications, and the payload of every event
Both of those approaches come with their own set of challenges, along with a training strategy that needs to be put in place for every new person joining the team. Another way to deal with the pinball machine effect is to leverage an event portal that acts as a one-stop shop for managing an event-driven system. An event portal lets you scan a live system; visualize the events, topics, and applications; and get a high-level overview of the overall EDA system, showing how changes could cascade within application domains or across interconnected business units.
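To make the pinball machine effect concrete, here is a minimal sketch in plain Python (the in-memory broker and all event names are hypothetical, invented purely for illustration) in which one published event fans out into a chain of downstream events:

```python
from collections import defaultdict

# Hypothetical in-memory broker, used only to illustrate event chaining.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, payload, depth=0):
    print("  " * depth + f"event: {topic} {payload}")
    for handler in subscribers[topic]:
        handler(payload, depth + 1)

# Each handler reacts to one event by publishing more events --
# the "pinball machine" chain.
subscribe("order/created",     lambda p, d: publish("payment/requested", p, d))
subscribe("payment/requested", lambda p, d: publish("payment/settled", p, d))
subscribe("payment/settled",   lambda p, d: publish("shipment/scheduled", p, d))
subscribe("payment/settled",   lambda p, d: publish("invoice/issued", p, d))

publish("order/created", {"order_id": 42})  # one publish, five events
```

Notice that a single new subscription anywhere in this chain silently lengthens it; that hidden coupling is exactly what an event portal is meant to make visible.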
2. To Have an Event Streaming System, Apache Kafka Must Be Used
It makes the most sense to use Apache Kafka for what it was designed for: a distributed commit log built for aggregating log data and streaming it to analytics engines and big data repositories. EDA is a bigger umbrella term that covers more generic use cases beyond streaming, including operational use cases. These operational use cases have very different characteristics from streaming analytics and require tools that are useful beyond streaming analytics and log data.
Advanced, modern event brokers take into account other behaviors required in EDA, such as complex topic filtering for streamed data, flexible message routing, high availability, disaster recovery, and security, to name a few. I recommend this blog post, which covers when to use Apache Kafka versus other message brokers depending on the use case: Why You Need to Look Beyond Kafka for Operational Use Cases.
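To give a rough sense of what complex topic filtering means, here is a sketch in plain Python. The wildcard syntax (`*` for exactly one level, `>` for one or more trailing levels) is loosely modeled on hierarchical-topic brokers and is an assumption for illustration, not any specific product's API:

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Match a hierarchical topic against a subscription where
    '*' matches exactly one level and '>' matches all remaining levels."""
    sub_parts = subscription.split("/")
    top_parts = topic.split("/")
    for i, part in enumerate(sub_parts):
        if part == ">":
            return len(top_parts) > i   # '>' must cover at least one level
        if i >= len(top_parts):
            return False
        if part not in ("*", top_parts[i]):
            return False
    return len(sub_parts) == len(top_parts)

# One subscription filters a whole family of events at the broker:
assert topic_matches("orders/*/created", "orders/eu/created")
assert topic_matches("orders/>", "orders/eu/created")
assert not topic_matches("orders/*/created", "orders/eu/cancelled")
```

Doing this filtering in the broker, rather than in every consumer, is one of the behaviors that distinguishes operational event brokers from a log-centric system.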
3. EDA Is Not for Low-Throughput Use Cases
At its core, the benefits of using an EDA system include:
- Decoupling microservices
- Implementing real-time processing
- Distribution of responsibility
- Effective alert management across business units about situations that demand instant attention
As you can see, there is no strict dependency on high-throughput use cases. While using EDA for high-throughput processing has its benefits, there are other use cases where a streaming architecture is helpful. Another low-throughput use case where EDA could be a good architectural decision is implementing command query responsibility segregation (CQRS): for example, sending command and control messages to edge devices in manufacturing plants located in different geographical locations.
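Here is a minimal sketch of CQRS in an event-driven style, in plain Python (the command, event, and device names are hypothetical): the command side validates input and records facts as events, and the query side is a projection folded purely from those events.

```python
# Minimal CQRS sketch: the write side emits events; the read side
# is a projection built only from those events.
events = []       # event log (stands in for the broker/topic)
read_model = {}   # query-side projection: device_id -> last setpoint

def handle_command(device_id: str, setpoint: float) -> None:
    """Command side: validate, then record the change as an event."""
    if setpoint < 0:
        raise ValueError("setpoint must be non-negative")
    events.append({"type": "SetpointChanged",
                   "device_id": device_id, "setpoint": setpoint})

def project(event: dict) -> None:
    """Query side: fold each event into the read model."""
    if event["type"] == "SetpointChanged":
        read_model[event["device_id"]] = event["setpoint"]

handle_command("plant-a/press-7", 42.5)
for e in events:
    project(e)
print(read_model)  # {'plant-a/press-7': 42.5}
```

Because the read model is derived entirely from events, it can live at the edge and be rebuilt after a disconnect, which is part of what makes the pattern attractive for geographically distributed plants.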
The benefits of adopting a streaming architecture, as opposed to a REST-based implementation, include meeting business objectives like reducing cost, improving customer experience, and increasing corporate agility, and these objectives are not always tied to high-throughput use cases.
4. A Dynamic Hierarchical Topic Structure for Sending Events Is Challenging to Achieve
It is a topic design best practice to include business operations or properties in the topic hierarchy. Predicting how your organization's architecture and business needs will evolve over time is challenging, and a static, detailed topic hierarchy adds to the challenge through a trickle-down effect that impacts every application directly coupled to the topic when things change. To avoid challenges around choosing the right topic hierarchy, topic versioning and documentation are crucial. A good communication strategy among all the stakeholders consuming the topic, whether that documentation lives in Excel sheets, internal documentation, or organizational wiki pages such as Confluence, is also important.
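As one illustrative sketch (the segment names and their order are assumptions, not a standard), embedding a version segment in the hierarchy lets the structure evolve without breaking subscribers that are coupled to the old shape:

```python
# Hypothetical topic-naming helper; the segments are business properties
# (domain, version, region, event type) chosen purely for illustration.
def order_topic(version: str, region: str, event_type: str) -> str:
    return f"acme/orders/{version}/{region}/{event_type}"

print(order_topic("v1", "eu", "created"))  # acme/orders/v1/eu/created
# Subscribers to "acme/orders/v1/>" keep working when v2 is introduced:
print(order_topic("v2", "eu", "created"))  # acme/orders/v2/eu/created
```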
Beyond designing a topic hierarchy, you will also need to build, manage, and govern the topic structure and ensure that the necessary alerting is in place for affected applications and microservices. Consolidating everything related to your EDA, including the topic hierarchy, in an event portal that acts as a one-stop shop for all things EDA is the first step to alleviating challenges like this.
5. EDA Can Prevent Cascading Failures in Microservices Architecture
When compared to a REST-centric microservices architecture, an event-driven architecture prevents an error on the consuming end from cascading back to the producer. In a REST-based architecture, the request-response nature of synchronous interactions implies coupling that results in cascading failures. In EDA, having an event broker in the mix allows messages to be enqueued, so one consumer's inability to receive or process an event will not cascade back to the producer. With the ability to queue up messages, a subscriber can resume message processing after recovering from a failure, and the system eventually reaches consistency. An event broker insulates the producer of messages from the consumers and, in turn, eliminates the possibility of message loss in the case of temporary service failures, redeploys, or downtime in any of the services.
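The following minimal sketch (plain Python, with an in-memory queue standing in for the broker; the names are hypothetical) shows that insulation in action: the producer keeps publishing while the consumer is down, and the consumer drains the backlog once it recovers:

```python
from collections import deque

# In-memory stand-in for a broker-managed queue.
queue = deque()
consumer_up = False

def publish(event: dict) -> None:
    queue.append(event)   # the producer never blocks on the consumer

def drain() -> None:
    while consumer_up and queue:
        print("processed", queue.popleft())

publish({"order_id": 1})
publish({"order_id": 2})  # consumer is down; events simply queue up

consumer_up = True        # consumer recovers...
drain()                   # ...and catches up, restoring consistency
```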
BONUS: Implementing a Business Workflow in an EDA Way Is More Complex Than Using a Workflow Tool Like Camunda
Thanks to DZone user Frank Baier, who mentioned this myth in Part 1 of the series. According to Frank, business transactions map naturally onto the way EDA concepts behave. For example, a workflow in which a customer-side update results in multiple backend events to handle that update maps directly to a broker-centric architecture built on publishing and subscribing to events. Frank also mentioned saga transaction patterns, which advocate the use of a message broker to handle the complexity of business workflows.
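As a sketch of the choreography style of saga being alluded to (plain Python; the event names and the failure condition are invented for illustration), each step reacts to the previous event, and a failure publishes a compensating event instead of rolling back a distributed transaction:

```python
# Choreography-based saga sketch: each handler consumes one event and
# publishes the next; "PaymentFailed" triggers a compensating event.
def handle(event: dict):
    kind = event["type"]
    if kind == "OrderPlaced":
        ok = event["amount"] <= 100  # hypothetical failure condition
        return {"type": "PaymentTaken" if ok else "PaymentFailed",
                "order": event["order"]}
    if kind == "PaymentTaken":
        return {"type": "OrderShipped", "order": event["order"]}
    if kind == "PaymentFailed":
        return {"type": "OrderCancelled", "order": event["order"]}  # compensation
    return None  # terminal event: the saga is complete

event = {"type": "OrderPlaced", "order": 7, "amount": 250}
while event:
    print(event["type"])
    event = handle(event)
# OrderPlaced -> PaymentFailed -> OrderCancelled
```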
Do you have OTHER claims that you would like me to bust or confirm? Comment down below and I would be more than happy to hear your input and thoughts on this.