A pull model for Event Stores
In Domain-Driven Design terminology, an Event Store is a persistence mechanism for Domain Events, which are mainly used to let different Bounded Contexts (applications mapped over one or more subdomains) communicate.
Why an Event Store
Applying events over multiple dependent applications in a single transaction is not a scalable solution: only a single Aggregate, a small group of objects, has to contain consistent data at any time. Consistency between different Aggregates of the same Context, or between multiple Contexts, is eventually guaranteed by publishing events from the source of information; these events travel to the consumers that want to be updated.
When communicating with events that flow in a single direction instead of holding a conversation, applications become decoupled in time, as the instantaneous rate of production may differ from the instantaneous rate of consumption. Only the producing system has to handle spikes in traffic, while subscribers can even be unavailable for certain amounts of time and retrieve the messages meant for them when they come back online.
Biased by these terms, it was simple enough for me to assume that a push-based subscription mechanism has to be put in place. ActiveMQ, ZeroMQ, or another message queue implementation can provide connectors even for applications written in different languages, along with some guarantees on ordering, persistence of messages, and delivery retries:
Bounded Context A -> Message Queue -> Bounded Context B
This scales nicely to multiple consumers by using Topic Queues, where consumers do not compete for messages but all receive the same ones:
Bounded Context A -> Topic Queue -> Bounded Context B -> Bounded Context C
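The fan-out semantics of a Topic Queue can be sketched in a few lines. This is a minimal in-memory illustration, not a real broker; the `TopicQueue` class and the `OrderPlaced` event are hypothetical names chosen for the example.

```python
class TopicQueue:
    """Minimal in-memory topic: every subscriber receives every event."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        # Unlike a competing-consumers queue, all handlers get the event.
        for handler in self.subscribers:
            handler(event)

# Bounded Context A publishes; B and C each receive the same event.
received_b, received_c = [], []
queue = TopicQueue()
queue.subscribe(received_b.append)
queue.subscribe(received_c.append)
queue.publish({"type": "OrderPlaced", "order_id": 42})
```

A real broker such as ActiveMQ adds persistence, ordering guarantees, and redelivery on top of this basic fan-out shape.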
Introducing an Event Store
However, it is possible to separate the Event Store from the delivery mechanism for several reasons: it can serve as an audit log, or even as the main form of state persistence for the application, in an Event Sourcing style:
Bounded Context A -> Event Store -> Topic Queue -> Bounded Context B
The classical example is that of a bank storing all transactions instead of the balances of accounts, since each balance can be reconstructed from the series of events that have happened to it since opening. The Event Store itself can be built over a general-purpose database, or chosen as an off-the-shelf component such as Greg Young's .NET implementation.
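The bank example can be made concrete with a short sketch: the store holds only transaction events, and the balance is derived by replaying them. Event names and amounts are invented for illustration.

```python
# Each event records a change, never a balance; the balance is derived.
events = [
    {"type": "AccountOpened", "amount": 0},
    {"type": "MoneyDeposited", "amount": 150},
    {"type": "MoneyWithdrawn", "amount": 40},
    {"type": "MoneyDeposited", "amount": 25},
]

def replay_balance(events):
    """Fold the event stream into the current account balance."""
    balance = 0
    for event in events:
        if event["type"] == "MoneyDeposited":
            balance += event["amount"]
        elif event["type"] == "MoneyWithdrawn":
            balance -= event["amount"]
    return balance

print(replay_balance(events))  # → 135
```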
The requirements for such a store are:
- handle as many writes per second as possible, since this feature enables fine-grained Events;
- be able to query Events, possibly setting up several streams of events for a particular kind or tag;
- store heterogeneous data, as Event classes differ from each other;
while there are many trade-offs we can make due to the nature of the Domain Events pattern:
- Events are immutable: once inserted, they cannot be modified.
- Each Event insertion does not have to wait for locking on other insertions.
- Events can be published with a specified delay, so we don't have to be immediately consistent.
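The trade-offs above can be sketched as an append-only store: immutable events, lock-free appends, and a publication delay. This is an assumed in-memory design, not a real Event Store implementation; the class and method names are hypothetical.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass(frozen=True)  # frozen: event attributes cannot be reassigned
class Event:
    type: str
    payload: dict
    occurred_at: datetime

class EventStore:
    """Append-only store: inserts never update or lock existing entries."""

    def __init__(self, publish_delay=timedelta(minutes=1)):
        self._events = []
        self._publish_delay = publish_delay

    def append(self, event):
        self._events.append(event)  # no coordination with other rows needed

    def publishable(self, now):
        # Only events older than the delay are exposed to consumers,
        # reflecting the eventual-consistency trade-off described above.
        cutoff = now - self._publish_delay
        return [e for e in self._events if e.occurred_at <= cutoff]
```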
A pull model
It surprised me to learn that pushing events to other Bounded Contexts is not the only integration mechanism, especially for REST-like and RESTful services. The source of this valuable integration pattern is Vaughn Vernon's Implementing Domain-Driven Design.
Given an Event Store that persists all events produced by the applications, the events can be published over two kinds of URLs:
- /notifications/0,19, which is an archived log page for events.
- /notifications/current, which is the current log containing the last occurred events; its range always spans fewer than 20 events (e.g. 640,659).
I'm choosing 20 as an example constant for the number of items on a page. Each representation produced by a GET over these URLs contains several Link HTTP headers:
- Link: <.../notifications/600,619>; rel=previous
- Link: <.../notifications/620,639>; rel=self
- Link: <.../notifications/640,659>; rel=next
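Generating these headers is simple arithmetic over the page boundaries. A minimal sketch, assuming fixed 20-event pages addressed as `/start,end` and a hypothetical base URL:

```python
PAGE_SIZE = 20

def link_headers(first_id, base="https://example.com/notifications"):
    """Build the Link headers for the archive page starting at first_id."""
    def page(start):
        return f"<{base}/{start},{start + PAGE_SIZE - 1}>"

    headers = [f"{page(first_id)}; rel=self"]
    if first_id >= PAGE_SIZE:  # the very first page has no predecessor
        headers.insert(0, f"{page(first_id - PAGE_SIZE)}; rel=previous")
    headers.append(f"{page(first_id + PAGE_SIZE)}; rel=next")
    return [("Link", h) for h in headers]

for name, value in link_headers(620):
    print(f"{name}: {value}")
```

For `first_id=620` this reproduces exactly the three headers listed above.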
Once one of these resources is full (containing 20 events), it becomes immutable and is cacheable for an infinite time, with caching headers set accordingly.
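The caching decision could look like the following sketch: full archive pages never change, so they get a very long lifetime, while the still-growing current page must be revalidated. The one-year `max-age` is an assumption standing in for "infinite", as is the `cache_headers` helper itself.

```python
def cache_headers(page_events, page_size=20):
    """Choose Cache-Control for a notifications page."""
    if len(page_events) == page_size:
        # A full archive page is immutable: cache it "forever"
        # (one year is a common stand-in for infinite in HTTP caching).
        return {"Cache-Control": "public, max-age=31536000"}
    # The current page still receives events: force revalidation.
    return {"Cache-Control": "no-cache"}
```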
With standard HTTP mechanisms such as reverse proxies, or even client-side caches, every application can poll the Event Store very efficiently. If you are in the situation of having too many reporting queries over your databases, an Event Store is a way of turning many of these queries into immutable results that can be cached forever. On the other hand, your consumers will have to perform a little more work, such as persisting the last seen event or resource in order to resume the next poll, and traversing uninteresting events due to the broader granularity of the Event Store APIs. There's no such thing as a free lunch.
Scaling the store
MongoDB is a popular technology for storing heterogeneous data such as lots of events. To scale writes, it uses shards, so that multiple machines can handle inserts instead of a single primary.
However, the problem becomes how to produce the REST resources containing the events. Here is how such a scheme could work, taking advantage of the possibility of providing events asynchronously:
- /notifications/current generates the up_to timestamp, equal to 1 minute ago.
- It considers only data with created_at < up_to in its queries, and counts the total.
- The modulo N = total % 20 gives the number of events to show in the current feed.
- To select the data, the cursor is sorted in descending ObjectId order (an ObjectId contains a timestamp with second precision).
- The links to include as HTTP headers are produced: self is notifications/$LastDocumentIdOfN,20, while previous is notifications/-20,$nextDocumentId. To know $nextDocumentId, N+1 documents have to be selected.
The peculiarity of MongoDB pagination is that you cannot use skip() to start from the Mth document of the collection, as it would need to scan all the previous documents. Providing links to the next and previous pages must work with ObjectIds, which are indexed.
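The indexed alternative to skip() is keyset pagination: select the documents whose _id is smaller than the last one seen, sorted descending, limited to the page size. The sketch below simulates that query shape over plain integers standing in for ObjectIds; the real MongoDB query is shown in the docstring.

```python
PAGE_SIZE = 20

def page_before(ids, last_id, limit=PAGE_SIZE):
    """Keyset pagination, mimicking the MongoDB query
    collection.find({"_id": {"$lt": last_id}}).sort("_id", -1).limit(limit)
    which uses the _id index instead of the O(M) scan behind skip(M)."""
    return sorted((i for i in ids if i < last_id), reverse=True)[:limit]

# 100 synthetic ids standing in for monotonically increasing ObjectIds.
ids = list(range(100))
page = page_before(ids, last_id=60)
# The page holds the 20 ids immediately before 60, newest first.
```

In a real deployment the filter-and-sort happens inside MongoDB on the indexed _id; the in-memory version here only illustrates the selection logic.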
There are several points still to solve in this approach: both sorting and counting are global operations over shards. However, providing a strong caching layer on top of these requests is the strategy to bet on for better performance.