Event Sourcing in .NET Core: A Gentle Introduction


In this article, we dive into theory behind Event Sourcing and how to implement it in a basic application with .NET Core.


Event sourcing, aka "the great myth". I've been thinking about writing a series of articles about this for a while, and now it's time to put my hands back on the keyboard. 

I thought this long period of confinement would at least give me more time to write some nice articles, but it turns out reality has been slightly different so far.

Anyway, let's get back on track! Event sourcing. It's probably one of the hardest things to code, immediately after two other things.

Everything that happens around us is an event of some sort. The cake is ready in the oven. The bus has arrived at the stop. Your cellphone's battery runs out. And for every event, there might be zero or more actors reacting to it. Cause and effect, we could say.

So how does that translate for us? Event sourcing, at its heart, means storing all the events occurring in our system in a time-ordered fashion. All of our write operations append to a log-like persistent store, and that's it. Events can only be appended, never updated or deleted.

Then what? How do we query our data? Here we get the reaction part. 

Event sourcing has a very important prerequisite: CQRS (Command Query Responsibility Segregation). All the read operations have to be performed on a different datastore, which is in turn populated by the appropriate event handlers.

I know it might sound a bit complex (and actually it is), so let's try with an example. 

Imagine you're writing the software for a bank. The system can:

  1. Create customers.
  2. Create accounts for the customers.
  3. Withdraw money from an account.
  4. Deposit money on an account.

Armed with this info, we can start modeling our commands:

  1. Create a customer.
  2. Create an account for a customer.
  3. Withdraw money from an account.
  4. Deposit money on an account.
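As a sketch, these commands could be modeled as simple immutable types, for example C# 9 records. The names and fields below are assumptions for illustration, not the demo's actual code:

```csharp
using System;

// Hypothetical command definitions; names and shapes are illustrative,
// not taken from the demo sources.
public record CreateCustomer(Guid CustomerId, string FirstName, string LastName);
public record CreateAccount(Guid AccountId, Guid CustomerId, string CurrencyCode);
public record Withdraw(Guid AccountId, decimal Amount);
public record Deposit(Guid AccountId, decimal Amount);
```

Records give us value equality and immutability for free, which fits commands nicely: a command is just a data carrier describing the intent.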

We'll keep it simple and won't dwell much on domain-specific details like currency conversion and the like, although DDD is another aspect essential to our success (and one we've already discussed on my blog).

Let's see our queries now:

  1. Archive of customers, each with the number of open accounts.
  2. Customer details with the list of accounts, each with its balance.
  3. List of transactions on an account.
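These read models could be shaped roughly like plain DTOs, one per query. Again, every name here is a hypothetical sketch, not the demo's code:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical query models, one per read use-case.
public class CustomerArchiveItem
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public int AccountsCount { get; set; }
}

public class CustomerDetails
{
    public Guid CustomerId { get; set; }
    public string Name { get; set; }
    public IList<AccountSummary> Accounts { get; set; } = new List<AccountSummary>();
}

public class AccountSummary
{
    public Guid AccountId { get; set; }
    public decimal Balance { get; set; }
}

public class AccountTransaction
{
    public Guid AccountId { get; set; }
    public decimal Amount { get; set; }
    public DateTime OccurredAt { get; set; }
}
```

Note how each model is denormalized around a single query: the archive item carries a precomputed AccountsCount instead of requiring a join.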

At 10,000 feet, the system looks more or less like this:

Events get pushed into the Write side, which basically does two things: it appends them to the event store and publishes the corresponding integration events.

Eventually, the integration events will be captured and consumed by the relative handlers on the Query side, materializing all the Query Models our system needs.

Now, why in the world would one even think about implementing a system like this? Well, there are quite a few good reasons.

Keeping track of what happens in an append-only store allows us to replay events and rebuild the state of our domain models at any time. In case something bad occurs, we have an almost immediate way to understand what went wrong and, possibly, how to fix it.

Performance and scalability. The Query Models can be built with whatever technology fits the needs: data can be persisted in a relational database, in a NoSQL one, or even as plain HTML files. Whatever is faster and better suited to the job. Moreover, if the business needs change, we can quickly adapt and generate completely new forms of the models.

Moreover, the Query DBs can be wiped out and repopulated from scratch by simply replaying all the events. This lets us avoid potentially problematic things like migrations, or even backups: all you have to do is run the events again and you get the models back.
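To make "replaying" concrete, here is a minimal, self-contained sketch of rebuilding an account balance by folding over its ordered events. The event types are hypothetical, not the demo's:

```csharp
using System;
using System.Collections.Generic;

public record DepositPerformed(decimal Amount);
public record WithdrawalPerformed(decimal Amount);

public static class EventReplay
{
    // Fold the ordered event stream into the current state (here, a balance).
    public static decimal RebuildBalance(IEnumerable<object> orderedEvents)
    {
        decimal balance = 0m;
        foreach (var evt in orderedEvents)
        {
            balance = evt switch
            {
                DepositPerformed d => balance + d.Amount,
                WithdrawalPerformed w => balance - w.Amount,
                _ => balance // unrelated events leave the state untouched
            };
        }
        return balance;
    }
}
```

The same fold, pointed at a different projection function, is what populates each Query Model.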

So where's the catch? Well, there are a few drawbacks as well. We'll talk about them in another post of this series. 

Now, let's see how we can start storing events in our system. As usual, I have prepared a small demo, modeled around the banking example I depicted before. Sources are available here.

We’re trying to write a system that appends events to a log-like persistent storage using a CQRS approach. Query models are stored in separate storage and built at regular intervals or every time an event occurs.

Events can be used for various reasons, like tracing the activity on the platform or rebuilding the state of the domain models at any specific point in time.

There are several options for storing events: we could use a single massive table in a SQL DB, a collection in a NoSQL one, or a specialized ad-hoc system.

For this demo, I decided to go for the latter and give EventStore a chance. From its home page:

Event Store is an industrial-strength event sourcing database that stores your critical data in streams of immutable events. It was built from the ground up for event sourcing.

It has decent documentation, a good community, and was created by the legend Greg Young. For those who don't know him: he coined the term "CQRS". I guess that's enough.

Now, in our example we have these requirements:

  1. Create customers.
  2. Create accounts for the customers.
  3. Withdraw money from an account.
  4. Deposit money on an account.

The first thing to do, as usual, is to start modeling our domain. For the first requirement, the Customer class encapsulates more or less all the responsibilities.

As you can see, the class inherits from a BaseAggregateRoot class, which implements this interface:
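The snippet didn't survive the formatting here; an interface along those lines might look like the sketch below. The Version member is the important part; the exact names are my assumptions.

```csharp
using System.Collections.Generic;

// Sketch of an aggregate-root contract that tracks a version
// alongside the uncommitted domain events.
public interface IDomainEvent<out TKey>
{
    TKey AggregateId { get; }
    long AggregateVersion { get; }
}

public interface IAggregateRoot<out TKey>
{
    TKey Id { get; }
    long Version { get; }
    IReadOnlyCollection<IDomainEvent<TKey>> Events { get; }
    void ClearEvents();
}
```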


We saw something similar in a previous post about the Outbox Pattern. The key difference here is that we’re storing a Version along with the events. It will be handy on several occasions, especially when resolving conflicts during writes or when building the query models.

Creating a Customer is quite simple (code omitted for brevity):
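Since the snippet is omitted, here is a hedged sketch of what such a handler could look like, assuming a Customer aggregate with a static factory and an IEventsService abstraction; all names are illustrative, not the demo's actual code:

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical command handler: build the aggregate, hand it to the
// events service. Customer.Create and IEventsService are assumptions.
public class CreateCustomerHandler
{
    private readonly IEventsService<Customer, Guid> _eventsService;

    public CreateCustomerHandler(IEventsService<Customer, Guid> eventsService)
        => _eventsService = eventsService;

    public Task Handle(CreateCustomer command)
    {
        var customer = Customer.Create(command.CustomerId, command.FirstName, command.LastName);
        return _eventsService.PersistAsync(customer);
    }
}
```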


As you can see, we're directly creating the Customer model and persisting it. The command handler is not validating the command; that concern has been extracted into another class.

The next step is to create an Account for this Customer:
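A sketch of that step, again with hypothetical names: rehydrate the Customer from the event store first, then create the Account.

```csharp
using System;
using System.Threading.Tasks;

// Hypothetical handler: note that the Customer is rehydrated from the
// event store, never read from the query-side DB.
public class CreateAccountHandler
{
    private readonly IEventsService<Customer, Guid> _customersService;
    private readonly IEventsService<Account, Guid> _accountsService;

    public CreateAccountHandler(
        IEventsService<Customer, Guid> customersService,
        IEventsService<Account, Guid> accountsService)
    {
        _customersService = customersService;
        _accountsService = accountsService;
    }

    public async Task Handle(CreateAccount command)
    {
        var customer = await _customersService.RehydrateAsync(command.CustomerId);
        if (customer is null)
            throw new ArgumentException($"invalid customer id: {command.CustomerId}");

        var account = Account.Create(command.AccountId, customer, command.CurrencyCode);
        await _accountsService.PersistAsync(account);
    }
}
```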


Here, we have to load (rehydrate) the Customer first. Of course, we cannot (and should not) rely on the Queries persistence layer, as it might not be in sync.

The IEventsService implementation of PersistAsync() plays quite an important role: it will ask our persistence layer (Event Store) to append the events for the aggregate and will publish its integration events. We'll talk more about this in the next article of the series.
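Based on that description, PersistAsync boils down to two steps. A sketch, where every interface and name is an assumption about the demo's shape:

```csharp
using System.Threading.Tasks;

// Sketch of the service described above: append to the event store,
// then publish integration events. All types here are hypothetical.
public class EventsService<TAggregate, TKey> : IEventsService<TAggregate, TKey>
    where TAggregate : class, IAggregateRoot<TKey>
{
    private readonly IEventsRepository<TAggregate, TKey> _repository;
    private readonly IEventProducer<TAggregate, TKey> _producer;

    public EventsService(IEventsRepository<TAggregate, TKey> repository,
                         IEventProducer<TAggregate, TKey> producer)
    {
        _repository = repository;
        _producer = producer;
    }

    public async Task PersistAsync(TAggregate aggregate)
    {
        // 1. append the pending domain events to the Event Store stream
        await _repository.AppendAsync(aggregate);
        // 2. publish the corresponding integration events
        await _producer.DispatchAsync(aggregate);
    }

    public Task<TAggregate> RehydrateAsync(TKey key)
        => _repository.RehydrateAsync(key);
}
```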

The Events Repository, instead, is responsible for appending events for an aggregate root and rehydrating it.

As you can see from the code, the append operation opens a transaction, loops over the domain events, and persists them.
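With the classic EventStore.ClientAPI TCP client, that append could look roughly like this. The stream-naming and serialization helpers are assumptions, and I'm passing ExpectedVersion.Any for brevity where real code would use the aggregate's version for optimistic concurrency:

```csharp
using EventStore.ClientAPI;
using System.Threading.Tasks;

// Sketch of the append operation: open a transaction on the aggregate's
// stream, write the pending events, commit. Map() and GetStreamName()
// are helpers assumed to exist, not shown here.
public async Task AppendAsync(TAggregate aggregate)
{
    if (aggregate.Events.Count == 0)
        return;

    var streamName = GetStreamName(aggregate.Id); // e.g. "Customer_540d1d96-..."

    using var transaction = await _connection.StartTransactionAsync(streamName, ExpectedVersion.Any);
    try
    {
        foreach (var @event in aggregate.Events)
        {
            var eventData = Map(@event); // serialize to EventStore's EventData
            await transaction.WriteAsync(eventData);
        }
        await transaction.CommitAsync();
    }
    catch
    {
        transaction.Rollback();
        throw;
    }
}
```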

Event Store is structured around the concept of "streams". Every aggregate is represented by a single stream, identified by the aggregate type and key, for example "Customer_540d1d96-3655-43a4-9078-3da7e7c5a3d2".

When rehydrating an entity, all we have to do is build the stream name from the key and the type, then fetch batches of events starting from the very first one.
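A rehydration sketch with the same ClientAPI client; the batch size and the helper names are my assumptions:

```csharp
using EventStore.ClientAPI;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

// Read the aggregate's stream forward in fixed-size batches, starting
// from the very first event, then replay them onto a fresh instance.
// GetStreamName, Map and BuildAggregate are assumed helpers.
public async Task<TAggregate> RehydrateAsync(TKey key)
{
    var streamName = GetStreamName(key);
    var events = new List<IDomainEvent<TKey>>();

    StreamEventsSlice slice;
    long nextEventNumber = StreamPosition.Start;
    do
    {
        slice = await _connection.ReadStreamEventsForwardAsync(streamName, nextEventNumber, 200, false);
        if (slice.Status != SliceReadStatus.Success)
            return null; // no stream means no aggregate with this key

        events.AddRange(slice.Events.Select(Map)); // deserialize ResolvedEvent -> domain event
        nextEventNumber = slice.NextEventNumber;
    } while (!slice.IsEndOfStream);

    return BuildAggregate(events); // replay the events onto a new instance
}
```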

Event Store also supports snapshots, basically "a projection of the current state of an aggregate at a given point". They can be used to reduce the time taken to rebuild the current state by avoiding loading all the events from the beginning. I haven't implemented this technique in the demo yet; I'll probably add it in the coming weeks.
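For completeness, the snapshot idea in a nutshell; this shape is purely illustrative, since the demo doesn't implement it:

```csharp
// A snapshot stores the aggregate state as of a given version. Rehydration
// then loads the latest snapshot and replays only the events that follow it.
public class Snapshot<TState>
{
    public TState State { get; init; }
    public long Version { get; init; } // last event folded into this snapshot
}

// e.g.: take a snapshot every N events; on rehydration, read the stream
// starting from snapshot.Version + 1 instead of from the beginning.
```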

That's enough food for thought. In another article, we'll see one technique to broadcast the events to interested parties and rebuild the Query Models.


Opinions expressed by DZone contributors are their own.
