Event Sourcing in .NET Core: A Gentle Introduction

By David Guida · Updated Dec. 02, 2020 · Tutorial

Event sourcing, aka "the great myth". I've been thinking about writing a series of articles about this for a while, and now it's time to put my hands back on the keyboard. 

I thought that with this long period of confinement I would at least have more time to write some nice articles, but it turns out reality has been slightly different so far.

Anyway, let's get back on track! Event sourcing. It's probably one of the hardest things to code, immediately after two other things.

Everything that happens around us is an event of some sort. The cake is ready in the oven. The bus has arrived at the stop. Your cellphone's battery runs out. And for every event, there might be zero or more actors reacting to it. Cause and effect, we could say.

So how does it translate for us? Event sourcing, at its heart, means storing all the events occurring in our system in a time-ordered fashion. All of our write operations append to a log-like persistent storage, and that's it. Events can only be appended, never updated or deleted.
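As a toy illustration of that idea (the type and member names here are mine, not from the demo), an append-only event log can be as small as a list you only ever add to:

```csharp
using System;
using System.Collections.Generic;

// A minimal append-only event log: events can be added, never updated or removed.
public record StoredEvent(long Position, DateTime OccurredAt, string Type, string Payload);

public class EventLog
{
    private readonly List<StoredEvent> _events = new();

    // Appending is the only write operation the log supports.
    public StoredEvent Append(string type, string payload)
    {
        var stored = new StoredEvent(_events.Count, DateTime.UtcNow, type, payload);
        _events.Add(stored);
        return stored;
    }

    // Reads always return events in the order they occurred.
    public IReadOnlyList<StoredEvent> ReadAll() => _events;
}
```

Note that "update" and "delete" simply don't exist on the type: immutability is enforced by construction, not by convention.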

Then what? How do we query our data? Here we get the reaction part. 

Event sourcing has a very important prerequisite: CQRS. All the read operations have to be performed on a different datastore, which is in turn populated by the appropriate event handlers.

I know it might sound a bit complex (and actually it is), so let's try with an example. 

Imagine you're writing the software for a bank. The system can:

  1. Create customers.
  2. Create accounts for the customers.
  3. Withdraw money from an account.
  4. Deposit money into an account.

Armed with this info, we can start modeling our commands:

  1. Create a customer.
  2. Create an account for a customer.
  3. Withdraw money from an account.
  4. Deposit money into an account.
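In C#, the four commands above might be sketched as simple immutable records (the property names are illustrative, not taken from the demo sources):

```csharp
using System;

// Commands are plain, immutable messages describing an intent to change state.
public record CreateCustomer(Guid Id, string FirstName, string LastName);
public record CreateAccount(Guid AccountId, Guid CustomerId, string Currency);
public record Withdraw(Guid AccountId, decimal Amount);
public record Deposit(Guid AccountId, decimal Amount);
```

Records give us value equality and immutability for free, which suits messages that should never change after they are created.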

We'll keep it simple and won't dwell much on domain-specific details like currency conversion and the like, although DDD is another aspect essential to our success (and we discussed it already on my blog).

Let's see our queries now:

  1. Archive of customers, each with the number of open accounts.
  2. Customer details with the list of accounts, each with its balance.
  3. List of transactions on an account.
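Each of those queries can be served by its own purpose-built read model. A hypothetical shape for the three above (these types are my sketch, not the demo's):

```csharp
using System;
using System.Collections.Generic;

// One denormalized model per query: no joins needed at read time.
public record CustomerArchiveItem(Guid CustomerId, string FullName, int OpenAccounts);

public record CustomerDetails(Guid CustomerId, string FullName, IReadOnlyList<AccountSummary> Accounts);
public record AccountSummary(Guid AccountId, string Currency, decimal Balance);

public record AccountTransaction(Guid AccountId, DateTime When, decimal Amount, string Kind);
```

Notice there is no shared schema: each model carries exactly the data its query needs, precomputed.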

At 10,000 feet, the system looks more or less like this:

Events get pushed into the Write side, which basically does two things: 

  • Appends them to a storage system.
  • Pushes integration events to a queue.

Eventually, the integration events will be captured and consumed by the respective handlers on the Query side, materializing all the Query Models our system needs.
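On the Query side, a handler consumes an integration event and updates its materialized view. A minimal in-memory sketch (the event and projection types are mine, purely for illustration):

```csharp
using System;
using System.Collections.Generic;

public record AccountCreated(Guid AccountId, Guid CustomerId);

// Materializes the "customers with number of open accounts" query model.
public class CustomerArchiveProjection
{
    private readonly Dictionary<Guid, int> _openAccountsByCustomer = new();

    // Each consumed event updates the denormalized view incrementally.
    public void Handle(AccountCreated evt)
    {
        _openAccountsByCustomer.TryGetValue(evt.CustomerId, out var count);
        _openAccountsByCustomer[evt.CustomerId] = count + 1;
    }

    public int OpenAccountsFor(Guid customerId) =>
        _openAccountsByCustomer.TryGetValue(customerId, out var count) ? count : 0;
}
```

A real projection would persist its state, but the shape is the same: consume event, update view.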

Now, why in the world would one even think about implementing a system like this? Well, there are quite a few good reasons.

Keeping track of what happens in an append-only storage allows us to replay events and rebuild the state of our domain models at any time. In case something bad occurs, we have an almost immediate way to understand what went wrong and possibly how to fix the issue.

Performance and scalability: the Query Models can be built with whatever technology fits the need. Data can be persisted in a relational database, a NoSQL one, or even as plain HTML, whatever is faster and better suited for the job. Moreover, if the business needs change, we can quickly adapt and generate completely new forms of the models.

The Query DBs can also be wiped out and repopulated from scratch by simply replaying all the events. This makes it possible to avoid potentially problematic things like migrations or even backups: all you have to do is run the events again and you get the models back.
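Rebuilding a wiped read model is then just a fold over the full event history. Schematically (again with illustrative event types of my own):

```csharp
using System;
using System.Collections.Generic;

public record Deposited(Guid AccountId, decimal Amount);
public record Withdrawn(Guid AccountId, decimal Amount);

public static class BalanceRebuilder
{
    // Replaying every event from the beginning reproduces the current balances,
    // so the read store can be dropped and repopulated at will.
    public static Dictionary<Guid, decimal> Replay(IEnumerable<object> history)
    {
        var balances = new Dictionary<Guid, decimal>();
        foreach (var evt in history)
        {
            switch (evt)
            {
                case Deposited d:
                    balances[d.AccountId] = balances.GetValueOrDefault(d.AccountId) + d.Amount;
                    break;
                case Withdrawn w:
                    balances[w.AccountId] = balances.GetValueOrDefault(w.AccountId) - w.Amount;
                    break;
            }
        }
        return balances;
    }
}
```

The same history always produces the same model, which is exactly what makes migrations and backups of the read side optional.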

So where's the catch? Well, there are a few drawbacks as well. We'll talk about them in another post of this series. 

Now, let's see how we can start storing events in our system. As usual, I have prepared a small demo, modeled around the banking example I depicted before. Sources are available here.

We’re trying to write a system that appends events to a log-like persistent storage using a CQRS approach. Query models are stored in separate storage and built at regular intervals or every time an event occurs.

Events can be used for various reasons, like tracing the activity on the platform or rebuilding the state of the domain models at any specific point in time.

There are several options for storing events: we could use a big, massive table in a SQL DB, a collection in NoSQL, or a specialized ad-hoc system.

For this demo, I decided to go for the latter and give EventStore a chance. From its home page:

Event Store is an industrial-strength event sourcing database that stores your critical data in streams of immutable events. It was built from the ground up for event sourcing.

It has decent documentation, a good community, and was created by the legend, Greg Young. For those who don’t know him, he coined the term “CQRS.” I guess that’s enough.

Now, in our example we have these requirements:

  1. Create customers.
  2. Create accounts for the customers.
  3. Withdraw money from an account.
  4. Deposit money into an account.

The first thing to do, as usual, is to start modeling our domain. For the first one, the Customer class encapsulates more or less all the responsibilities.

As you can see, the class inherits from a BaseAggregateRoot class, which implements this interface:

C#

public interface IAggregateRoot<out TKey> : IEntity<TKey>
{
    public long Version { get; }
    IReadOnlyCollection<IDomainEvent<TKey>> Events { get; }
    void ClearEvents();
}

public interface IEntity<out TKey>
{
    TKey Id { get; }
}

We saw something similar in a previous post about the Outbox Pattern. The key difference here is that we’re storing a Version along with the events. It will be handy on several occasions, especially when resolving conflicts during writes or when building the query models.
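One common use of that Version is optimistic concurrency: a write is rejected when the version the writer expected no longer matches the stream's actual version. A hedged sketch of the idea (not the demo's actual code):

```csharp
using System;

// Thrown when a writer's expected version no longer matches the stream.
public class ConcurrencyException : Exception
{
    public ConcurrencyException(long expected, long actual)
        : base($"Expected stream version {expected} but found {actual}.") { }
}

public class VersionedStream
{
    // -1 means "no events yet", so the first append expects -1.
    public long Version { get; private set; } = -1;

    public void Append(long expectedVersion, object @event)
    {
        if (expectedVersion != Version)
            throw new ConcurrencyException(expectedVersion, Version);
        // ... persist the event here, then advance the version.
        Version++;
    }
}
```

Two writers rehydrating the same aggregate will both carry the same expected version; whichever commits second gets the exception and can retry on fresh state.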

Creating a Customer is quite simple (code omitted for brevity):

C#

public class CreateCustomerHandler : INotificationHandler<CreateCustomer>
{
    private readonly IEventsService<Customer, Guid> _eventsService;

    public CreateCustomerHandler(IEventsService<Customer, Guid> eventsService)
    {
        _eventsService = eventsService;
    }

    public async Task Handle(CreateCustomer command, CancellationToken cancellationToken)
    {
        var customer = new Customer(command.Id, command.FirstName, command.LastName);
        await _eventsService.PersistAsync(customer);
    }
}

As you can see we’re directly creating the Customer model and persisting it. The Command handler is not validating the command; this concern has been extracted and executed by another class.

The next step is to create an Account for this Customer:

C#

public class CreateAccountHandler : INotificationHandler<CreateAccount>
{
    private readonly IEventsService<Customer, Guid> _customerEventsService;
    private readonly IEventsService<Account, Guid> _accountEventsService;

    public CreateAccountHandler(
        IEventsService<Customer, Guid> customerEventsService,
        IEventsService<Account, Guid> accountEventsService)
    {
        _customerEventsService = customerEventsService;
        _accountEventsService = accountEventsService;
    }

    public async Task Handle(CreateAccount command, CancellationToken cancellationToken)
    {
        var customer = await _customerEventsService.RehydrateAsync(command.CustomerId);
        if (null == customer)
            throw new ArgumentOutOfRangeException(nameof(CreateAccount.CustomerId), "invalid customer id");

        var account = new Account(command.AccountId, customer, command.Currency);
        await _accountEventsService.PersistAsync(account);
    }
}


Here, we have to load (rehydrate) the Customer first. Of course, we cannot (and should not) rely on the Queries persistence layer, as it might not be in sync.

The IEventsService implementation of PersistAsync() has a quite important role: it will ask our persistence layer (Event Store) to append the events for the aggregate and will publish its integration events. We’ll talk more about this in the next article of the series.

The Events Repository instead is responsible for appending events for an Aggregate root and rehydrating it.

As you can see from the code, the append operation opens a transaction, loops over the domain events, and persists them.

Event Store is built around the concept of “streams”. Every aggregate is represented by a single stream, identified by the aggregate type and key, for example “Customer_540d1d96-3655-43a4-9078-3da7e7c5a3d2”.

When rehydrating an entity, all we have to do is build the stream name given the key and the type and then fetch batches of events starting from the first one ever.
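That loop can be sketched with a minimal, self-contained stand-in for the stream store (EventStore's own client pages forward the same way, e.g. via ReadStreamEventsForwardAsync in the old ClientAPI; the naming convention and types below are my assumptions):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Minimal stand-in for a stream store that serves events in fixed-size batches.
public class BatchedStreamStore
{
    private readonly Dictionary<string, List<string>> _streams = new();

    // e.g. "Customer_540d1d96-3655-43a4-9078-3da7e7c5a3d2"
    public static string StreamName(string aggregateType, Guid key) => $"{aggregateType}_{key}";

    public void Append(string stream, string @event)
    {
        if (!_streams.TryGetValue(stream, out var events))
            _streams[stream] = events = new List<string>();
        events.Add(@event);
    }

    // Fetch batches starting from the first event ever, until the stream is exhausted.
    public IEnumerable<string> ReadForward(string stream, int batchSize = 2)
    {
        var events = _streams.GetValueOrDefault(stream) ?? new List<string>();
        for (var start = 0; start < events.Count; start += batchSize)
            foreach (var evt in events.Skip(start).Take(batchSize))
                yield return evt;
    }
}
```

Rehydration then applies each event, in order, to an empty aggregate until the stream runs out.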

Event Store also supports snapshots, basically “a projection of the current state of an aggregate at a given point“. They can be used to cut the time taken to rebuild the current state by avoiding loading all the events from the beginning. I haven’t implemented this technique in the demo yet; I’ll probably add it in the coming weeks.

That's enough food for thought. In another article, we'll see one technique to broadcast the events to interested parties and rebuild the Query Models.

If you’re working on Azure, don’t miss my other articles!
