Event Sourcing With Memphis.dev: A Beginner’s Guide

Event Sourcing, coupled with a powerful distributed streaming platform like Memphis.dev, emerges as a robust solution for keeping data consistent and traceable as systems grow.

By Yaniv Ben Hemo · Dec. 13, 23 · Tutorial

In the realm of modern software development, managing and maintaining data integrity is paramount. Traditional approaches often involve updating the state of an application directly within a database. However, as systems grow in complexity, ensuring data consistency and traceability becomes more challenging. This is where Event Sourcing, coupled with a powerful distributed streaming platform like Memphis.dev, emerges as a robust solution and a natural way to structure your data.

What Is Event Sourcing?

At its core, Event Sourcing is a design pattern that captures every change or event that occurs in a system as an immutable and sequentially ordered log of events. Instead of persisting the current state of an entity, Event Sourcing stores a sequence of state-changing events. These events serve as a single source of truth for the system’s state at any given point in time.

Understanding Event Sourcing in Action

Imagine a banking application that tracks an account’s balance. Instead of only storing the current balance in a database, Event Sourcing captures all the events that alter the balance. Deposits, withdrawals, or any adjustments are recorded as individual events in chronological order.

Let’s break down how this might work:

  • Event Creation: When a deposit of $100 is made to the account, the application creates an event such as FundsDeposited, carrying relevant metadata (timestamp, amount, account number).
  • Event Storage: These events are then appended to an immutable log, forming a sequential history of transactions specific to that account.
  • State Reconstruction: To obtain the current state of an account, the application replays these events sequentially, applying each one in order to derive the current balance. This enables the system to rebuild the state at any given point in time (a minimal sketch follows this list).
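
To make this concrete, here is a minimal, self-contained Python sketch of state reconstruction; the event names and fields are illustrative, not a fixed schema:

# Replay deposit/withdrawal events in order to derive the current balance.
events = [
    {"type": "FundsDeposited", "amount": 100},
    {"type": "FundsWithdrawn", "amount": 30},
    {"type": "FundsDeposited", "amount": 20},
]

def current_balance(events):
    balance = 0
    for event in events:  # apply each event in the order it was recorded
        if event["type"] == "FundsDeposited":
            balance += event["amount"]
        elif event["type"] == "FundsWithdrawn":
            balance -= event["amount"]
    return balance

print(current_balance(events))  # prints 90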

Leveraging Memphis in Event Sourcing

Memphis.dev, an open-source distributed event streaming platform, is a strong fit for implementing Event Sourcing thanks to several of its features:

  • Scalability and Fault Tolerance: Memphis’ distributed nature allows for horizontal scalability and ensures fault tolerance by replicating data across multiple brokers (nodes).
  • Ordered and Immutable Event Logs: Memphis’ log-based architecture aligns seamlessly with the principles of Event Sourcing. It maintains ordered, immutable logs, preserving the sequence of events.
  • Real-time Event Processing: Memphis Functions offers a serverless framework built within the Memphis platform to handle high-throughput, real-time event streams. Applications can process events as they occur, enabling near real-time reactions to changes in state.
  • Managing Schemas: One of the major challenges in event sourcing is maintaining consistent schemas across the different events to avoid upstream breaks and client crashes; Memphis addresses this with built-in schema management and enforcement (a client-side sketch of the idea follows this list).
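
Memphis performs schema enforcement server-side. Purely to illustrate the underlying idea, here is a client-side sketch that validates an event against a hypothetical schema using the third-party jsonschema package (not part of the Memphis SDK) before producing it:

from jsonschema import validate, ValidationError  # pip install jsonschema

# Hypothetical schema for a FundsDeposited event.
FUNDS_DEPOSITED_SCHEMA = {
    "type": "object",
    "properties": {
        "type": {"const": "FundsDeposited"},
        "amount": {"type": "number", "minimum": 0},
        "account": {"type": "string"},
    },
    "required": ["type", "amount", "account"],
}

def is_valid_event(event):
    try:
        validate(instance=event, schema=FUNDS_DEPOSITED_SCHEMA)
        return True
    except ValidationError as e:
        print(f"Schema violation: {e.message}")
        return False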

Benefits of Event Sourcing With Memphis

  • Temporal Queries and Auditing: By retaining a complete history of events, it becomes possible to perform temporal queries and reconstruct past states, aiding in auditing and compliance (a sketch follows this list).
  • Flexibility and Scalability: As the system grows, Event Sourcing with Memphis allows for easy scalability, as new consumers can independently process the event log.
  • Fault Tolerance and Recovery: In the event of failures, the ability to rebuild state from events ensures resiliency and quick recovery.
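
As a sketch of what a temporal query can look like, the following rebuilds an account's balance as of a given moment by replaying only the events recorded up to that point; the timestamp field is an assumption about the event metadata:

from datetime import datetime, timezone

def balance_as_of(events, as_of):
    # Events are assumed ordered by time, so we can stop at the first
    # event recorded after the requested moment.
    balance = 0
    for event in events:
        if event["timestamp"] > as_of:
            break
        if event["type"] == "FundsDeposited":
            balance += event["amount"]
        elif event["type"] == "FundsWithdrawn":
            balance -= event["amount"]
    return balance

events = [
    {"type": "FundsDeposited", "amount": 100,
     "timestamp": datetime(2023, 12, 1, 10, 0, tzinfo=timezone.utc)},
    {"type": "FundsWithdrawn", "amount": 50,
     "timestamp": datetime(2023, 12, 2, 9, 0, tzinfo=timezone.utc)},
]

print(balance_as_of(events, datetime(2023, 12, 1, 23, 59, tzinfo=timezone.utc)))  # 100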

Let’s See What It Looks Like via Code

Events occur and are pushed, in their order of creation, into a Memphis station (the equivalent of a topic).

Event Log

 
class EventLog:
    """A simple in-memory, append-only log of events."""

    def __init__(self):
        self.events = []

    def append_event(self, event):
        # Events are only ever appended, never modified, keeping the log immutable
        self.events.append(event)

    def get_events(self):
        return self.events


Memphis Producer

 
from __future__ import annotations
import json
from memphis import Memphis, MemphisError, MemphisConnectError, MemphisHeaderError, MemphisSchemaError

class MemphisEventProducer:
    def __init__(self, host="my.memphis.dev"):
        self.host = host
        self.memphis = Memphis()

    async def connect(self):
        # Connecting is asynchronous, so it lives outside __init__
        await self.memphis.connect(host=self.host, username="<application type username>", password="<password>")

    async def send_event(self, topic, event):
        try:
            # Produce the event to the station (topic), serialized as JSON bytes
            await self.memphis.produce(station_name=topic, producer_name='prod_py',
                                       message=bytearray(json.dumps(event), 'utf-8'),
                                       nonblocking=False)
        except (MemphisError, MemphisConnectError, MemphisHeaderError, MemphisSchemaError) as e:
            print(e)

    async def close(self):
        await self.memphis.close()


Usage

 
import asyncio

async def main():
    # Initialize Event Log
    event_log = EventLog()

    # Initialize Memphis Producer and connect
    producer = MemphisEventProducer()
    await producer.connect()

    # Append events to the event log and produce them to Memphis
    events_to_publish = [
        {"type": "Deposit", "amount": 100},
        {"type": "Withdrawal", "amount": 50},
        # Add more events as needed
    ]

    for event in events_to_publish:
        event_log.append_event(event)
        await producer.send_event('account-events', event)

    await producer.close()

asyncio.run(main())


Criteria To Choose the Right Event Streaming Platform for the Job

When implementing Event Sourcing with a message broker, several key features are crucial for a streamlined and efficient system:

1. Persistent Message Storage

  • Durability: Messages should be reliably stored even in the event of failures. This ensures that no events are lost, and the event log remains intact.

2. Ordered and Immutable Event Logs

  • Sequential Order: Preserving the order of events is critical for accurate state reconstruction. Events must be processed in the same sequence they were produced.
  • Immutability: Once an event is stored, it should not be altered. This guarantees the integrity and consistency of the event log.

3. Scalability and Performance

  • Horizontal Scalability: The message broker should support horizontal scaling to accommodate increased event volume without sacrificing performance.
  • Low Latency: Minimizing message delivery time ensures near real-time processing of events, enabling quick reactions to state changes.

4. Fault Tolerance and High Availability

  • Redundancy: Ensuring data redundancy across multiple nodes or partitions prevents data loss in the event of node failures.
  • High Availability: Continuous availability of the message broker is essential to maintain system functionality.

5. Consumer Flexibility and State Rebuilding

  • Consumer Groups: Support for consumer groups allows multiple consumers to independently process the same set of events, aiding in parallel processing and scalability.
  • State Rebuilding: The broker should facilitate easy rebuilding of application state by replaying events, enabling historical data retrieval (a consumer sketch follows this list).
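
As an illustration, here is a hedged sketch of the consumer side with memphis-py, rebuilding the account balance by replaying an account-events station. The host, credentials, and handler signature follow memphis-py's published examples but may differ across SDK versions:

import asyncio
import json
from memphis import Memphis, MemphisError

async def main():
    memphis = Memphis()
    balance = 0
    try:
        await memphis.connect(host="my.memphis.dev", username="<username>", password="<password>")
        consumer = await memphis.consumer(station_name="account-events",
                                          consumer_name="balance-rebuilder",
                                          consumer_group="balance-rebuilders")

        async def msg_handler(msgs, error, context):
            nonlocal balance
            if error:
                print(error)
                return
            for msg in msgs:
                event = json.loads(msg.get_data())  # apply each event in order
                if event["type"] == "Deposit":
                    balance += event["amount"]
                elif event["type"] == "Withdrawal":
                    balance -= event["amount"]
                await msg.ack()

        consumer.consume(msg_handler)
        await asyncio.sleep(5)  # in this sketch, give the handler time to drain the station
        print(f"Rebuilt balance: {balance}")
    except MemphisError as e:
        print(e)
    finally:
        await memphis.close()

asyncio.run(main())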

6. Retention Policies and Archiving

  • Retention Policies: Configurable retention policies make it possible to manage the duration or size of stored messages, keeping storage usage under control (see the station-configuration sketch after this list).
  • Archiving: Ability to archive or offload older events to long-term storage for compliance or historical analysis purposes.
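
For instance, with memphis-py, retention can be configured when a station is created. The parameters below follow the SDK's documented examples but may vary by version, so treat this as a sketch:

from memphis.types import Retention, Storage

async def create_station(memphis):
    # Keep events for seven days on disk, replicated across three brokers.
    # `memphis` is an already-connected Memphis client, as in the producer example.
    return await memphis.station(
        name="account-events",
        retention_type=Retention.MAX_MESSAGE_AGE_SECONDS,
        retention_value=7 * 24 * 60 * 60,  # seconds
        storage_type=Storage.DISK,
        replicas=3,
    )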

7. Monitoring and Management

  • Metrics and Monitoring: Providing insights into message throughput, latency, and system health helps in monitoring and optimizing system performance.
  • Admin Tools: Easy-to-use administrative tools for managing topics, partitions, and configurations streamline system management.

8. Security and Compliance

  • Encryption and Authentication: Support for encryption and authentication mechanisms ensures the confidentiality and integrity of transmitted events.
  • Compliance Standards: Adherence to compliance standards (such as GDPR and SOC2) ensures that sensitive data is handled appropriately.

9. Seamless Integration and Ecosystem Support

  • Compatibility and Integrations: Seamless integration with various programming languages and frameworks, along with support for diverse ecosystems, enhances usability.
  • Ecosystem Tools: Availability of connectors, libraries, and frameworks that facilitate Event Sourcing simplifies implementation and reduces development efforts.

Choosing a message broker that aligns with these critical features is essential for implementing robust Event Sourcing, ensuring data integrity, scalability, and resilience within your application architecture.

Event Sourcing Using a Database vs. a Message Broker (Streaming Platform)

  • Use Case Complexity: For simpler applications, or where scalability isn't a primary concern, a database might suffice. A message broker is more suitable for distributed systems that need high reliability, scalability, and real-time processing.
  • Replay: In event streaming platforms and message brokers, events are stored in FIFO order, exactly as they first appear. This makes it easy for a consumer to follow the natural flow of events and replay the entire "scene." In a database, this is not the case: additional fields, such as timestamps, must be added to order the data by time, and extra logic is needed to determine the latest state of an entity (see the sketch after this list).
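
To make the contrast concrete, here is an illustrative sketch of replay against a relational table, using Python's built-in sqlite3: an explicit timestamp column and an ORDER BY are needed to recover the flow of events, and the table and column names are hypothetical:

import sqlite3

# With a database, ordering is not implicit: a timestamp column must be
# stored and sorted on to recover the original flow of events.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE account_events (ts TEXT, type TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO account_events VALUES (?, ?, ?)",
    [
        ("2023-12-01T10:00:00Z", "Deposit", 100),
        ("2023-12-01T11:30:00Z", "Withdrawal", 50),
    ],
)

balance = 0
for ts, etype, amount in conn.execute(
    "SELECT ts, type, amount FROM account_events ORDER BY ts"
):
    balance += amount if etype == "Deposit" else -amount

print(balance)  # 50.0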

Published at DZone with permission of Yaniv Ben Hemo.
