What I have learned at DDD Day

DDD Day is an Italian event entirely dedicated to Domain-Driven Design, an approach to software development in complex domains such as banking, insurance, and transportation systems. DDD Day is organized and attended mostly by a .NET audience, which usually works on software for enterprise domains.

However, DDD is also a methodology: it prescribes patterns for collaboration between teams and for crunching the knowledge coming from domain experts. I held a PHP talk (thus a technical one), but I learned much from the other talks, by Greg Young, Andrea Saltarello, Alberto Brandolini and Andrea Canegrati.

What changed from the book

Evans's seminal book on Domain-Driven Design was written back in 2003, when static, imperative, object-oriented languages were the cool thing and there were no NoSQL solutions as a valid alternative to the relational database.

I was glad to learn that the book is still considered valid and does not need a revision, since neither the patterns nor the approach have had to be reconsidered. The problem with the book, however, is that it does not put enough emphasis on powerful concepts like context mapping.

I also liked Alberto Brandolini's 30-second summary of the Domain Model patterns, which went like this: Entities are immediately understood by developers: they're table rows. Value Objects are not difficult to get either: they're like strings. Factories... well, we have the GoF book, which is full of factories; do you want an abstract one? I can even make you a builder... But when it comes to Aggregates, the real difficulty of DDD is revealed.
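
To make the 30-second version concrete, here is a minimal Java sketch of the Entity/Value Object distinction, with hypothetical Customer and Address classes: an Entity is compared by identity, a Value Object by its attributes.

```java
import java.util.Objects;

// Entity: identity matters, attributes can change over its lifetime.
class Customer {
    private final long id;   // identity
    private String name;     // mutable attribute

    Customer(long id, String name) { this.id = id; this.name = name; }

    void rename(String newName) { this.name = newName; }

    @Override public boolean equals(Object o) {
        return o instanceof Customer && ((Customer) o).id == this.id;
    }
    @Override public int hashCode() { return Long.hashCode(id); }
}

// Value Object: no identity, immutable, equal when all its fields are equal.
final class Address {
    private final String street;
    private final String city;

    Address(String street, String city) { this.street = street; this.city = city; }

    @Override public boolean equals(Object o) {
        if (!(o instanceof Address)) return false;
        Address other = (Address) o;
        return street.equals(other.street) && city.equals(other.city);
    }
    @Override public int hashCode() { return Objects.hash(street, city); }
}
```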


Greg Young gave his famous presentation on CQRS and Event Sourcing. I think the basic point is that the usual assumptions about your system do not have to be strictly followed.
For example, you are not required to use DTOs. You are not required to use the same system for writes and reads at all, which frees your Domain Model from lots of getters that exist only for reporting.

CQRS separates commands from queries at the system level, using a Domain Model for writes and a reporting system for reads that skips the Domain Model and the relational-to-object conversion where present. You are not even required to use the same database for writes and reads: denormalized RDBMS views are just one example of what you can use for reads, as they adapt well to this kind of operation.
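
As a rough illustration of that separation (class names are hypothetical, not from the talk), the write side below goes through the aggregate, while the read side is just a DAO over a denormalized view:

```java
import java.util.List;
import java.util.Map;

// Write side: a command goes through the Domain Model.
class Customer {
    private final long id;
    private boolean active = true;
    Customer(long id) { this.id = id; }
    void deactivate() { active = false; }   // behavior, no reporting getters needed
}

class DeactivateCustomer {                  // a command object
    final long customerId;
    DeactivateCustomer(long customerId) { this.customerId = customerId; }
}

interface CustomerRepository {              // loads and saves aggregates
    Customer byId(long id);
    void save(Customer customer);
}

class DeactivateCustomerHandler {
    private final CustomerRepository repository;
    DeactivateCustomerHandler(CustomerRepository repository) { this.repository = repository; }

    void handle(DeactivateCustomer command) {
        Customer customer = repository.byId(command.customerId);
        customer.deactivate();              // the aggregate enforces its own rules
        repository.save(customer);
    }
}

// Read side: queries skip the Domain Model and hit a denormalized view directly.
interface CustomerReportDao {
    List<Map<String, Object>> activeCustomersByRegion(String region);
}
```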

Going on, you are not even required to store the current state of an aggregate as the primary data source: you can store the events that have happened instead. The classical example is a checking account: you store the transactions executed on it and the initial state (0€); this data is infinitely more interesting than the current state alone. The current state can always be rebuilt starting from the events, and Event Sourcing treats every write on an aggregate as an event to be stored.
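
A minimal sketch of the checking account example in Java, assuming simple MoneyDeposited/MoneyWithdrawn events: the current balance is never the primary source, it is rebuilt by replaying the stored events.

```java
import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;

// The events are the primary source of truth for the account.
interface AccountEvent {}
class MoneyDeposited implements AccountEvent {
    final BigDecimal amount;
    MoneyDeposited(BigDecimal amount) { this.amount = amount; }
}
class MoneyWithdrawn implements AccountEvent {
    final BigDecimal amount;
    MoneyWithdrawn(BigDecimal amount) { this.amount = amount; }
}

class CheckingAccount {
    private BigDecimal balance = BigDecimal.ZERO;   // initial state: 0
    private final List<AccountEvent> changes = new ArrayList<>();

    // Current state is rebuilt by replaying the stored events in order.
    static CheckingAccount fromHistory(List<AccountEvent> history) {
        CheckingAccount account = new CheckingAccount();
        history.forEach(account::apply);
        return account;
    }

    // A write is captured as a new event, then applied to the in-memory state.
    void deposit(BigDecimal amount) { record(new MoneyDeposited(amount)); }
    void withdraw(BigDecimal amount) { record(new MoneyWithdrawn(amount)); }

    private void record(AccountEvent event) {
        changes.add(event);                         // to be persisted by an event store
        apply(event);
    }

    private void apply(AccountEvent event) {
        if (event instanceof MoneyDeposited) {
            balance = balance.add(((MoneyDeposited) event).amount);
        } else if (event instanceof MoneyWithdrawn) {
            balance = balance.subtract(((MoneyWithdrawn) event).amount);
        }
    }

    BigDecimal balance() { return balance; }
    List<AccountEvent> uncommittedChanges() { return changes; }
}
```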

Storing all the events opens up the possibility of replaying events from the past while applying some new behavior, to see what would have happened or to make it happen. As an example, you could apply a discount or a new policy retroactively to all the invoices of the past year...
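
For instance, a replay could feed last year's stored invoice events through a brand-new pricing rule; the DiscountPolicy and InvoiceIssued names below are hypothetical, just to show the shape of the idea.

```java
import java.math.BigDecimal;
import java.util.List;

// A policy introduced today, applied to last year's events during a replay.
interface DiscountPolicy {
    BigDecimal discountedTotal(BigDecimal originalTotal);
}

class InvoiceIssued {                       // an already-stored event from last year
    final BigDecimal total;
    InvoiceIssued(BigDecimal total) { this.total = total; }
}

class RetroactiveReport {
    // Replays historical events through the new policy to see what would have happened.
    BigDecimal totalWithNewPolicy(List<InvoiceIssued> history, DiscountPolicy policy) {
        return history.stream()
                .map(event -> policy.discountedTotal(event.total))
                .reduce(BigDecimal.ZERO, BigDecimal::add);
    }
}
```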

How to make your DBA go mad

I learned that I should not share my database as a point of integration: the collaboration between applications (bounded contexts) has to be thought about explicitly. Simply sharing the database will constrain the evolution of your domain if you're on the weak side (and with a DBA checking your schema, you always are).

Duplication of data is not really bad, unless there is also a duplication of behavior between classes. A classical example from Brandolini is that duplication of data is actually good when the copies have different lifetimes that should be kept separate. For instance, the customer's address should be copied into orders, since if the customer later changes billing or shipping address the older orders shouldn't reflect the change. We have a bias towards overnormalization even in these cases, when simply duplicating a piece of data that is allowed to diverge would simplify the model very much.
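
A sketch of that idea in Java, with hypothetical Order and Customer classes: the order takes its own copy of the address at ordering time, so the two pieces of data can diverge freely.

```java
// The order copies the address it was shipped to; later changes to the
// customer's address must not rewrite history on old orders.
final class PostalAddress {                       // immutable Value Object
    final String street;
    final String city;
    PostalAddress(String street, String city) { this.street = street; this.city = city; }
}

class Customer {
    private PostalAddress currentAddress;
    Customer(PostalAddress address) { this.currentAddress = address; }
    void moveTo(PostalAddress newAddress) { this.currentAddress = newAddress; }
    PostalAddress currentAddress() { return currentAddress; }
}

class Order {
    private final PostalAddress shippingAddress;  // deliberate copy, with its own lifetime

    Order(Customer customer) {
        // snapshot taken at ordering time, not a reference to the customer's row
        this.shippingAddress = customer.currentAddress();
    }
    PostalAddress shippingAddress() { return shippingAddress; }
}
```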

DDD and validation

The never-ending flame war on the Internet about validation in DDD is caused by the lack of distinction between invariants and context validation.

Invariants are validations that should always be performed, regardless of the activity the object is engaged in. For example, required fields like a mandatory name are invariants.

With invariants, the always-valid approach is the most efficient: validation is performed at construction time and on state changes, which always leads to a valid aggregate/object (or to an exception).
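
A minimal always-valid sketch, assuming the mandatory name invariant mentioned above:

```java
// The name invariant is enforced in the constructor and on every state change,
// so a Customer can never be observed in an invalid state.
class Customer {
    private String name;

    Customer(String name) {
        setName(name);                                // invariant checked at construction
    }

    void rename(String newName) {
        setName(newName);                             // and on every state change
    }

    private void setName(String name) {
        if (name == null || name.trim().isEmpty()) {
            throw new IllegalArgumentException("A customer must have a name.");
        }
        this.name = name;
    }
}
```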

Context validation, instead, is performed before or after a particular activity; for example, the presence of a non-expired gold credit card for accessing a discount only has to be checked within the scenario where that discount is of interest. On-demand validation works better for context-based rules: an external object performs the validation inside a method of the domain object.
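
A possible shape for on-demand context validation, with hypothetical names around the gold card example: the rule lives in an external object that the domain object invokes only in the discount scenario.

```java
import java.time.LocalDate;

// The external rule, only relevant in the discount scenario.
interface DiscountEligibility {
    boolean eligible(CreditCard card);
}

class GoldCardEligibility implements DiscountEligibility {
    public boolean eligible(CreditCard card) {
        return card.isGold() && !card.isExpiredOn(LocalDate.now());
    }
}

class CreditCard {
    private final boolean gold;
    private final LocalDate expiry;
    CreditCard(boolean gold, LocalDate expiry) { this.gold = gold; this.expiry = expiry; }
    boolean isGold() { return gold; }
    boolean isExpiredOn(LocalDate day) { return expiry.isBefore(day); }
}

class Purchase {
    private final CreditCard card;
    Purchase(CreditCard card) { this.card = card; }

    // On-demand: the external validator is evaluated only when the discount matters.
    boolean qualifiesForDiscount(DiscountEligibility eligibility) {
        return eligibility.eligible(card);
    }
}
```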

Note that double dispatch can be used to include an external service in a Factory Method (always valid) or in a validation method. This solution avoids exposing private fields of the domain object, which instead passes them to the validation method.
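
One way the Factory Method variant can look in Java (the names are mine, not from the talks): the factory hands the candidate value to the external service, so the object is created already valid and never has to expose a getter just for validation.

```java
// Hypothetical sketch: an external uniqueness check pulled into a Factory Method.
interface UsernameAvailability {
    boolean isAvailable(String username);
}

class Account {
    private final String username;   // never exposed through a getter for validation

    private Account(String username) { this.username = username; }

    // Double dispatch: the factory passes the private data to the external service,
    // instead of the service pulling it out of the object.
    static Account register(String username, UsernameAvailability availability) {
        if (!availability.isAvailable(username)) {
            throw new IllegalArgumentException("Username already taken: " + username);
        }
        return new Account(username);
    }
}
```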

Context mapping

Context mapping consists of a series of patterns for identifying which kind of collaboration between applications is present in your software solution. Examples of external systems are other applications to integrate with, legacy systems, web service APIs, auditing mechanisms, or data sources imposed by the government and by new laws.

The map comprises the various contexts, which have different domain languages and concepts; moreover, it establishes the relationships between them. Even when a model isn't expressed as a set of classes in Java code, it still exists. Typically there are three models in a collaboration: one for each of the two bounded contexts, plus the one they use for communication.

Each collaboration usually has a downstream and an upstream side, where one of the two models has more power to change the other. Examples of relationships are:

  • Published Language, where an external domain language or API is established between the models; documentation and fixes for this language are outsourced to a standardizing organization.
  • Anti-Corruption Layer, a famously costly (but effective) set of Facades and Adapters that hide the translation between two models. The upstream model can change at will, but the Anti-Corruption Layer stops these frequent changes from leaking into the other one (a sketch follows this list).
  • Customer/Supplier, where one of the two models is the source of truth and the other must continuously adapt to it.
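
As a sketch of the Anti-Corruption Layer idea referenced above (all class names are hypothetical), an Adapter translates the upstream CRM's model into our own, so upstream changes stay confined to this one class:

```java
// Upstream (legacy) model, shaped by the other system.
class UpstreamCrmCustomer {
    final String fullName;
    final String countryCode;
    UpstreamCrmCustomer(String fullName, String countryCode) {
        this.fullName = fullName;
        this.countryCode = countryCode;
    }
}

// Our bounded context's model.
class Customer {
    final String name;
    final boolean domestic;
    Customer(String name, boolean domestic) { this.name = name; this.domestic = domestic; }
}

// Hypothetical client for the upstream API.
interface LegacyCrmClient {
    UpstreamCrmCustomer fetchByFullName(String fullName);
}

// The Facade our domain talks to.
interface CustomerProvider {
    Customer byName(String name);
}

class CrmAntiCorruptionLayer implements CustomerProvider {
    private final LegacyCrmClient crm;
    CrmAntiCorruptionLayer(LegacyCrmClient crm) { this.crm = crm; }

    public Customer byName(String name) {
        UpstreamCrmCustomer raw = crm.fetchByFullName(name);
        // Translation between the two models happens only here.
        return new Customer(raw.fullName, "IT".equals(raw.countryCode));
    }
}
```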

One validation of your scenario: you can't do DDD in your model if you're downstream of two other applications. By creating a context map you can discover whether your core domain is in the right position or whether it suffers from the smells of other systems.
