Reevaluating the Layered Architecture
The Layered Architecture no longer deserves the status of a de-facto standard for enterprise software designs.
The Layered Architecture (3-tiered, n-tier, or multi-tier architecture) is one of the best-known and most used concepts in enterprise development. It is the de-facto standard for building applications; so much so that it would be hard to find a single application in the enterprise software realm that does not conform to it.
Many things have changed, however, since the inception of this architecture pattern: there are new ways to organize code and new ways to organize teams and operate software.
In light of these changes, it is time to re-evaluate the Layered Architecture.
The basic idea of the Layered Architecture is to split the application into at least three distinct areas of code: Presentation, Business Logic, and Persistence, where the Presentation can access the Business Logic and the Business Logic can access the Persistence, but never the other way around. Conceptually, the parts stack on top of one another: Presentation on top, Business Logic in the middle, Persistence at the bottom.
Because of this stacked organization, we call the individual parts "layers."
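This stacking can be sketched in a few lines of Java. The class and method names (AccountController, AccountService, AccountRepository) are illustrative, not from any real framework; the point is only that references flow strictly downward, layer by layer:

```java
// Persistence layer: knows nothing about the layers above it.
class AccountRepository {
    String findOwner(String accountId) { return "Alice"; }
}

// Business Logic layer: may call down into Persistence only.
class AccountService {
    private final AccountRepository repository = new AccountRepository();
    String ownerOf(String accountId) { return repository.findOwner(accountId); }
}

// Presentation layer: may call down into Business Logic only.
class AccountController {
    private final AccountService service = new AccountService();
    String renderOwner(String accountId) { return "Owner: " + service.ownerOf(accountId); }
}

public class LayeredSketch {
    public static void main(String[] args) {
        // Calls flow one way: Presentation -> Business Logic -> Persistence.
        System.out.println(new AccountController().renderOwner("ACC-1"));
    }
}
```

Note that the compiler would reject a reference in the opposite direction only if the layers live in separate modules; within one codebase, the one-way rule is a convention the team has to enforce.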
Why Is It So Popular?
It is easy for developers to think of an application in terms of different technologies and to distinguish between functions based on what technology they involve. Although we "know" that the business logic is important, we tend to focus on the technology: architectures, paradigms, patterns, and so on. We don't really want to understand how the business works because, let's be honest, it's less fun. You have to talk to people, understand and consolidate different viewpoints, and come up with a model that everybody understands. It's messy. So, technology tends to dominate software development.
The Layered Architecture reflects this mindset perfectly. It decomposes the application based on technical details like presentation and persistence. It creates a separate, well-compartmentalized place for "business logic," the spooky thing we don't really like to think about. Sometimes, we even invent specific rules for ourselves, like presentation- or persistence-agnosticism, so the business logic can't ruin our otherwise perfect design.
This is in stark contrast to the insights of the early software development movement that software developers need to be domain experts and actually understand what they build and why. It is in contrast to today's trends like DevOps, where the team's responsibility isn't just technical; it's also to actually operate and support a business function in its entirety.
When modern enterprise development took off in the '90s, it was quite normal to develop everything the enterprise needed in one or — at most — a few monolithic applications. The Waterfall Model was the de-facto process for software development.
Of course, the development team necessary for such projects became too big eventually to work properly, so people thought about how to scale organizationally. Having the mindset described above, they came up with the idea of splitting up the application based on technology. There was a "frontend" team, a "backend" team, maybe even a "middleware" or "database" team.
In this setting, the Layered Architecture fits very well, because the interfaces between the layers could more or less be turned into remote calls. So, the different "horizontal slices" of the application could be "independently" developed.
This thinking was reflected in the early Java Enterprise versions, too, where they expected the Business Logic (the EJBs) to be accessed by remote clients, like remote web interfaces or standalone applications.
This didn't work. For one, the Web turned out to be much more than just another remote client. The other problem was that nobody really distributed their applications: it was less than optimal for scaling and a big hassle and performance hit. Today, even Enterprise Java has forgotten about remote calls, and plain local object injection (CDI) has replaced the heavyweight remote-capable EJBs. New trends like Domain-Driven Design and microservices advocate splitting applications vertically instead of horizontally. And new types of development processes and organization, like cross-functional and DevOps teams, support this vertical slicing and scaling much more efficiently.
Components and Reusability
When Java Enterprise first came out, the basic idea was that it would create a platform of sorts, where different vendors would create security-, persistence-, and presentation-agnostic components that could be dropped into any application implemented on the platform.
One example was the ShoppingCart bean. If you wanted to implement a web shop, you would just download an existing ShoppingCart bean from some provider, configure it, and it would be ready to use in your application.
Needless to say, this didn't happen. The promise of reusable components, like the idea of reusing business logic across applications, did not turn out to be practical. Modern trends reflect this insight well. The microservices approach suggests that instead of reusing code, we should separate things and make them easily replaceable. Domain-Driven Design's Bounded Context concept says the same: there should be clearly separated contexts that create semantic boundaries, which, in turn, make sharing "business code" among contexts unnecessary and unwelcome by definition.
Separation of Concerns
One of the more practical arguments for a layered design is that it creates a Separation of Concerns. It means that the business logic to transfer money from one account to another should not concern itself with what color it will have on the Web GUI.
This argument sounds reasonable on the surface; however, it implies more than what it says. It is almost always interpreted as all presentation-related logic should be separated out of "business logic" — not just colors and font-sizes, but everything. The "business logic" should be "pure," and should not know anything about the presentation.
This obviously does not reflect reality, as business-related things do tend to have a UI and do tend to have persistence. An Account, a Transfer, etc. does need to be presented in addition to fulfilling other functions. How does the Layered Architecture address this apparent conflict? It doesn't, really. It usually uses anemic objects to push the data of an Account, an Amount, etc. to other layers so it can be presented or used. This smears business-related knowledge all over the application, because every layer has to understand the data for itself. The Presentation needs to understand what an account is, how to ask the user for it, how to create the object, and with what parameters. Repeat for all other layers.
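The anemic-object pattern described above can be made concrete with a small sketch. The names (AccountDto, AccountPresentation, AccountBusinessLogic) are hypothetical; the point is that the DTO carries raw data, and every layer must re-implement its own understanding of what an "account" means:

```java
// An anemic object: pure data structure, no behavior of its own.
class AccountDto {
    String number;
    long balanceInCents;
}

class AccountPresentation {
    // The Presentation must know how the fields are formatted and scaled...
    static String render(AccountDto dto) {
        return dto.number + ": " + (dto.balanceInCents / 100.0) + " EUR";
    }
}

class AccountBusinessLogic {
    // ...and the Business Logic must interpret the very same fields to apply rules.
    static boolean isOverdrawn(AccountDto dto) {
        return dto.balanceInCents < 0;
    }
}

public class AnemicExample {
    public static void main(String[] args) {
        AccountDto dto = new AccountDto();
        dto.number = "DE02-1234";
        dto.balanceInCents = -500;
        // Knowledge about "account" is duplicated in two places already,
        // and the Persistence layer would add a third.
        System.out.println(AccountPresentation.render(dto));
        System.out.println(AccountBusinessLogic.isOverdrawn(dto));
    }
}
```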
Architecture and Design
The architecture of any software should be directly driven by the requirements, and the resulting design should reflect the business domain and structure thereof.
This point is so important it bears repeating. The architecture of software should mimic the natural structure of the business requirements. These requirements may, of course, include some technical ones as well, like integration to other systems, performance requirements, or non-functional requirements in general.
A Layered Architecture does not reflect the requirements, however. It is a purely technical structure that breaks up cohesive functional units into at least three distinct pieces, one for each layer. This comes at a great cost, and it only makes sense to pay it if there is a very big gain to offset it.
Reverse Semantic Dependencies
The Layered Architecture demands that dependencies run only one way. The Business Logic knows nothing about Presentation; the Persistence knows absolutely nothing about the Business Logic.
Upon closer inspection, this statement cannot be true. The Business Logic defines the data that the Presentation receives, usually in the form of Data Transfer Objects, which are pure data structures. Any modification on the Presentation side beyond trivial color changes will need more (or less) data, data that is structured differently, data that is paged differently, and so on. In other words, the Business Logic will respond to Presentation changes. Although no physical dependency is visible in the code, there is an invisible semantic dependency that runs from the Presentation to the Business Logic.
The same goes for Persistence. The Business Logic can only use what the Persistence Layer offers, so if a new use-case needs a new query or a new update statement, the usual approach is to just implement it in the Persistence and use it in the Business Logic Layer. Again, there is no physical dependency, but changes in the Persistence keep happening because of Business Logic changes or needs.
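A minimal sketch of this semantic dependency, with hypothetical names (TransferRepository, ReportingService): the repository interface physically depends on nothing above it, yet its second method exists only because a new business use-case demanded it.

```java
import java.util.Map;

interface TransferRepository {
    long balanceOf(String accountId);
    // Added later, purely because a new business use-case needed it. The
    // "dependency-free" Persistence changed in response to the Business Logic.
    long sumOfTransfersSince(String accountId, int year);
}

// A stubbed in-memory implementation standing in for a real database.
class InMemoryTransferRepository implements TransferRepository {
    private final Map<String, Long> balances = Map.of("ACC-1", 1000L);
    public long balanceOf(String id) { return balances.getOrDefault(id, 0L); }
    public long sumOfTransfersSince(String id, int year) { return 250L; }
}

// Business Logic: its needs dictate the shape of the Persistence API.
class ReportingService {
    private final TransferRepository repo = new InMemoryTransferRepository();
    long yearlyReport(String id, int year) {
        return repo.balanceOf(id) + repo.sumOfTransfersSince(id, year);
    }
}

public class SemanticDependency {
    public static void main(String[] args) {
        System.out.println(new ReportingService().yearlyReport("ACC-1", 2020)); // 1250
    }
}
```

The arrow in the code points downward, but the reason the interface looks the way it does points upward.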
This is why most enterprise architects and developers don't feel comfortable having "logic" in the database or exploiting all the features the database could provide: it amplifies the dissonance in the software's design, the inherent conflict between the architecture and the actual functionality it is supposed to support. This separation of technologies simply doesn't allow the database to be smart, because all the "smartness" has to live in the Business Logic.
It Makes Software Unmaintainable
The Layered Architecture, by localizing technology aspects, must, almost by definition, spread out the business aspects. This is great if you have more technology-related change requests and fewer business-related ones. For most enterprise projects out there, however, it's clearly the other way around.
If you want to change an Account, for example to introduce a "known/unknown" flag, to change the account number to an IBAN, or even to support a different Account type, you'll have to hunt down and change each and every piece of code that receives any data associated with the Account. You'll have to change the UI of the Account for sure, the Business Logic that handles it, and, of course, the Persistence that stores it. You will very likely have changes in all layers. This is a surprisingly large amount of work when the actual change only involves a single business concept.
Everyone who has had the unfortunate task to "simply add a new field to this page" in such an architecture knows how difficult this task can be.
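A before/after sketch makes the cost visible. All names here (AccountEntity, AccountData, AccountScreen) are illustrative; adding the single business-level field "iban" forces coordinated edits in every layer, plus the mapping code between them:

```java
// Persistence: a new column mapping would be needed here.
class AccountEntity {
    String number;
    String iban;     // <- change 1
}

// Business Logic boundary: the DTO must carry the new field, too.
class AccountData {
    String number;
    String iban;     // <- change 2
}

// Presentation: and the UI must display it.
class AccountScreen {
    static String render(AccountData d) {
        return d.number + " / " + d.iban;   // <- change 3
    }
}

public class OneFieldThreeLayers {
    public static void main(String[] args) {
        AccountEntity e = new AccountEntity();
        e.number = "12345";
        e.iban = "DE02123456781234567890";
        // The layer-to-layer mapping code changes as well: change 4.
        AccountData d = new AccountData();
        d.number = e.number;
        d.iban = e.iban;
        System.out.println(AccountScreen.render(d));
    }
}
```

One business concept, four edit sites, and that is before tests, SQL migrations, or remote API contracts are counted.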
It Breaks Object-Orientation
This point is somewhat redundant and maybe theoretical but is worth mentioning. The Layered Architecture breaks almost all rules and idioms of object-orientation. Here are just a few:
- Encapsulation: Encapsulation does not survive crossing layers, because the interfaces between layers are defined in terms of data.
- Abstraction: There is very little to no abstraction because every layer has to understand all concepts nearly equally.
- Cohesion and Coupling: Cohesive parts of the same "thing" are broken up because of the potentially differing technologies involved. So it makes the code less cohesive and more coupled.
- Law of Demeter: Access to data, using DTOs, for example, almost always leads directly to LoD violations.
- Tell, don't ask: Objects don't get told what to do in the Layered Architecture; they are asked for data, and then things happen with that data somewhere else, out of the control of the object producing or holding the data.
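The last point can be illustrated with a small, hypothetical contrast between the "ask" style the Layered Architecture encourages and the "tell" style object-orientation prefers:

```java
// Ask style: anemic object; data is pulled out and processed elsewhere.
class AskStyleAccount {
    private long balance;
    long getBalance() { return balance; }
    void setBalance(long b) { balance = b; }
}

// Tell style: the object is told what to do and guards its own rules.
class TellStyleAccount {
    private long balance;
    TellStyleAccount(long balance) { this.balance = balance; }
    void withdraw(long amount) {
        if (amount > balance) throw new IllegalStateException("insufficient funds");
        balance -= amount;
    }
    long balance() { return balance; }
}

public class TellDontAsk {
    public static void main(String[] args) {
        // Ask style: the overdraft rule lives outside the object, in some "layer",
        // and nothing stops another layer from skipping the check entirely.
        AskStyleAccount ask = new AskStyleAccount();
        ask.setBalance(100);
        if (ask.getBalance() >= 40) ask.setBalance(ask.getBalance() - 40);

        // Tell style: the rule lives with the data it protects.
        TellStyleAccount tell = new TellStyleAccount(100);
        tell.withdraw(40);

        System.out.println(ask.getBalance() + " " + tell.balance()); // both 60
    }
}
```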
We all have, at some point or another, worked on, or even designed, software with the Layered Architecture. Things change, however, and it is time to realize that the Layered Architecture is, perhaps, not as applicable as previously thought. It no longer deserves the status of a de-facto standard for enterprise software designs.
What are the alternatives then? Although there are many alternative patterns, there should be no grand design to rule them all. Each design should reflect the requirements of the particular software and should be built following the concepts and terminology of the business.
Published at DZone with permission of Robert Brautigam. See the original article here.