Architecture for Continuous Delivery
Learn about building a software architecture that enables continuous delivery, like microservices and functional cohesion.
This is part 3 in a series on continuous delivery. In parts one and two, we introduced you to the concept of continuous delivery and how you can prepare your organization before adopting CD practices.
In this article, we're going to discuss architecture for continuous delivery. How do we architect our systems in a way that enables us to continuously deliver value to our customers?
As we discussed in previous articles in the series, continuous delivery is the ability to get all kinds of changes into production, whether they are feature changes, bug fixes, or experiments.
The first architectural challenge of continuous delivery is monoliths, i.e. software that exists as one cohesive unit that must be built and shipped together: all the deployable artifacts have to be brought into production at once. Moreover, monolith code bases tend to have long-lived branches, and code must be merged into a certain "master" branch before it can be shipped. In such a development process, the test cycle often takes weeks or months to complete.
In the previous post, I recommended splitting the code base into multiple repositories even if it is still a monolithic application. In this article, I will take you through a better long-term approach, i.e. splitting into microservices.
Breaking the Monolith
How can we break the monolith down, and how can we ship the product multiple times a day? As I highlighted in the previous post, the goal of continuous delivery is not to ship multiple times a day, but rather to be able to deliver your software at any moment you choose. It might be every week or every two weeks, but the idea is that you can deliver on more or less any day, at any moment of the day.
After analyzing the current monolith architecture, you need to determine how to break the system into smaller parts where each of the parts can be developed and shipped independently.
Going back a few years, SOA was the proposed solution. But it turned out that SOA simply converted the dependencies between components from one form into another: the monolith had dependencies in the code, while SOA had dependencies in service boundaries defined in XML. It didn't really solve the problem and didn't decouple the system in any true sense.
The alternative that everyone proposes now is the microservices architecture!
Be careful here because when we go to microservices as a solution, we are again exchanging one set of problems with another.
Can you move from the monolith to this, as in the picture below? If so, are you aware that you have to make a trade-off? How do we properly manage the system when the microservices architecture probably looks like this?
The answer is to choose an appropriate tradeoff.
Tradeoff and CAP Theorem
Let's take Amazon as an example. Looking at the Amazon website, you can tell there are many services responsible for the data on the product details page: the UI, the product details, the product recommendations, etc. The main product page is a composition of all these services.
The tradeoff is between consistency and availability. For instance, the system is designed so that when something in the catalog needs updating, the UI may temporarily show something different. Additionally, if the recommendation engine is not running 24/7, the recommendations might not show, but you can still buy stuff. The whole system has been designed around the notion of availability.
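This availability-first composition can be sketched as a page assembler that degrades gracefully when a non-critical service fails. The service names and functions below are hypothetical illustrations of the pattern, not Amazon's actual architecture.

```python
def get_product_details(product_id):
    # Core service: must succeed for the page to render at all.
    return {"id": product_id, "name": "Example Widget", "price": 19.99}

def get_recommendations(product_id):
    # Non-critical service: simulate an outage here.
    raise ConnectionError("recommendation service unavailable")

def render_product_page(product_id):
    page = {"details": get_product_details(product_id)}
    try:
        page["recommendations"] = get_recommendations(product_id)
    except ConnectionError:
        # Degrade gracefully: no recommendations, but the customer can still buy.
        page["recommendations"] = []
    return page

page = render_product_page("B000123")
print(page["details"]["name"])   # the page still renders
print(page["recommendations"])   # empty rather than an error
```

The design choice is that only the product-details call is allowed to fail the page; everything else falls back to a harmless default.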
We are always trading something in the existing architecture built and designed around a set of constraints with the new architecture that is built around a 'different' set of constraints.
The constraints that we face when developing a distributed system are captured by the CAP Theorem. It is beyond the scope of this article to cover the theorem in all its details, but here is a diagrammatic summary.
For continuous delivery, the tradeoff is whether we want to ship the product many times a day or deliver large changes infrequently.
Coupling and Cohesion
How can we decouple our systems in a way that it is easy to develop, manage and deploy?
But first, let's look at the coupling and cohesion found in a typical monolithic system, which can be called logical cohesion: code is gathered together as one logical part. For example, everything that has to do with messaging, everything that has to do with data warehousing, and everything that has to do with user interfaces is each brought together as one component.
In a microservices architecture, we strive for what is called "functional cohesion." It means the system is cohesive in the sense that all the different components serve one single task, e.g. the product recommendations in the Amazon example above.
Vertical slicing is defined using "Autonomous Business Capabilities." When you identify these business capabilities, they allow for vertical slices instead of the horizontal slices, i.e. all components belong to a vertical slice compared to the more conventional horizontal layered architecture.
Vertical slicing even shapes teams: a team becomes completely responsible for one autonomous business capability, covering everything from frontend to backend, and often the DevOps part as well, where the team also maintains the live site for the customer.
Another important aspect to keep in mind when creating a microservices architecture is coupling. With high coupling (as in a layered architecture), changes or errors in one part of the system can have adverse effects on another part. For example, in the layered architecture depicted above, if the database goes down, no functionality will work.
What we really want is low coupling, and one of the patterns that is very useful in achieving it is the "Bounded Context." In a nutshell, it is part of Domain Driven Design (DDD). DDD deals with large models by dividing them into different Bounded Contexts and being explicit about their inter-relationships. It is a large topic of its own, so I recommend reading about it in detail elsewhere.
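A minimal sketch of the Bounded Context idea: the "same" real-world product is modeled independently in two contexts, sharing only an identifier. The class and field names here are hypothetical examples, not a DDD-mandated structure.

```python
from dataclasses import dataclass

# In the Catalog context, a product is about presentation.
@dataclass
class CatalogProduct:
    sku: str
    title: str
    description: str

# In the Shipping context, the "same" product is about physical handling.
@dataclass
class ShippingProduct:
    sku: str
    weight_kg: float
    dimensions_cm: tuple

# The contexts share only an identifier (the sku); each owns its own model,
# so a change to catalog copy never ripples into shipping logic.
catalog_item = CatalogProduct("SKU-1", "Example Widget", "A fine widget.")
shipping_item = ShippingProduct("SKU-1", 1.2, (10, 20, 5))
assert catalog_item.sku == shipping_item.sku
```

Keeping separate models per context is what lets each service evolve and deploy on its own schedule.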
There are other ways to decouple system components using events, e.g. a Kafka-based architecture. Each component publishes events onto a Kafka event stream and is not responsible for what happens next. This way, coupling is reduced.
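The publish-and-forget idea can be illustrated without a real Kafka cluster. The in-memory bus below is a deliberately simplified stand-in for a Kafka topic; the topic name and handlers are invented for the example.

```python
from collections import defaultdict

class EventBus:
    """Minimal in-memory stand-in for a Kafka topic."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # The publisher does not know (or care) who consumes the event.
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
bus.subscribe("order-placed", lambda e: audit_log.append(e))
bus.subscribe("order-placed", lambda e: print("notify warehouse:", e["order_id"]))

# The ordering component publishes and moves on; downstream reactions
# can be added or removed without ever touching the publisher.
bus.publish("order-placed", {"order_id": 42})
```

The decoupling shows up in deployment: a new consumer of "order-placed" can ship independently, with no change to the service that emits the event.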
And finally, although not directly an architecture related topic, usage of feature toggling will give great benefit in your journey towards adopting a Continuous Delivery model.
Feature toggling is very useful when you want to keep the production code very close to the development version while the business isn't ready for the feature to be enabled for live customers. This allows "calculated risks" to be taken before full-fledged production usage.
It is important to note that, after a while, managing toggles becomes more and more difficult if there are too many of them. It is recommended to delete toggles once they are no longer needed.
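A feature toggle can be as simple as a flag checked at the branch point. This is a minimal sketch, assuming an in-process flag registry; real systems typically load flags from configuration or a toggle service, and the toggle and function names here are invented.

```python
class FeatureToggles:
    """Minimal toggle registry; real systems often load this from config."""
    def __init__(self, flags=None):
        self.flags = dict(flags or {})

    def is_enabled(self, name):
        return self.flags.get(name, False)

toggles = FeatureToggles({"new-checkout": False})

def checkout(cart):
    # The new code ships to production "dark" until the business flips the flag.
    if toggles.is_enabled("new-checkout"):
        return "new checkout flow"
    return "legacy checkout flow"

print(checkout(["item"]))  # → legacy checkout flow
toggles.flags["new-checkout"] = True
print(checkout(["item"]))  # → new checkout flow
```

Because the flag is read at call time, the rollout (or rollback) needs no redeploy, which is exactly what keeps trunk shippable at any moment.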
We have looked at many potential options for re-architecting your existing product to better receive the benefits of the Continuous Delivery model.
We looked at what problems monoliths bring both in terms of technical challenges and team organization issues. We then looked at breaking the monolith into Microservices, making tradeoffs as necessary, thinking about cohesion and coupling. Finally, we also looked at using feature toggles to pre-release parts of code into production.
Hopefully, that gave you just the necessary guidance and motivation to employ Continuous Delivery as a development model and Microservices as an architectural model.
Are you looking to gain experience with planning, building, monitoring, and maturing a Continuous Delivery pipeline? If you are, you should enroll in our Continuous Delivery Workshop (ICP-IDO). This hands-on course was designed to help you better implement DevOps and Continuous Delivery in your teams and organization. Participants who complete this course will also earn the Implementing DevOps certification from ICAgile. Learn more about this course here.
Published at DZone with permission of Deepak Karanth, DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.