A CxO Guide to Microservices: Do Not Spread Manure on Your Dairy Cows
This guide to the microservices revolution is geared toward C-suite leadership, showing the big picture of microservices for agile development.
This architectural style took the world by storm as soon as cloud computing made its ideas feasible in practice, contributing to the rise, expansion, or downfall of several corporations (see Netflix, Blockbuster, Spotify, Disney) and fundamentally changing the way companies of any size look at and work in IT.
Of course, there are other factors that have been crucial in these transformations, as somebody may point out…
However, it is undeniable that, if done right, microservices enable businesses to reach a superior level of agility that can be decisive in today's market.
I hear you already: "Agility, agile, here we go again! What does that even mean?"
I know! You are totally right! Yet another word that has been used in a million different contexts. Let's define it then!
Quoting Gartner, agility is:
"The ability of an organization to sense environmental change and respond efficiently and effectively to that change."
McKinsey agrees; in their words, it is:
"The ability of an organization to renew itself, adapt, change quickly, and succeed in a rapidly changing, ambiguous, turbulent environment."
In other words, it is what evolution is all about. Remember Darwin?
"It is not the strongest species that survive, nor the most intelligent, but the ones most responsive to change."
That is why microservices are also referred to as an example of Evolutionary Architecture.
However, as voiced by many professionals, microservice architecture has specific drawbacks that need to be addressed. To understand this fully, let's build a quick framework we can use to compare microservices with the good old monolith approach. The idea is simple: define a specific set of properties we expect our product to have, and see how the two architectures score.
The properties are:
- Agility: The overall metric; we have defined this bit already. How fast can we get to production? How quickly can we deliver a fix or a new feature? It is all about reducing time to market while maintaining great quality: few bugs, low-criticality security issues (or none at all), and so on.
- Deployability: How easy is it for a team to deploy resources and functionality? How often can the team deploy? Are large coordination tasks needed, such as cross-team regression tests? How much of this can be automated?
- Testability: How easy is it to test the software artifacts? Can the different parts of the system be tested in isolation? Can tests of different parts of the system be decentralized and run in parallel? Can testing be automated, and if yes to what degree?
- Performance: In this context, we focus mostly on response time, namely how fast can the system answer back to our customers' requests.
- Simplicity: From an overall perspective, how complex are these systems? How many moving parts do we have to take care of?
- Scalability: This is the ability of the system to cope with a continuously changing amount of work. The system should be able to scale up or scale out to cope with the growing load. But it should also be able to scale down or scale in to release resources that are not currently needed.
- Reliability: How much do we trust the system to work properly in a specified environment and for a given amount of time, based on empirical experiments and tests?
Before we continue, let me state the obvious: in the following graphs, higher means better :)
As you can see, the monolith scores well on:
- Performance: In a monolithic architecture, all the code usually runs in a single process (hence the name "monolith"). This means that dependencies on external systems, network communication, and other expensive constructs are all kept to the bare minimum, maximizing performance.
- Simplicity: As said above, we are usually speaking of single-process applications, in which implementing transactions, rollback mechanisms, error handling, tracing, and so on is quite simple and has been done for years. There are dozens of battle-tested libraries for each of these areas that let us literally "stand on the shoulders of giants." However, I must say I have definitely seen very complex monoliths; we can always make something more complicated than it ought to be.
- Reliability: Monolithic applications are known to just work once they are deployed, because of their usually low number of external run-time dependencies, limited network communication, and so on. Indeed, if something goes wrong with a monolithic application, the first question is usually: "What has been changed recently?"
However, it does not do so well on the following properties:
- Deployment: Deploying large monolithic applications is a well-known, huge pain. You need to coordinate what should get into a release and what is not ready; you need to go through several stages of testing involving more and more teams; and you need to get through all sorts of bureaucracy (Change Records, Change Advisory Boards (CABs), Change Management Teams, etc.) to ensure everything is really ready and that everybody really means to change something. And then, BAM! Big Bang Deployment, all at once. You do this four times a year because it is just a very expensive and complicated process.
- Testability: In a monolith, it is so easy to "optimize"! You can jump straight into somebody else's code or somebody else's tables so that you do not have to go through their slow controls and isolation APIs… Testing isolated components in such systems is quite difficult. Usually, the older the monolith and the more development that has been performed on it, the more the soft divisions and APIs in the code are blurred and bypassed. This also happens a lot when a company reorganization occurs, as beautifully explained by Conway's Law:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure.
- Scalability: There is only one way to scale out a monolith, and that is by duplication. This is complicated, brings a huge number of issues with it (especially on the data side), and does not really solve the problem: it is just not granular enough.
- Agility: The overall agility is, of course, seriously impacted. Businesses release quarterly, if they are lucky; end users do not experience features or services gradually and cannot give early feedback, resulting in all the things we continuously experience in our worst nightmares.
I hear you again: "But then how is it possible that software companies achieved such great success before the advent of cloud computing and microservices?" Well, that's a great question! The way I see it, there are two reasons:
- Expectations in the market have changed; that is quite clear for everyone to see. If five years ago it was OK to have a release every three or six months, that approach does not cut it any more. Customers' needs have become so varied and elaborate that it is impossible to get the solution right at the first release. And if the second release is three months from now, you can be sure that your competition will get to your customers before that.
- Software companies have created the tooling needed (software, but also processes, such as the CAB) to partially solve the issues in the areas where this approach suffers most.
These tools and processes have definitely made it better, as you can also see from the following chart:
Just to give some practical examples of what I mean, let's analyze the deployment area. Almost every company has developed its own tools, or bought some, with the objective of improving and speeding up the whole change process or some stages of it. To mention a few commercial off-the-shelf (COTS) products: Microsoft Release Management and TFS, Octopus Deploy, TeamCity, the Atlassian suite, and so on. Moreover, process-wise, this area has historically been heavily centralized and bureaucratized in order to ensure resiliency during and after any change. Just think that ITIL has a full volume on this alone, called Service Transition. The same importance is given in other frameworks such as ASL2, COBIT, and so on.
To improve testability, mocking frameworks have been widely used to improve isolation, and development processes such as Test-Driven Development (TDD) are now widely adopted, all the way to container-based development, because of their great benefits.
So what exactly are microservices? As Sam Newman puts it:
"Microservices are small, autonomous services that work together."
Cool, so we are speaking about a distributed system made of small services… but how small should a microservice be? There are many ways to answer this question, one more abstract than the next. The one rule I think we should definitely follow is Robert C. Martin's Single Responsibility Principle:
"Gather together those things that change for the same reason, and separate those things that change for different reasons."
Just by following this principle, we define the size of a microservice based on the type and complexity of the system at hand, which, from my point of view, makes a lot of sense, and at the same time ensures that a fundamental property of microservices is respected: each microservice must be independently deployable. This will always be true, since all the tightly-coupled code that changes together is grouped in the same microservice (or at least it should be).
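As a toy sketch of this grouping (all service and function names here are invented for illustration): pricing and discount rules change together for the same business reason, so they belong in one independently deployable service, while invoicing changes for different reasons and lives elsewhere.

```python
# pricing_service/ -- pricing and discounting change together,
# so they live in the same independently deployable unit.
def quote(base_price: float, discount_pct: float) -> float:
    """Apply the current discount rule to a base price."""
    return round(base_price * (1 - discount_pct / 100), 2)


# invoicing_service/ -- deployed independently; in a real system it would
# call the pricing service over an API rather than import it.
def invoice_line(sku: str, quoted_price: float) -> dict:
    """Build one line of an invoice from an already-quoted price."""
    return {"sku": sku, "amount": quoted_price}


# A change to discount logic redeploys only the pricing service.
assert quote(100.0, 10.0) == 90.0
```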
We have just scratched the surface of microservices here. People have written entire books about them, so if you are more interested in the architecture, I advise you to read further; it is extremely interesting!
Anyway, let's move on to see how our microservice architecture scores.
As you can see, that's almost the opposite situation of the monolith. Since this architecture is focused on maximizing agility, it does score high on the following properties:
- Deployment: Microservices are simple and independently deployable units. This means that, in general, deployment of each single component is much easier just out-of-the-box. However, what we do not get out-of-the-box is the environment to exploit these properties. In order to allow teams to deliver at unprecedented speed, a set of processes and practices known as Continuous Delivery have been defined. This does not mean that everything is deployed into production every time something gets changed. It means that every change is proven to be deployable at any point in time. Tools like Jenkins, Go.CD, and many others incorporate these ideas and offer a solid base for development teams to achieve maximum agility.
- Testability: Maintaining quality is crucial for any business, and at this development speed, the job gets infinitely harder. That is exactly where microservices excel. Since each service is far smaller than the full monolith, fully testing it and achieving very high coverage with automated testing is very feasible. It is in this scenario that the practice of Continuous Integration ensures that very high quality is maintained in our deliverables at any point in time, through automated testing in the deployment pipelines. However, some other serious issues need to be addressed: microservices are independent, but they do cooperate, and a bad change can cause what is known as a Cascading Failure. Contract-based testing is an example of how to catch these situations early in the development process and avoid a failure in production.
- Scalability: The fact that systems are divided into smaller independent components, the microservices, implies that these components can scale completely independently from one another. So if, for instance, your new application goes viral over social media, your engineers will have to scale up/out the microservices used by the signup flow and not everything else. If you want to reach awesomeness, then you can even remove the human factor through tools that allow business metrics and analytics data from the production environment to actually trigger scaling operations automatically, without having any human intervention.
- Agility: I don't think any further comment needs to be added here. Agility is thereby maximized.
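To make the contract-based testing idea above concrete, here is a minimal, framework-free sketch in Python (real projects typically use a dedicated tool such as Pact; all field and function names are illustrative):

```python
# The contract the consumer team publishes: the fields it relies on,
# with the types it expects.
CONSUMER_CONTRACT = {
    "user_id": int,
    "email": str,
    "active": bool,
}


def provider_response() -> dict:
    """Stand-in for the provider's real /users/{id} response."""
    return {"user_id": 42, "email": "a@example.com", "active": True, "plan": "pro"}


def satisfies_contract(response: dict, contract: dict) -> bool:
    """The provider may add new fields freely, but every contracted field
    must be present with the expected type."""
    return all(
        field in response and isinstance(response[field], expected)
        for field, expected in contract.items()
    )


# Run in the provider's CI pipeline: a breaking change to the response
# shape fails here, long before it can cascade in production.
assert satisfies_contract(provider_response(), CONSUMER_CONTRACT)
```

The key design choice is that the consumer, not the provider, defines what "compatible" means, so each team can verify compatibility without a slow, cross-team integration environment.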
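The metric-driven scaling mentioned under Scalability can also be sketched in a few lines: the desired replica count of a single service is derived from a load metric, with no human in the loop (the capacity figure and bounds are invented for illustration):

```python
import math


def desired_replicas(requests_per_sec: float,
                     capacity_per_replica: float = 100.0,
                     min_replicas: int = 2,
                     max_replicas: int = 50) -> int:
    """Compute how many instances of one service the current load needs,
    clamped to safe lower and upper bounds."""
    needed = math.ceil(requests_per_sec / capacity_per_replica)
    return max(min_replicas, min(max_replicas, needed))


# A quiet day on the signup service vs. the app going viral: only this
# one service scales out; the rest of the system is untouched.
assert desired_replicas(150) == 2
assert desired_replicas(4000) == 40
```

Real platforms (for example, a Kubernetes autoscaler) implement exactly this kind of loop against production metrics.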
However, this approach has some serious pain points, namely:
- Performance: This is the area where microservice architecture suffers the most. Since completing most business operations now may involve several services communicating over the network, performance may be seriously impacted. However, it is not as bad as you may imagine. I have worked on large microservice systems with hundreds of different microservices (reaching almost 1,000 services running in production when completely scaled out), and the average response time was always under 200ms. This was thanks to several factors, the most important of which are:
- The adoption of simple, lightweight protocols and middleware, pushing for smart endpoints and dumb pipes. This means avoiding huge, complex black-box ESBs and opting instead for simpler approaches: RabbitMQ, ZeroMQ, or sometimes plain webhooks over HTTPS work just as well.
- Implementing asynchronous business operations whenever it is possible and makes sense. Sometimes, answering your customer very quickly to say that the requested operation is being performed may be a much better user experience than showing a spinner or failing with a timeout.
- Simplicity: Distributed systems are not simple, and microservices are no exception. If not correctly addressed, this architecture will definitely bring huge pains. As I said above, the number of microservice instances in a real production application can and will be in the order of thousands. Discovering whether or not there is an issue in the system is no longer trivial; it is just not humanly possible to manually analyze the logs from all these services. I advise you to tackle this from the very beginning and think about solutions for centralized logging and monitoring, so that your operations teams and feature teams have an overview of the systems they are responsible for. Moreover, you can remove even more human intervention by adding a layer of artificial intelligence for log analysis, alerts, and maybe even automatic recovery in some cases.
- Reliability: Exploiting the independence of microservices, we can improve system reliability just by adopting some specific deployment processes. Just to mention some examples, Canary Releases, Rolling Deployments/Updates and Blue/Green Deployments are some of the patterns that I really advise you to look into because they can seriously lower the risk related to changes. Of course, no matter how much we try to avoid breaking the system, in some cases, it will happen, maybe even because of factors that are independent of us. There are several constructs and patterns that have been implemented successfully in many libraries and frameworks for microservice development that improve system reliability. Circuit Breaker, Supervisor, and Message Flow Monitor are examples that feature teams and architects should definitely look at.
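The asynchronous pattern mentioned under Performance can be sketched as follows. This is a framework-free illustration with invented names; a real service would use a web framework and a message queue such as RabbitMQ instead of a thread:

```python
import threading
import uuid

TASKS = {}  # task_id -> status, shared with a polling endpoint


def perform_export(task_id: str) -> None:
    # ... the slow business operation would run here ...
    TASKS[task_id] = "done"


def handle_export_request() -> dict:
    """Equivalent of answering HTTP 202 Accepted with a URL to poll,
    instead of making the customer wait (or time out)."""
    task_id = str(uuid.uuid4())
    TASKS[task_id] = "in_progress"
    threading.Thread(target=perform_export, args=(task_id,)).start()
    return {"status": 202, "task_id": task_id, "poll": f"/exports/{task_id}"}
```

The customer gets an immediate, honest answer ("we're on it, check here"), and the slow work proceeds out of the request path.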
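As one example of the reliability patterns above, here is a minimal circuit breaker sketch. Production systems would normally rely on an established library rather than hand-rolling this; the sketch only shows the idea:

```python
import time


class CircuitBreaker:
    """After `max_failures` consecutive failures the circuit opens and
    calls fail fast, giving the downstream service `reset_after` seconds
    to recover before a trial call is allowed through again."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

Failing fast like this stops one slow or broken dependency from tying up threads across the whole call chain, which is exactly how cascading failures start.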
Some parts of your systems or applications will surely need to excel in performance or reliability and be fully optimized in those areas, because, for your business, they are as important as life-support systems or nuclear-power control software.
For all the rest, go for microservices. Go for it as early as possible and, please, do not start with a monolith.
I hope that by the time you reached this section, you agree with me that developing on a microservice system and developing on a monolith are quite different. Some tools, approaches, and processes generally improve development on both architectures; to mention some examples: TDD, the practice of containerizing your application, and so on. Others, however, are specific to one architecture. Just as it is extremely dangerous to adopt CI/CD practices for large monolithic applications, it is completely useless, and even harmful in some cases, to force centralized Release Management practices onto the delivery process of a microservice-based application.
If you will allow me a metaphor: it is like milking cows and growing grain. They are part of the same world, farming, but they differ quite a lot. You can invest in improving the quality of the water used on your farm; this helps both your dairy cattle and your grain. If you spread manure on a field of wheat at the right time of year, you can maximize your harvest and get rid of your manure in one go.
Do not spread manure on your dairy cows. It just does not help.
Published at DZone with permission of Angelo Agatino Nicolosi.