# The Maths of Complexity

### Why Microservices reduce the complexity of our software systems and how far we can push them.


We all know that creating big, complicated monoliths is a bad idea and that microservices are the way to go, but why? Why are monoliths 'bad'? Why are microservices 'better'? Is there an optimum number of services that we should aim for when creating a system? This article takes a mathematical journey into complexity to uncover why microservices make sense, why monoliths are the great entropy monsters we think they are, and what happens if we try to minimise complexity in our applications.

Microservices are a great way to structure software: they support Domain-Driven Design and allow us to align our deployment model with our delivered business value. Large monoliths, on the other hand, can be very complex, with a high degree of coupling between the parts. Sometimes we find it can be better to split them apart, creating subsystems or, if we continue down this path, microservices. These smaller parts are easier to manage, fix and deploy. Practically speaking, the smaller parts seem less complex overall and are less difficult to handle.

Let's try to turn these imprecise statements into something more concrete. Is there a way in which we might compute the complexity of our applications, and can we see what happens to that complexity 'value' when we divide a monolithic application up into smaller parts?

## The Maths Section

Let us start by considering a monolith comprised of 15 parts. These parts might be classes, components, services or files. Somehow or other we can describe the monolith as being comprised of these parts. The figure below shows a hypothetical monolith comprised of 15 parts. It's not a real application, so don't pay too much attention to the details of the parts or their connections.

Figure 1. A monolithic application comprised of 15 parts.

If we want to consider the complexity of the monolith we need to consider how complex each of the individual parts is. We might write the complexity of part $p$ as

$$c_p$$

It doesn't matter too much what that complexity actually is. It might be the number of lines of code; it might relate to the cyclomatic complexity of the worst algorithm; it might be a measure of the number of classes and their inter-relationships. Let's just say that somehow or other we come up with a means to measure the complexity of a part, and $c_p$ is our label for it.

What then is the complexity of the monolith? We might imagine that it is the complexity of all the parts summed together:

$$C_{parts} = \sum_{p=1}^{15} c_p$$

But there is more to it than this. The parts don't exist in isolation: they interact with one another, and they are to some degree coupled with one another. (This is a monolith, after all!) That interaction also carries a level of complexity. We might label that as:

$$i_{1,2}$$

This expression is for the complexity caused by the interaction between part 1 and part 2. Again, the details of the complexity are of less concern. In this case, it might relate to the number of methods called in the Shipping Scheduler from the Packaging Manager (if these are parts 1 and 2), or it might include the number of parameters in the methods. It may relate to the number of ways in which the methods can be called or the nature of the protocol used if the parts are more service-like. Somehow or other there is complexity and we want to capture it.

In general, we might say that the complexity due to all the interactions between the parts in the monolith can be written as:

$$C_{interactions} = \sum_{p=1}^{15} \sum_{\substack{q=1 \\ q \neq p}}^{15} i_{p,q}$$

In reality, we might not need to treat the complexity of part 1 interacting with part 2 as different from that of part 2 interacting with part 1, so we could remove the duplication. In a real system, we might find many parts are independent of each other, and in that case the complexity for those particular interactions would be zero.

In a real monolith there may be 'secondary effects' too: if the complexity of the interaction between part 1 and part 2 depends on, or is related to, the complexity of how part 2 interacts with part 3, then there may be other aspects of the system complexity which we have not covered. We'll leave these aside for now and stick with our naive interpretation. The complexity of the monolith, then, will be the sum of the individual part complexities plus the sum of the interaction complexities:

$$C_{monolith} = \sum_{p=1}^{15} c_p + \sum_{p=1}^{15} \sum_{\substack{q=1 \\ q \neq p}}^{15} i_{p,q}$$

This is our overall measure of system complexity.
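To make the bookkeeping concrete, here is a minimal Python sketch of that sum. The part complexities and interaction values are invented purely for illustration:

```python
# Toy model of the sum above: total complexity is the sum of the
# individual part complexities c_p plus the sum of the pairwise
# interaction complexities i_{p,q}.
def system_complexity(part_complexities, interactions):
    """interactions[(p, q)] holds the complexity of part p calling
    part q; pairs that never interact are simply absent (i.e. zero)."""
    parts_term = sum(part_complexities)
    interaction_term = sum(interactions.values())
    return parts_term + interaction_term

# A hypothetical 3-part system: part complexities 2, 3 and 1, with
# part 0 calling part 1 (complexity 1) and part 1 calling part 2 (2).
print(system_complexity([2, 3, 1], {(0, 1): 1, (1, 2): 2}))  # 6 + 3 = 9
```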

Now, what happens to this complexity if we divide the monolith up into subsystems? (This might be the first step on the journey to a fully microservice-based architecture). In dividing the monolith we look for collections of parts that are more closely coupled to one another than they are to the rest. In a monolith with high coupling this might not be an easy task but imagine that we can split the example monolithic application above into 3 subsystems. We still have the same number of parts but we will organise these into distinct subsystems. This is shown in the figure below.

Figure 2. Splitting the monolithic application into 3 distinct subsystems.

Now, can we compute the complexity of this new set of systems and compare it with the original monolith? Surely. If we consider the complexity of a single subsystem it is, as before, the complexity inherent in each part plus the complexity from the interactions between the parts (if there are $P$ parts in the subsystem):

$$C_{ss} = \sum_{p=1}^{P} c_p + \sum_{p=1}^{P} \sum_{\substack{q=1 \\ q \neq p}}^{P} i_{p,q}$$

exactly as before. We can repeat this for each of the three subsystems. To compute the overall complexity we need to take a step back and consider the interactions between the subsystems as well. We have already computed their individual internal complexity; now we need to include the interactions. This can be achieved in the same way we did it earlier for the parts:

$$C_{interactions} = \sum_{a=1}^{3} \sum_{\substack{b=1 \\ b \neq a}}^{3} I_{ss_a,ss_b}$$

(if we use 'ss' as a shorthand notation for subsystem, so that $I_{ss_a,ss_b}$ is the complexity of the interaction between subsystem $a$ and subsystem $b$). Therefore, to compute the overall complexity of the 3 subsystems together, we sum the complexities of each subsystem and the interactions between them:

$$C_{system} = \sum_{a=1}^{3} C_{ss_a} + \sum_{a=1}^{3} \sum_{\substack{b=1 \\ b \neq a}}^{3} I_{ss_a,ss_b}$$
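The split system's total can be sketched the same way: sum each subsystem's internal complexity, then add the subsystem-to-subsystem interaction terms. The internal complexities (30, 25, 20) and the per-pair interaction cost (2) below are invented for illustration:

```python
# Total complexity after a split: the subsystems' internal complexities
# plus the complexity of the interactions between subsystems.
def total_complexity(subsystem_complexities, ss_interactions):
    """ss_interactions[(a, b)] is the complexity of subsystem a
    talking to subsystem b; absent pairs contribute nothing."""
    return sum(subsystem_complexities) + sum(ss_interactions.values())

# Three subsystems, every ordered pair interacting with complexity 2:
pairs = {(a, b): 2 for a in range(3) for b in range(3) if a != b}
print(total_complexity([30, 25, 20], pairs))  # 75 + 6*2 = 87
```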

To see the benefit of splitting the original monolith, all we need to do is compare the complexity before and after the split. But first, let's think things through a little. Before the split, each part interacted with many other parts. After the split, some of those parts sit within the same subsystem and can be interacted with directly, while others are in other subsystems and can only be reached via the interfaces between subsystems. The communication between parts therefore needs to be modified in the new system. As each part is no longer free to communicate directly with every other part, there is an additional complexity overhead compared to what we had originally. We might guess that this scales roughly as the square of the number of subsystems (if we consider that each subsystem needs to marshal messages to and from each other subsystem). This can form part of the 'I' interaction term we have above; we just need to remember that this value should take the additional processing into account.

## The Results Section

Okay, enough of the maths; let's turn this very abstract discussion into something more tangible and get some numbers out of all this. We'll need to make some broad statements here. In reality, we would measure our software beforehand to establish how coupled the various parts are, and we would need a robust measure for the complexity of the various parts (or at least for the interactions between parts, as we'll see).

Let's say then, for now, that each part in our original monolith interacts with every other part, and that the complexity of each interaction is the same. That is a gross simplification of any real system, of course, but it should suffice for demonstration purposes. The complexity of each individual part we can ignore: each part will have more or less the same complexity before and after the split, so if we are only considering the change in complexity these terms cancel out. To put this another way: the complexity change between the monolith and the collection of subsystems is driven by the interactions between the parts rather than by the intrinsic complexity of each part.

If we assign a number to the complexity of the interactions between parts as '1', and the complexity of the interactions between subsystems as '2', then we can plug these numbers into the above equations. For the monolith, treating each of the 15 parts as interacting with all 15 parts (a deliberately extreme, fully coupled case), we find:

$$C_{monolith} = 15 \times 15 \times 1 = 225$$

and the complexity of the split system, grouping the 15 parts into subsystems of 6, 5 and 4 parts, will be

$$C_{split} = (6^2 + 5^2 + 4^2) \times 1 + 3^2 \times 2 = 77 + 18 = 95$$

so the *difference* in complexity between the two systems is 225 - 95 = 130. We can see that reshaping a single monolith into 3 subsystems reduces the complexity by 130 'points'. What those points mean in reality depends on how we assign complexity scores to the interactions; it gives us, however, a means to measure the change. We can see that the 3-subsystem approach is simpler and less complex, and we can, of course, compare this figure with what we obtain from different subsystem designs.

For example, if we increase the number of subsystems from 3 to 4 (say, subsystems of 4, 4, 4 and 3 parts) we find that the complexity is now

$$C_{split} = (4^2 + 4^2 + 4^2 + 3^2) \times 1 + 4^2 \times 2 = 57 + 32 = 89$$

and we can see that it is further reduced with 4 subsystems. If we call the '89' in this last expression the 'extra complexity', above and beyond the complexity from the internals of the parts themselves, then we can plot how this value changes with the number of subsystems we break our monolith into. This is shown below.

Figure 3. The extra complexity in our application as we vary the number of subsystems
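The numbers above can be reproduced under this section's toy assumptions: every ordered pair of parts within a subsystem costs 1, and the marshalling overhead is 2 times the square of the number of subsystems. The subsystem sizes (6, 5, 4 and then 4, 4, 4, 3) are an assumption chosen so the totals match the worked example:

```python
# Toy 'extra complexity' model: internal part-to-part interactions at
# cost 1 per ordered pair, plus a marshalling overhead that scales as
# the square of the number of subsystems at cost 2.
def extra_complexity(sizes, part_cost=1, marshal_cost=2):
    n = len(sizes)
    internal = part_cost * sum(s * s for s in sizes)
    between = marshal_cost * n * n if n > 1 else 0  # no overhead for a monolith
    return internal + between

print(extra_complexity([15]))          # the monolith: 15 * 15 = 225
print(extra_complexity([6, 5, 4]))     # 3 subsystems: 77 + 18 = 95
print(extra_complexity([4, 4, 4, 3]))  # 4 subsystems: 57 + 32 = 89
```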

There are several quite interesting things that appear when we look at the results like this:

- if we break a monolith into subsystems then, in general, we will see a marked decrease in the overall complexity.
- the greatest benefit occurs when we break a monolith down into a small number of subsystems; as we increase the number of subsystems, the additional benefit gets smaller and smaller.
- there comes a point at which the optimal number of subsystems is reached; in the figure above this lies at around 4 subsystems.
- if we increase the number of subsystems beyond this, the overall system becomes more and more complex until eventually we end up back where we started, or even worse.

Now, this example was fairly contrived; we assumed extreme levels of coupling between parts that probably wouldn't exist in a real system, and we were very superficial in our assessment of the complexity of the interactions between parts and systems. Nevertheless, I think we can see with some clarity why breaking a monolith down into subsystems, or eventually microservices, leads to a reduction in the overall complexity of the system, and that in turn gives us a better, more maintainable system architecture. Indeed if we consider a far more complex system comprised of 100 parts and repeat the exercise we get the graph shown below, with complexity minimised with a division into 15 subsystems.

Figure 4. The complexity of a 100-part monolith as we divide it into subsystems
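Sweeping the subsystem count for a 100-part system under the same toy model (parts divided as evenly as possible) reproduces the shape of this curve. The exact position of the minimum depends on the assumed marshalling cost; with these numbers it lands in the mid-teens, in line with the figure:

```python
# Extra complexity for n_parts split into n_subsystems, dividing the
# parts as evenly as possible, with the same toy costs as above.
def complexity_for_split(n_parts, n_subsystems, part_cost=1, marshal_cost=2):
    base, extra = divmod(n_parts, n_subsystems)
    sizes = [base + 1] * extra + [base] * (n_subsystems - extra)
    internal = part_cost * sum(s * s for s in sizes)
    between = marshal_cost * n_subsystems ** 2 if n_subsystems > 1 else 0
    return internal + between

costs = {n: complexity_for_split(100, n) for n in range(1, 31)}
best = min(costs, key=costs.get)
print(best, costs[best])  # the minimum falls in the mid-teens
```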

"Complexity is the enemy of quality", they say, and we will gain a lot in this regard by working to reduce our monolithic applications to a smaller set of independent subsystems or services. But there are limits to this and we need to bear in mind that the aim is to reduce the complexity overall, and so we should steer clear of splitting the parts purely for the sake of it.

Opinions expressed by DZone contributors are their own.
