It kind of feels like the hype around microservices is slowly coming down to earth. Our industry is starting to realize that a system following the architectural paradigms behind microservices can't be created by simply exposing some HTTP interfaces on top of existing components. We do seem to have agreement on the necessity of service-optimized infrastructures, cultural and organizational changes, and, last but not least, an outer architecture or orchestration layer for these systems. The parts that many Java developers still seem to struggle with are the concrete system architecture and the fact that microservices are nothing other than distributed systems. Unfortunately, it's exactly these knowledge areas that decide the success or failure of your project. For a little background, I suggest reading the wonderful InfoQ interview with Uwe and Adrian done by Daniel Bryant.
Why Microservices Again? Can't I Just Be Happy and Write EJBs and Servlets?
The key idea with microservices is supporting their independence from the rest of the application landscape and quick evolvability. Additionally, they should scale independently and require fewer resources than application server-based applications. In a world with constantly changing business requirements and a growing number of application clients, centralized infrastructures are getting way too expensive to operate and scale towards unpredictable load or load peaks. If we were stuck with application servers, we wouldn't have Netflix, Twitter, or Amazon. So... no. You can't just stay where you are.
Microservices Are Distributed Systems. What's so Special About Them?
The original definition of a distributed system is: "A distributed system is a model in which components located on networked computers communicate and coordinate their actions by passing messages." (Wikipedia) And this is exactly what happens in microservices-based architectures.
The individual services are deployed to cloud instances, physically running somewhere, and they exchange messages over the network. This is a big difference from how we used to build centralized applications. Instead of having a bunch of servers in our datacenter that handle all kinds of synchronization, transactions, and failover scenarios on our behalf, we now have individual services that evolve independently and aren't tied to each other. There are fundamental challenges unique to distributed computing, among them fault tolerance, synchronization, self-healing, backpressure, network splits, and much more.
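To make one of those challenges concrete, here is a minimal sketch of fault tolerance at the call site: retrying a flaky remote call with exponential backoff so that a transient network failure doesn't immediately propagate. All names here (`Retry`, `withBackoff`, the delay values) are illustrative, not from any particular framework.

```java
import java.util.concurrent.Callable;

// Illustrative sketch: retry a flaky remote call with exponential backoff.
// Real frameworks add jitter, timeouts, and circuit breaking on top of this.
class Retry {
    static <T> T withBackoff(Callable<T> call, int maxAttempts, long baseDelayMillis)
            throws Exception {
        Exception last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.call();
            } catch (Exception e) {
                last = e;
                // Wait baseDelay, 2x, 4x, ... before the next attempt.
                Thread.sleep(baseDelayMillis << attempt);
            }
        }
        throw last; // give up after maxAttempts failures
    }
}
```

Note that even this toy version encodes a distributed-systems decision: how long to keep retrying before you accept the failure and let someone else handle it.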
Aren't Distributed Systems What Everybody Calls Reactive Systems?
It's more complicated than that. And honestly, there is a lot going on with the word "reactive" itself these days. To build an application or system out of individual microservices, you need to use a set of design principles that make them responsive, resilient, elastic, and message-driven. If that sounds familiar, you are probably right: that's the definition from the Reactive Manifesto.
A distributed system that implements the four traits of the Reactive Manifesto is what should be called a Reactive System. You can read more about the design principles of Reactive Microservices Systems in Jonas' book. The Lagom framework is built on those principles, but let me be clear, you don't necessarily need a specific framework or product to build these kinds of applications. Some of them just make you a lot more productive and your operations more effective. Hugh McKee has another free book on design principles for Actor-based systems.
What Are the Options to Build a Microservices-Based System?
I personally see two different approaches to solving the problems related to microservices today. The first is to push them down into the orchestration layer of datacenter or cloud systems like DC/OS, OpenShift, Cloud Foundry, and the like. The second is to handle them natively at the application or framework level (Akka, Vert.x, et al.).
One Container per Service, or Why an Anaconda Shouldn't Swallow a Horse
Let's look at the first approach in a little more detail. Write a microservice, package it together with its runtime in a little container, and push it to the cloud. As we're all full-stack DevOps developers these days, it's easy to create the metadata needed for cloud-based runtimes. Thanks to my bootiful service, all relevant monitoring information is already exposed, and I can easily detect failing services and restart them. And this, for sure, works. You can even use a full-blown application server as a microservice runtime. Plus, there are a lot of magic frameworks (NetflixOSS) that help with fighting the distributed-systems challenges.
The drawback for me, personally, is the tight coupling with the infrastructure. Your system won't be able to run on anything but the platform of choice. Furthermore, this approach suggests that containers alone solve all the problems in the microservices world. Looking back at the Reactive Manifesto, this type of system won't help you with the requirement to use messaging between services.
Microservices Without Containers? That's Peanut Without Butter!
True. Containers do one thing very well: they package the complete stack in a controllable way into a deployable unit. They are isolation mechanisms at the infrastructure level. And having a container standard might actually be a good thing. So, keep your containers. But you need more. The key to building resilient, self-healing systems is to allow failures to be contained, reified as messages, sent to other components (that act as supervisors), and managed from a safe context outside the failed component. Here, being message-driven is the enabler: moving away from the strongly coupled, brittle, deeply nested synchronous call chains that everyone learned to suffer through… or ignore. The idea is to decouple the management of failures from the call chain, freeing the client from the responsibility of handling the failures of the server. No container or orchestration tooling will help you integrate this. You are looking at event sourcing. The design concepts of an event-driven architecture, using event sourcing, align well with microservices architecture patterns.
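To illustrate the event-sourcing idea in miniature: instead of updating state in place, every change is recorded as an immutable event, and the current state is derived by replaying the event log. The `Account` entity and its event names below are hypothetical, chosen only to keep the sketch small.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event-sourcing sketch (illustrative names): state is never mutated
// directly; commands validate and append events, and current state is a fold
// over the event log.
class Account {
    interface Event {}
    record Deposited(long amount) implements Event {}
    record Withdrawn(long amount) implements Event {}

    private final List<Event> log = new ArrayList<>();

    // Command handlers validate, then append events.
    void deposit(long amount) { log.add(new Deposited(amount)); }
    void withdraw(long amount) {
        if (balance() < amount) throw new IllegalStateException("insufficient funds");
        log.add(new Withdrawn(amount));
    }

    // Current state is derived by replaying every recorded event.
    long balance() {
        long balance = 0;
        for (Event e : log) {
            if (e instanceof Deposited d) balance += d.amount();
            else if (e instanceof Withdrawn w) balance -= w.amount();
        }
        return balance;
    }

    List<Event> events() { return List.copyOf(log); }
}
```

In a microservices system, that event log is exactly what gets published as messages to other services, which is why the pattern fits the message-driven trait so naturally.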
Reactive Programming, Systems, Streams: Isn't That All the Same?
Reactive has become an overloaded term and is now being associated with several different things to different people—in good company with words like “streaming,” “lightweight,” and “real-time.” Reactive Programming offers productivity for developers—through performance and resource efficiency—at the component level for internal logic and dataflow management. Reactive Systems offers productivity for architects and DevOps—through resilience and elasticity—at the system level for building cloud-native or other large-scale distributed systems. You should really take the time to read how Jonas Bonér and Viktor Klang explain the differences between them.
Where Can I Learn More About How to Design Reactive Microservices?
James Roper did a great talk at last year's Reactive Summit, taking a hands-on look at how the architecture of a system, including the flow of data, the types of communication used, and the way the system is broken down into components, will need to change as you decompose a monolith into a reactive, microservice-based system.
I did a talk at the CJUG about CQRS for Java developers, which gives you an intro. If there are particular topics you are interested in, please let me know in the comments.
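The core of CQRS can be sketched in a few lines: the write side handles commands and emits events, while the read side consumes those events into a query-optimized view, so the two models can be stored and scaled independently. Everything below (`OrderCqrs`, the command and view classes) is an illustrative toy, not code from the talk.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal CQRS sketch with hypothetical names: commands and queries go
// through separate models connected only by events.
class OrderCqrs {
    record OrderPlaced(String orderId, String product) {}

    // Write model: validates commands and emits events.
    static class CommandHandler {
        OrderPlaced placeOrder(String orderId, String product) {
            if (product.isEmpty()) throw new IllegalArgumentException("product required");
            return new OrderPlaced(orderId, product);
        }
    }

    // Read model: a denormalized projection updated from events,
    // shaped purely for fast queries.
    static class OrderView {
        private final Map<String, String> productByOrder = new HashMap<>();
        void on(OrderPlaced e) { productByOrder.put(e.orderId(), e.product()); }
        String productOf(String orderId) { return productByOrder.get(orderId); }
    }
}
```

In a real system the events would travel over a message broker and the projection would be eventually consistent; the separation of the two models is the point.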
More Reading for You
- Jonas Bonér & Viktor Klang Explain Reactive Programming vs Reactive Systems in 20 min.
- Konrad recently did a webinar about Reactive Integrations in Java 8 with Akka Streams, Alpakka, and Kafka.
- The Basics Of Reactive System Design For Traditional Java Enterprises.
- Duncan DeVore on Reactive Architecture, Design, And Programming In Less Than 12 Minutes.