Time to Step up Your Microservices
Learn more about the monolith design and microservices.
Despite the hype around microservices, chances are you haven't adopted them yet. We know the benefits, but the barriers to getting started are just too high.
The fact is, architecting applications for scale is difficult. You have to choose the right database, the right frameworks, and in many cases, the right programming language.
Things don't end there. You also need to zero in on a cloud vendor and choose the right deployment strategy. That's a lot of hard work.
The Monolith Design (A Lone Warrior)
Traditionally, we have been designing monoliths. Here, we have one huge codebase. Everything is built using the same language and the same framework.
This makes it super easy to build the application. Everything is just a function call away. The framework does the heavy lifting of building the APIs. We are used to the ORMs available. Development is well under our control.
Everything is awesome...until we have to scale.
Scaling a monolith is difficult. The cost we pay for ease of development is agility. Pushing out updates and new features in a monolith design is where things break. A minor bug can bring the entire app down.
Microservices (The Chaotic Team)
Microservices solve the agility-at-scale problem by breaking down the big fat monolith into smaller modular counterparts. In essence, every feature or functionality of your app now runs as a completely different microservice. Each microservice is a server on its own running in a separate process.
The most obvious benefit of microservices is that the deployment cycle of each microservice is now independent.
This means you can have a separate delivery platform set up for every microservice customized to its needs.
Another benefit is that you no longer need to stick to a single language or framework. This opens the door to building each microservice in the language best suited for the job. You can use Python for data science workloads and something like Go or Elixir for I/O-bound services.
How do you make sure all microservices are working harmoniously together? Can they even find each other? What about breaking API changes?
Thinking in terms of microservices is difficult. Deciding how to break up your app is not an easy task either. It takes some experience to get this right. And if you think you have mastered this art, try debugging your microservices. It’s a nightmare.
The Workaround
Part of the debugging problem can be solved with a powerful monitoring setup. A service mesh like Istio or Linkerd can help here. They also let you set up a control plane to enforce tight access controls in a zero-trust environment.
These service meshes give you great control over, and observability into, your microservices. You can zero in on precisely the links that are failing. Other useful metrics, like latency and request counts, are trackable as well.
But these service meshes need an orchestration platform like Kubernetes to run on.
So how many things do we need to work with now? A little less than a million.
Wait! I’m a Bit Lost Now
So am I. Let’s go back to where we started: agility at scale.
All we wanted was to break down our monolith into smaller pieces. That was it. Microservices just seem to make that dream a reality.
What if we had a way to get the agility of microservices without having to manage the chaos that comes with it? Can we make things easier?
To make it easy to follow, I’ll list out the features we need.
- An easier way to break down the monolith. Maybe start with one and break it down when needed.
- Everything should be just a function call away. Nobody wants to waste time on getting the networking right.
- Ability to deploy each piece independently. Potentially have them in different languages, bound together with a consistent API.
The Functions Mesh (Organized Chaos)
What if we could break down each operation in our services into functions? No, I’m not talking about a function as a service (like AWS Lambda). We are still talking microservices.
No matter which language or framework you use, each endpoint is handled by a function, right? So why not expose these functionalities directly as functions?
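To see the point, compare the plain function with the HTTP plumbing that usually wraps it. A hedged Python sketch (the function and the Flask-style route in the comment are illustrative, not a real functions-mesh API):

```python
# The business logic is a plain function, independent of any transport.
def get_user(user_id: int) -> dict:
    # In a real service this would hit a database; hardcoded here.
    return {"id": user_id, "name": "Ada"}

# In a typical microservice, the same logic hides behind routing,
# serialization, and HTTP handling, e.g. with a Flask-style framework:
#
#   @app.route("/users/<int:user_id>")
#   def get_user_endpoint(user_id):
#       return jsonify(get_user(user_id))
#
# A functions mesh lets callers invoke get_user() directly,
# skipping the HTTP boilerplate entirely.
print(get_user(42))
```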
What Does All of This Mean?
You don't need to fire HTTP requests to consume the APIs exposed by your microservices. All you need to do is call a function on the microservice directly from your code. All the networking, like load balancing and service discovery, is completely taken care of.
Since you expose functionality as plain functions, you can start with a monolith and break things apart later. It's as easy as moving functions from one directory to another.
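The idea can be sketched as a registry that dispatches calls by function name, so the caller's code is identical whether the function runs in-process (monolith) or in another service. Everything below is an illustrative toy, not a real functions-mesh implementation; a real mesh would add service discovery and load balancing behind the same `call` interface:

```python
# Toy sketch of a functions mesh: callers invoke by name and never
# know whether the implementation is local or remote.
class FunctionsMesh:
    def __init__(self):
        self._functions = {}

    def register(self, name):
        """Decorator that exposes a function under a mesh-wide name."""
        def decorator(fn):
            self._functions[name] = fn
            return fn
        return decorator

    def call(self, name, *args, **kwargs):
        # A real mesh would route this over the network when the
        # function lives in another service; here it's a dict lookup.
        return self._functions[name](*args, **kwargs)

mesh = FunctionsMesh()

@mesh.register("billing.charge")
def charge(amount):
    return f"charged {amount}"

# The call site stays the same before and after you split
# "billing" out of the monolith into its own service.
print(mesh.call("billing.charge", 20))  # charged 20
```

Splitting the monolith then means moving `charge` into another process and pointing the registry entry at it; no call site changes.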
Sounds incredible right?
This is what we call a functions mesh. All of your functionality, local or remote, is just a function call away now.
Microservices aren't going away anytime soon. There are many use cases a FaaS setup just cannot tackle the way microservices can.
This leaves huge scope for improvement, especially in the way microservices communicate. We believe the experience of building a microservices-based architecture should be no different from building the traditional way.
The functions mesh does seem to solve many operational problems around microservices. It combines the ease of FaaS with the flexibility and environment control of microservices.
Published at DZone with permission of Noorain Panjwani , DZone MVB. See the original article here.