When growing an enterprise-scale system, many teams eventually consider transitioning to a microservices architecture. Microservices can bring increased agility, productivity, and confidence to teams that need to scale a system as usage grows, but there are also pitfalls to consider.
We reached that point at Cloud Elements, and a little over a year ago we decided to start transitioning parts of our platform to microservices. We continue to break independent functions out of our monolithic platform, known internally as Soba (because soba noodles are delicious), into microservices. We are taking the fairly common approach of extracting compartmentalized pieces of the platform to run as microservices while the core monolith keeps running alongside them. The end goal is for Soba to support only our core platform API and Elements, while every other function scales with our customers as a microservice.
Functions transitioning out of Soba:
- Polling Framework.
- Script execution (named Grover).
- API Gateway.
Some Themes Emerge
As more of these services reach production, some themes have emerged that inform how we should treat the next service that ships.
- Aggregated Logging
“A replicated log is often about auditing, or recovery: having a central point of truth for decisions. Sometimes a replicated log is about building a pipeline with fan-in (aggregating data), or fan-out (broadcasting data), but always building a system where data flows in one direction.” - Tef @ Programming Is Terrible.
To provide the best support for our customers and our team, we need a complete snapshot of what's happening across the platform even as the services behave independently, with log data flowing in one direction to a central point.
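As a minimal sketch of the idea (the helper and field names here are illustrative, not our production logger): each service emits one JSON object per line, tagged with the service name and a request ID, so a central aggregator can reassemble a request's full path across services.

```javascript
// Hypothetical structured-logging helper: one JSON log line per event,
// tagged with the originating service so a central aggregator can
// correlate entries across independent services.
function createLogger(serviceName, write = (line) => process.stdout.write(line + "\n")) {
  return function log(level, message, fields = {}) {
    const entry = {
      ts: new Date().toISOString(),
      service: serviceName,
      level,
      message,
      ...fields, // e.g. { requestId: "..." } to trace one request across services
    };
    write(JSON.stringify(entry));
    return entry;
  };
}

// Usage: the same requestId shows up in both services' log streams.
const logGateway = createLogger("api-gateway");
const logPoller = createLogger("polling-framework");
logGateway("info", "request received", { requestId: "req-123" });
logPoller("info", "poll scheduled", { requestId: "req-123" });
```

Because every entry carries `service` and a shared `requestId`, the aggregator only needs to parse JSON lines; it never has to know each service's internal log format.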
- Containers for Aggregated Sandboxing
We use containers for serverless functions and customer-generated functions so that they can scale independently of each other. Containers also give us more intelligent sandboxing, so we avoid cross-talk between customer functions and the "noisy neighbor" problem.
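A hedged sketch of what that packaging might look like (base image, paths, and names are assumptions for illustration, not our actual build): each customer function ships as its own image, so one function cannot observe or starve another.

```dockerfile
# Hypothetical per-function image: the function bundle is the only code
# inside, and it runs as an unprivileged user.
FROM node:18-alpine

USER node
WORKDIR /home/node/app

# Copy in only this customer's function bundle
COPY --chown=node:node function/ .
RUN npm ci --omit=dev

# Resource caps are applied at run time to contain noisy neighbors, e.g.:
#   docker run --memory=256m --cpus=0.5 customer-fn
CMD ["node", "index.js"]
```

Isolation comes from each function getting its own filesystem and process space, while CPU and memory limits at run time keep any single function from degrading the others.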
- API Gateways
As each of these services scales, we have also extended the API Gateway to stop large, DDoS-like payloads before they hit the system, improving security while still allowing services to scale in parallel.
Benefits We Are Seeing
We are seeing some performance gains, particularly in reduced latency for customer-related functions. While it's still too early to give exact numbers, the benefits have also gone beyond performance metrics. Operations are easier to manage because the services are more predictable, since Soba itself now performs fewer kinds of operations.
Where We Are Now
The current challenge is how to develop and run these disparate systems, either locally or as remote services, while still keeping them in sync and allowing for dependencies. There does not yet seem to be an industry standard for how best to approach this, so it has taken some trial and error to find a process that works for us. We are a little different in that our entire platform is 100% RESTful-API based.
To iterate through this, we are evaluating different approaches and tools that would let us better manage our collection of microservices. We are looking at two different paths:
- Running all of our microservices locally in containers.
- Running a few services locally and the rest remotely in the cloud.
Tools we are playing with for running services locally:
- Minikube - Local single node Kubernetes cluster inside a VM.
- Docker - Contain all the whales! It does solve the “works on my machine” problems when collaborating remotely.
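For the "everything local in containers" path, a Compose file is one way to wire the extracted services together on a laptop. This is a hypothetical sketch: the service names echo the functions mentioned above, but the build paths and ports are illustrative assumptions.

```yaml
# Hypothetical docker-compose sketch for running the extracted services
# side by side locally; image paths and ports are illustrative only.
version: "3.8"
services:
  api-gateway:
    build: ./api-gateway
    ports:
      - "8080:8080"   # single local entry point, mirroring the gateway's role
    depends_on:
      - polling
      - grover
  polling:
    build: ./polling-framework
  grover:
    build: ./grover   # script execution service
```

One `docker compose up` then brings the whole set online with the gateway in front, which is the main appeal of this path: the local topology resembles production without per-service manual setup.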
Tools we are playing with for running remotely:
- Now - a lightweight CLI for deploying Node.js or Docker applications. A one-line deploy with the command `now` spins up an instance on its own URL (custom URLs for paid accounts).
All of our microservices are Node.js, so we also rely on other tools and npm packages:
Every Microservices Architecture Is Different
That's because different customer needs make it different. When looking at serverless vendors, we found that some of our customers run functions longer than Lambda's five-minute execution limit. To get around this and still provide the best customer experience we can, we ended up building our own function-as-a-service platform that scales as needed, for as long as needed. In conclusion, there is not yet a typical process for deploying microservices. For our team, this has meant testing different setup options until something feels right for our customers, team, and processes.
More resources on microservices: