Experience at Twitter Improves Runtime Communication Between Microservices
Given an orchestrated, containerized, microserviced system, what is needed to make the system reliable? Here's how Buoyant looks at modern architectures.
William Morgan, Founder and CEO of Buoyant and former engineer and engineering manager at Twitter, shared his thoughts on the current state of containers.
Q: How is your company involved in the orchestration and deployment of containers?
Our open source project, Linkerd, tackles some of the most fundamental operational challenges involved in running cloud-native software — the runtime communication between microservices. If anything, we focus beyond orchestration: given an orchestrated, containerized, microserviced system, what is needed to actually make the system reliable.
It turns out it’s a lot. Containers and orchestrators solve one big piece of the puzzle, but they don’t solve all of it. Linkerd is built on the same concepts and code used at Twitter during its move to a cloud-native application. Linkerd tackles the service-to-service communication code — things like service discovery, load balancing, retries, timeouts, and TLS — and moves it outside of the application and into a dedicated layer, where it can be monitored, managed, and controlled.
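To make that concrete, here is a minimal Go sketch of the kind of per-request timeout and retry logic a service mesh lifts out of application code into a shared layer. This is an illustration of the pattern only, not Linkerd's actual implementation; the function names and the flaky test backend are invented for the example.

```go
package main

import (
	"context"
	"fmt"
	"net/http"
	"net/http/httptest"
	"sync/atomic"
	"time"
)

// callWithRetries issues a GET to url with a per-attempt timeout, retrying
// up to maxRetries times on transport errors or 5xx responses — the kind of
// logic a service mesh moves out of every application and into one layer.
func callWithRetries(url string, maxRetries int, perTry time.Duration) (int, error) {
	var lastErr error
	for attempt := 0; attempt <= maxRetries; attempt++ {
		ctx, cancel := context.WithTimeout(context.Background(), perTry)
		req, _ := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
		resp, err := http.DefaultClient.Do(req)
		cancel()
		if err != nil {
			lastErr = err
			continue
		}
		resp.Body.Close()
		if resp.StatusCode < 500 {
			return resp.StatusCode, nil
		}
		lastErr = fmt.Errorf("server error: %d", resp.StatusCode)
	}
	return 0, fmt.Errorf("all retries failed: %w", lastErr)
}

// demo stands up a flaky backend that fails twice and then succeeds,
// showing the retry wrapper masking transient failures from the caller.
func demo() (status int, calls int64, err error) {
	var n int64
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if atomic.AddInt64(&n, 1) <= 2 {
			w.WriteHeader(http.StatusServiceUnavailable)
			return
		}
		w.WriteHeader(http.StatusOK)
	}))
	defer srv.Close()
	status, err = callWithRetries(srv.URL, 3, time.Second)
	return status, atomic.LoadInt64(&n), err
}

func main() {
	status, calls, err := demo()
	fmt.Println(status, calls, err) // 200 3 <nil>
}
```

The point of the sketch is that none of this code mentions the application's business logic: it is generic plumbing, which is exactly why it can be pulled out into a mesh and managed in one place.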
Q: What do you see as the most important elements of orchestrating and deploying containers?
Our experience at Twitter taught us that, beyond the tech, the most important aspect is that they enable you to think at a better layer of abstraction. Containers abstract away the whole packaging of dependencies for runtime. Orchestrators abstract away the whole underlying hardware pool. And when we build our applications as containerized microservices running with an orchestrator, we’re largely freed from worrying about those things. We’re no longer thinking about things like individual machines and IP addresses and TCP/IP connections; now we’re thinking about services, and calls, and requests. And that’s a very positive change, because those things are much closer to what you actually care about.
Q: Which programming languages, frameworks, and tools do you, or your company use, to orchestrate and deploy containers?
I think one of the most transformative aspects of the move to containers is that you don’t actually have to care about what languages or frameworks we use. We provide Linkerd in a container, and you co-deploy it alongside app code as a service mesh, and you treat the result as an operational tool, without having to look under the hood. Docker gives us that power.
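As a simplified illustration of that co-deployment, one common Kubernetes pattern is running the mesh proxy as a sidecar container in the same pod as the application. The names, images, and ports below are hypothetical placeholders, not an official Linkerd manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-service                        # hypothetical application pod
spec:
  containers:
    - name: app                           # your service, in any language
      image: example.com/my-service:1.0   # placeholder image
      ports:
        - containerPort: 8080
      env:
        - name: HTTP_PROXY                # route outbound calls via the proxy
          value: "localhost:4140"         # placeholder proxy port
    - name: mesh-proxy                    # the mesh data plane, co-deployed
      image: example.com/mesh-proxy:latest  # placeholder image
      ports:
        - containerPort: 4140
```

Because the proxy is just another container, the application never links against it or even knows what language it is written in, which is the "operational tool" property described above.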
In fact, one of the most powerful things driving Linkerd adoption is that companies are running big polyglot systems, because Docker and Kubernetes make it so easy. And then they have no recourse but to look for a polyglot solution like Linkerd to manage their runtime. Beyond polyglot, Linkerd allows you to be “poly-system”: you can introduce Kubernetes incrementally into your existing stack, without having to migrate everything at once. That’s one of the big use cases that we’re excited about.
But if you do want to look under the hood (and it’s an open source project, so you can!), we use Scala and Rust extensively, and our surrounding tooling is usually built in Go. Scala we inherit from our Twitter days, and much of our functionality is based on Finagle, Twitter’s library for service-to-service communication. It’s an incredibly well-tested library, and it powers sites like SoundCloud and Pinterest in addition to Twitter itself. And Rust is an amazing language that lets you build very high-performance native-code network proxies while still being memory safe, which is a pretty big deal in this day and age of things like Cloudbleed.
Q: How has the orchestration and deployment of containers changed application development?
One of the biggest changes is that they dramatically lower the cost of running microservices. Microservices are a fundamentally good idea. Break your big complicated thing down into independent components! Great! That’s like rule #1 of software engineering. But until we had things like containers and orchestrators, the cost of adopting microservices was huge. Twitter spent five years of intensive infrastructure engineering to do this. It was successful, and it was necessary, but it was insanely expensive. That wasn’t possible for most companies.
So now we can use microservices without paying the iron price. Again, great! But with microservices, of course, we’ve now introduced runtime network communication between services. And this is where things get complicated, and where a service mesh suddenly becomes hugely valuable. You need a way of understanding this communication, monitoring it, and controlling it at runtime.
Q: What are the most common issues you see affecting the orchestration and deployment of containers?
It’s not enough to slap something into Docker and Kubernetes and call it a day. Like security, reliability comes in layers. Containers and orchestrators handle one significant part of application reliability, but they’re not all of it.
If you’re running microservices, even at small scale, you’re running a distributed system. Congratulations, you’ve introduced whole new and exciting ways of failing that you’ve never seen before. One tiny thing goes wrong in one service somewhere in the stack, and because of how communication is handled it cascades until it takes down the entire data center. That’s not a theoretical failure, that’s something we saw again and again at Twitter. Once we introduced a service mesh, we started being able to dig ourselves out of that hole.
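One mitigation for that amplification, used by Finagle and Linkerd in the form of retry budgets, is to cap retries at a fixed fraction of live traffic, so a wave of failures cannot snowball into a retry storm. Below is a minimal Go sketch of the idea; the type and the numbers are invented for illustration and are not Linkerd's actual implementation.

```go
package main

import "fmt"

// RetryBudget caps retries at a fixed percentage of recent requests so a
// struggling downstream service is not buried under an amplifying retry
// storm. Illustrative sketch only; credit is tracked in hundredths of a
// retry to keep the arithmetic in integers.
type RetryBudget struct {
	percent int // retry credit earned per original request, in percent
	credit  int // accumulated credit, in hundredths of a retry
}

// RecordRequest deposits credit for one original (non-retry) request.
func (b *RetryBudget) RecordRequest() { b.credit += b.percent }

// TryRetry spends one retry's worth of credit if enough has accumulated.
func (b *RetryBudget) TryRetry() bool {
	if b.credit >= 100 {
		b.credit -= 100
		return true
	}
	return false
}

func main() {
	budget := &RetryBudget{percent: 20} // retries add at most ~20% extra load
	granted := 0
	for i := 0; i < 100; i++ {
		budget.RecordRequest()
		if budget.TryRetry() { // suppose every single request failed
			granted++
		}
	}
	fmt.Println(granted) // 20: only a fifth of the failures earn a retry
}
```

Even in the worst case where every request fails, the downstream service sees at most 20% extra load instead of a doubling (or worse) from naive retry-on-failure, which is what turns one small failure into a data-center-wide cascade.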
Q: What’s your outlook for the adoption of containers and microservices moving forward, and where are we now?
My outlook is very rosy. This is a fundamentally positive change in the state of software infrastructure from where we were just a few years ago. And I think it’s just a matter of time before this is the default way of building new software. We see tons of companies adopting Linkerd, and it’s almost always as part of introducing things like Docker and Kubernetes and Mesos into their existing stack. We see this all across the board, from startups to big companies. The service mesh model is starting to make sense to them, and they’re using it alongside containers and orchestrators and building into this new stack in a very fundamental way.
Opinions expressed by DZone contributors are their own.