Deep Systems and Microservices
I recently had the chance to sit down with Ben Sigelman, CEO and co-founder of LightStep. Clad in our Halloween costumes, we hunkered down in our offices on opposite coasts and talked microservices and deep systems. Frankly, I had not heard of deep systems before talking with Ben, but I left the video call with a .txt file full of information imparted by a leader in the field.
Microservices: The Good and the Bad
Microservices have helped revolutionize the software industry. These modular bundles of code allow developers to work in a far more agile fashion, thus increasing the speed with which organizations can deploy functionalities and fix errors in their applications. But, while there's plenty of good that comes with microservices, there's also plenty of bad.
As more and more services get added to an application's codebase, it becomes increasingly hard to test them. Why? Well, there's typically a team of 10-12 developers working on any one microservice, but the microservice they're working on depends on one, if not more, other microservices to function properly.
As these systems grow, the connections between microservices become more complex and the layers of services that make up the application become deeper. This makes it increasingly difficult to gain visibility into your application and to get a clear visualization of the aggregate results of all these services.
Deep Systems
According to Ben, deep systems occur when an application or software is comprised of many layers of services. To be a little more precise, if your application uses 30-40 or more services, then you're dealing with a deep system, as you now have to traverse three or four (or more!) layers of services.
With the advent and increasing popularity of microservices over the past several years, systems have become increasingly deep, and development teams run up against the issues described above more and more often.
Think about this: your team has developed a great microservice, but when you go to test it, it doesn't perform the way it's supposed to, even though all the code is correct. In a deep system, your team's microservice likely depends on all the other microservices your organization uses (and there could be hundreds, if not thousands) working correctly.
As Ben put it, "the symptoms can be very far from the root cause." This situation has devs pulling out their hair, stressing over code and business logic that is completely out of their control.
One way Ben suggested dealing with this common issue in deep systems is to slice into the data you get from testing and figure out where the symptoms lead you. Twenty years ago, that was as simple as running a debugger; in modern dev and production environments, teams need to use distributed tracing.
With distributed tracing, testers can track metadata as it travels through the systems under test, allowing them to pinpoint which services that metadata touched. Ben was also kind enough to put this into layman's terms for me: we can't know whether a microservice is good or bad without context, and distributed tracing is the way to find that context in deep systems.
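To make that a bit more concrete, here is a minimal sketch of what instrumenting a service with distributed tracing might look like, using the open-source OpenTelemetry Python SDK. The service and attribute names are hypothetical examples, and this is not the specific tooling Ben described.

```python
# A minimal, hypothetical sketch of distributed tracing instrumentation
# using the OpenTelemetry Python SDK (pip install opentelemetry-sdk).
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor

# Send spans to stdout; in a real deep system they'd go to a tracing backend.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # hypothetical service name


def handle_checkout(order_id: str) -> None:
    # The outer span represents this service's own work...
    with tracer.start_as_current_span("handle_checkout") as span:
        span.set_attribute("order.id", order_id)
        # ...and child spans record the downstream services it depends on,
        # so a slow or failing dependency shows up in the same trace.
        with tracer.start_as_current_span("call_payment_service"):
            pass  # e.g. an HTTP call, with trace context propagated in headers
        with tracer.start_as_current_span("call_inventory_service"):
            pass


handle_checkout("order-123")
```

In a real system, the trace context would also be propagated across process boundaries (for example, in HTTP headers), which is what lets you see exactly which services a single request actually touched.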
Best Practices Around Deep Systems
Ben noted two things that dev teams should begin doing.
- The platform team must set up a few basic rules: establish SLOs (service level objectives) for every service in a deep system, probably covering only the two or three things any one service provides that people care about. SLOs can't be perfect or the release will never happen; thus, establishing what a service's consumers care about is the most important step. SLOs let you tell your tooling what its goals are and how it should understand both the aggregate system and individual services (see the sketch after this list).
- You need good telemetry. You can’t always simulate real-world scenarios in pre-production effectively, so you need to see how systems function in production. Sometimes telemetry is added after the fact, but this will only hurt your efforts because you won't be able to catch errors before they occur.
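As a rough illustration of the first point, here is a minimal sketch of what encoding an SLO and checking it against production telemetry might look like. The service name, threshold, and target are hypothetical examples, not figures from the interview.

```python
# A hypothetical sketch: encoding an SLO and checking it against telemetry.
from dataclasses import dataclass


@dataclass
class Slo:
    """E.g. '99% of checkout requests complete in under 200 ms'."""
    name: str
    latency_threshold_ms: float
    target_fraction: float


def meets_slo(slo: Slo, observed_latencies_ms: list[float]) -> bool:
    """True if the observed latencies (from production telemetry) satisfy the SLO."""
    if not observed_latencies_ms:
        return True  # no traffic, nothing violated
    fast_enough = sum(1 for ms in observed_latencies_ms if ms <= slo.latency_threshold_ms)
    return fast_enough / len(observed_latencies_ms) >= slo.target_fraction


# Hypothetical objective capturing what a checkout service's consumers care about.
checkout_latency = Slo("checkout latency", latency_threshold_ms=200, target_fraction=0.99)
print(meets_slo(checkout_latency, [120.0, 90.0, 310.0, 150.0]))  # False: only 75% under 200 ms
```

The point is that the objective itself stays small and consumer-focused; the telemetry from the second point is what lets your tooling evaluate it continuously across every layer of the system.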
Ben then went on to state that, if everyone writing software implemented these strategies, software quality would improve 100x immediately.