Performance Issue Considerations for Microservices APIs
[This article was written by John Mueller]
It isn’t possible to create any sort of application that doesn’t experience performance issues at some point. Performance actually takes in a lot of territory, but most developers worry about reliability and speed the most. An application that works fast, but doesn’t perform reliably isn’t much good—it’s akin to getting nowhere fast. Likewise, reliability at the cost of speed will have the support people overwhelmed in no time. You need access to application resources that are both reliable and fast no matter what technology you use and this includes microservices.
You might think that you can simply use the same techniques you’ve always used to address speed and reliability problems. Yes, some of these techniques do work with microservices, but there are some special twists to consider when working with microservices as well. The purpose of this article is to explore some of the performance issues you can encounter when working with microservices.
Considering the Single Microservice
The best place to start thinking about microservice performance is a single microservice, even though your application will likely use more than one. It turns out that the best way to keep performance issues under control is to track the microservices individually, rather than as a group. However, you don’t want to use a different monitoring product for each microservice. Rely on a single product, such as Graphite or StatsD, to monitor each microservice individually.
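One reason StatsD works well for per-service instrumentation is that its wire format is simple enough to emit by hand. The Python sketch below sends a timing metric over UDP to a local StatsD agent; the metric name, service operation, and agent address are illustrative assumptions, not part of any particular deployment:

```python
import socket
import time

def send_statsd_metric(name, value_ms, host="127.0.0.1", port=8125):
    """Send a single StatsD timing metric over UDP.

    StatsD's plain-text format for timings is "<metric>:<value>|ms".
    UDP is fire-and-forget, so instrumentation never blocks the service
    even when the monitoring agent is down.
    """
    payload = f"{name}:{value_ms}|ms".encode("ascii")
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        sock.sendto(payload, (host, port))
    finally:
        sock.close()
    return payload.decode("ascii")  # returned so callers can inspect it

def timed(metric_name, func, *args):
    """Run func, then report its wall-clock duration in milliseconds."""
    start = time.monotonic()
    result = func(*args)
    elapsed_ms = int((time.monotonic() - start) * 1000)
    send_statsd_metric(metric_name, elapsed_ms)
    return result
```

Because the send is UDP, wrapping every handler in `timed` adds negligible overhead, which is what makes per-microservice tracking practical.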
The use of microservices also implies a distributed system. Where you used to have a simple method call in a monolithic application, you now have many remote procedure calls (RPCs) to deal with. This means having to deal with issues such as network latency, fault tolerance, message serialization, unreliable networks, and varying loads within the application tiers. As a result, you have more items to test (any of which could cause a performance issue) even when working with a single microservice.
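To make one of those concerns concrete: every remote call needs an explicit timeout and a bounded retry policy, or a single slow dependency can stall the whole request path. The Python sketch below is a minimal version of that pattern; the URL and retry parameters are illustrative defaults, not recommendations:

```python
import time
import urllib.request
import urllib.error

def call_with_retries(url, attempts=3, timeout=2.0, backoff=0.5):
    """Call a remote service, retrying transient failures with backoff.

    An unreliable network makes occasional failures normal, so the call
    is bounded in two directions: each attempt has a hard timeout, and
    the number of attempts is capped with exponential backoff between
    them so a struggling service is not hammered.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError) as exc:
            last_error = exc
            time.sleep(backoff * (2 ** attempt))  # exponential backoff
    raise RuntimeError(f"{url} failed after {attempts} attempts") from last_error
```

Even this small sketch shows why the test surface grows: timeout, attempt count, and backoff are all tunable behaviors that simply do not exist for an in-process method call.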
Working with Multiple Microservices
Microservices are popular for a number of reasons. Of course, you get to use the best language for a particular task and create more focused code that can serve a number of applications. However, each microservice consumes resources and when you start adding these resources together it becomes evident that the housekeeping burden on servers that support microservices is much larger than it would be with a monolithic application. Each microservice represents a separate server process. In addition, you must add in additional processes for load balancers, failover software, and communication between each microservice. In short, there is no free lunch when it comes to microservices. You gain flexibility, reliability, and speed, but at the cost of increased resource usage and the need to deploy additional servers. If you create an application dependent on microservices and don’t account for these other requirements, you’ll find that your application runs slowly. Therefore, the first significant performance issue of using multiple microservices is that you must provide additional resources to run the application.
To help keep some of the housekeeping problems of working with microservices under control, you can enforce a certain level of standardization. For example, you might specify that all of the microservices rely on REST API calls, rather than allowing a large number of call types. The problem with this approach is that once you start standardizing, you also start losing some of the benefits of the microservice approach. The tradeoff is always going to be between flexibility and performance. The more flexible your microservice architecture, the higher the performance costs.
It’s important to understand that many microservices exist in the cloud and that an issue that affects one microservice might not affect any of the others. Something as simple as a routing issue could cause problems with just the microservice using that route. Other microservices may use other routes that are currently running at full speed.
Logging is part of the solution for performance problems because logs show you precisely how a microservice is behaving. One of the problems people face in working with microservices is that each one has its own log, and you can spend hours trying to find the log containing the information you need. Just knowing that there is a problem is an essential first step to fixing it. The best approach is to rely on log aggregation so that you can look in just one place for all the log information from all of the microservices you use. Even though you monitor each microservice separately, you must centralize the monitoring and logging process in order to find problems quickly. Products such as Logstash help you perform the log aggregation. You often need to couple these products with another product, such as Kibana, to search and visualize the aggregated data.
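A practical prerequisite for aggregation is that each microservice logs in a machine-parseable form. A minimal sketch using Python's standard logging module emits one JSON object per line, tagged with the service name, so an aggregator such as Logstash can index every field; the service and field names here are illustrative:

```python
import json
import logging
import sys

class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON line.

    Tagging every line with the originating service lets a central
    aggregator filter one microservice's entries out of the combined
    stream without fragile text parsing.
    """

    def __init__(self, service):
        super().__init__()
        self.service = service

    def format(self, record):
        return json.dumps({
            "service": self.service,
            "level": record.levelname,
            "message": record.getMessage(),
        })

def make_logger(service):
    """Build a logger for one microservice that writes JSON to stdout."""
    logger = logging.getLogger(service)
    handler = logging.StreamHandler(sys.stdout)
    handler.setFormatter(JsonFormatter(service))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger
```

Writing to stdout and letting the platform ship the lines to the aggregator is a common convention; the exact transport depends on your environment.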
Discovering Who to Blame
As previously mentioned, it’s essential to monitor each microservice individually. Unless you do, you’ll never figure out which microservice is actually causing a problem. However, separate monitoring is just the beginning of the process, not the end. Microservices are interconnected. If microservice A is having a performance problem and microservice B relies on microservice A for support, then it could appear that both microservices are having a performance problem when only microservice A is to blame.
Part of your monitoring setup must include the dependencies for each of the microservices you use so that you can determine when a problem is the result of a supporting microservice, rather than the microservice you’re actually monitoring. Using deployment products such as Jenkins, UDeploy, Capistrano, Chef, or Puppet can help you map out dependencies to make determining the source of a problem much easier.
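Once the dependency map exists, finding the likely culprit is a graph walk. The hedged Python sketch below, with hypothetical service names, flags as likely root causes the unhealthy services that do not depend, directly or transitively, on any other unhealthy service; the rest are probably downstream casualties:

```python
from collections import deque

def find_root_causes(unhealthy, dependencies):
    """Separate root causes from downstream casualties.

    `unhealthy` is the set of services currently failing their checks;
    `dependencies` maps each service to the services it calls. A failing
    service that can reach another failing service through the dependency
    graph is probably just a victim; one that cannot is a likely root cause.
    """
    unhealthy = set(unhealthy)
    roots = set()
    for service in unhealthy:
        queue = deque(dependencies.get(service, []))
        seen = set()
        blames_another = False
        while queue:
            dep = queue.popleft()
            if dep in seen:
                continue
            seen.add(dep)
            if dep in unhealthy:
                blames_another = True  # some dependency explains the failure
                break
            queue.extend(dependencies.get(dep, []))
        if not blames_another:
            roots.add(service)
    return roots
```

In the example of microservices A and B above, when both alarm, this walk would blame only A, since B's failure is explained by its dependency on A.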
It’s also important to realize that microservice interactions may exist outside your application. The dependency tree may exist far outside the list of microservices that you know your application uses. In order to resolve performance issues, you must know about these outside dependencies or at least consider them when troubleshooting a problem. Remember that these outside dependencies not only affect speed, but also reliability and security.
There is also the potential for hidden issues in the microservice arena. For example, the use of microservices means the proliferation of interfaces—a kind of contract between two parties. When one microservice changes an interface, other microservices using it may not be aware of that change, which could cause a failure of the entire system, even though the microservice that actually caused the problem is running just fine. In short, you need to ensure that things like interfaces remain the same and check for them as a potential source of problems when you start to see application glitches.
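A lightweight defense is to verify the contract at the boundary. The Python sketch below checks that a dependency's response still carries the fields this service relies on, so a silent interface change fails loudly where it is easy to diagnose instead of deep inside business logic; the field names are hypothetical:

```python
def check_contract(response, required_fields):
    """Validate that a dependency's response honors the expected interface.

    If another team changes the fields their microservice returns, this
    check turns the mismatch into an immediate, clearly attributed error
    at the call boundary.
    """
    missing = [field for field in required_fields if field not in response]
    if missing:
        raise ValueError(f"contract violation, missing fields: {missing}")
    return response
```

More rigorous versions of this idea, such as consumer-driven contract testing, move the same check into the build pipeline so a breaking interface change is caught before deployment.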
Overcoming the Domino Effect
When working with a monolithic application, you can track application functionality with relative ease because you know all of the connections that must occur and understand which services affect a particular application area. When working with microservices, it’s possible for a microservice you don’t even know about to cause a cascade failure. When microservice A fails and there is no available failover, microservice B could also fail, which might bring microservice C to a standstill as well. The domino effect becomes all too evident as you watch your application degrade and fail in the worst possible way.
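One common safeguard against this cascade, complementing the synthetic monitoring discussed below, is a circuit breaker: after repeated failures, callers stop invoking the failing dependency for a cooling-off period instead of piling up blocked requests. This technique is not named in the original article; the Python sketch below is a deliberately minimal illustration:

```python
import time

class CircuitBreaker:
    """Minimal circuit breaker to stop failures from cascading.

    After `threshold` consecutive failures the breaker "opens" and calls
    fail fast for `cooldown` seconds, so microservice B does not exhaust
    its own resources waiting on a dead microservice A and drag
    microservice C down with it.
    """

    def __init__(self, threshold=3, cooldown=30.0):
        self.threshold = threshold
        self.cooldown = cooldown
        self.failures = 0
        self.opened_at = None

    def call(self, func, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None   # cooldown elapsed, allow a retry
            self.failures = 0
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0           # success resets the failure count
        return result
```

Failing fast sacrifices one feature temporarily so the rest of the application stays responsive, which is usually the better trade during an incident.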
The best way to combat the domino effect is through constant synthetic (active) monitoring where an application that mimics a user constantly checks the application for potential problems. The synthetic monitoring can detect problems early and provide an appropriate response, such as an alert to the administrator or seeking a microservice failover to take over the load. The point is to be proactive in assuming that failures will occur in order to avoid potential cascade failures.
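A synthetic probe can be as small as a script that exercises each service's health endpoint the way a real client would and reports the failures. In the Python sketch below, the service names and `/health` URLs are hypothetical, and "alerting" is reduced to returning the failure map:

```python
import urllib.request
import urllib.error

def synthetic_check(endpoints, timeout=3.0):
    """Probe each service's health endpoint like a real client.

    `endpoints` maps a service name to the URL a synthetic user would
    hit. Any probe that times out, fails to connect, or returns a
    non-200 status is recorded, giving an operator (or an automated
    failover trigger) an early warning before real users are affected.
    """
    failures = {}
    for name, url in endpoints.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status != 200:
                    failures[name] = f"HTTP {resp.status}"
        except (urllib.error.URLError, TimeoutError) as exc:
            failures[name] = str(exc)
    return failures  # empty dict means every probe passed
```

Run on a short schedule (for example, once a minute from a cron job or scheduler), this is the "application that mimics a user" in its simplest possible form.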
Microservices are an extremely flexible way to create applications. In addition, they can be speedy when the development team creates them correctly and provide many benefits in the form of code reuse. The problem is creating a solution that performs well because of all of the things that take place under the hood. Using appropriate deployment strategies, standardizing the methods used to interact with individual microservices, and providing appropriate monitoring can all help resolve the problem. The point is to have a good strategy in place before you begin the coding process.
It’s also important to consider your development team. Because microservices tightly integrate into their application environment, you need a development team that is production aware. In addition, given the use of multiple languages, various data sources, and potentially a large number of hosting providers, you really need DevOps practices to keep everything running properly. Finding developers with DevOps skills can be hard, but it’s well worth the effort in making your applications run well. If your team lacks the appropriate skills, you might find it hard or impossible to locate the source of performance issues even within a single microservice.
Published at DZone with permission of Denis Goodwin , DZone MVB. See the original article here.