Docker Selects New Relic’s App-Oriented Container Monitoring for New Ecosystem Technology Program
[This article was written by Andrew Marshall.]
As Docker has exploded in popularity among both development and operations teams, so too has the Docker ecosystem. Such rapid growth can make tracking all the available solutions feel like a full-time job, which is why we’re happy to see Docker listing proven container monitoring solutions as part of the Docker Ecosystem Technology Program. New Relic is proud to be on this list, since it advances our desire to help customers make the most of container technology.
This announcement is the latest step in our long history with Docker. (Well, about as long as it can be with a technology that’s barely two years old!) That history includes using Docker internally since version 0.7 in late 2013; presenting at DockerCon 2014 on how we use Docker in production; releasing Centurion, an open-source mass deployment tool for Docker fleets; and just last month, announcing our open beta of Docker monitoring and the release of Docker Up & Running, a book written by two of our engineers.
Docker usage patterns
So why should Docker users care? To explain, let’s discuss the two primary ways that people use Docker containers, and what their monitoring needs are.
Use case 1: Short-term containers. In this scenario, containers are used to package an application or even a simple script. These containers are often short lived; for example, existing just for the length of time needed to execute a single batch or cron job. Because Docker containers can often be started in less than a second, it is feasible to use them in this manner.
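A minimal sketch of this pattern (the image and script names are illustrative placeholders, not from the article): a container runs a single batch script and is removed as soon as the job exits.

```shell
# Run a one-off batch job in a throwaway container.
# --rm deletes the container automatically when the job finishes,
# so short-lived jobs leave nothing behind.
docker run --rm \
  -v "$PWD/jobs:/jobs:ro" \
  python:3.12-slim \
  python /jobs/nightly_report.py
```

A cron entry pointing at a command like this is a common way to schedule such short-lived containers.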
Use case 2: Long-term containers. Containers can also function as virtual-machine replacements. In this scenario, Docker containers are used to reduce infrastructure complexity, acting as another abstraction level and reducing the need for manually configured servers. These containers typically have a longer lifespan and therefore a greater need for a first-class monitoring environment.
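A sketch of the long-lived pattern, with illustrative image and container names: the container runs detached as a service, and a restart policy keeps it running across crashes and daemon restarts, much like a managed VM workload.

```shell
# Run a long-lived service container in the background.
# --restart unless-stopped tells the Docker daemon to bring the
# container back after failures or host reboots, unless it was
# stopped explicitly.
docker run -d \
  --name web \
  --restart unless-stopped \
  -p 8080:80 \
  nginx:stable
```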
In both use cases, Docker provides increased application portability, security, and flexibility. These benefits are why, on a typical day, an operations team could spin up hundreds of Docker containers, of which perhaps a dozen or so could experience performance issues, whether from high CPU usage, high memory consumption, or other causes. Sysadmins need not only to identify which containers are causing trouble, but also to decide which to troubleshoot first. So how do you prioritize?
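As a first pass at spotting the resource-hungry containers, the Docker CLI itself can snapshot per-container usage (this is a generic sketch, not the New Relic tooling the article describes):

```shell
# One-time snapshot (--no-stream) of CPU and memory usage for all
# running containers, formatted as a readable table.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}"
```

This tells you *which* containers are hot, but not which ones matter to the business, which is exactly the gap the next section addresses.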
App context = ops efficiency
To prioritize properly, you need to know which apps are running in each container. For example, if a container is supporting a non-critical app—say, last year’s holiday party registration—you can give the container a lower priority. However, if the container supports an app that drives your company’s revenue, obviously it deserves a higher priority.
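One lightweight way to carry that app context on the containers themselves is Docker labels (the label keys, values, and image name below are illustrative, not prescribed by the article):

```shell
# Tag a container with the app it serves and its business priority,
# then filter running containers by that priority during triage.
docker run -d --label app=checkout --label tier=critical my-checkout-svc
docker ps --filter "label=tier=critical" --format "{{.Names}}: {{.Status}}"
```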
We think the only rational way to prioritize container performance problems is to put them into the context of an app, because application context leads to ops team efficiency: it focuses the team on the most critical container issues. Without it, your team is not focused on the problems that most affect your business. And since we believe that every business is a software business, that lack of focus can have disastrous results.
For this reason, it makes sense to investigate performance problems starting with an app that is experiencing issues, and then drill down to the containers it runs on to see if the problem is a resource allocation one. Our experience shows that this is most often the case.
Less often, we see a server (a physical server or virtual machine) consuming excessive CPU and/or memory, and then work out which containers are running on it. In that scenario, we may move containers to different servers or rebalance their resource allocations.
However, in both cases, you start elsewhere and then use container monitoring to reveal the full picture.
How it works
As New Relic software engineer Adam Larson explains in his Docker public beta blog post, “New Relic users who have Docker in their appstack can now enjoy the same navigational features and visibility as those using bare metal hardware or traditional virtualization. This allows you to start with the application, and from there monitor the container, and finally the server.” The same New Relic APM monitoring interface you use for the rest of your infrastructure works just as well with Docker containers, giving you the right level of information you need for either of the use cases described above.
As we trial our Docker monitoring integration with customers, we're working to provide them the right level of application visibility, and to continually improve our understanding of how they use Docker and how we can meet their monitoring needs.
“Identifying and addressing issues with a container has become smoother with New Relic. We’re able to trace a performance issue through the application to identify the problem, and if it is with a particular container, kill it and spin up a new one.” —Scott Rankin, vice president of technology, mobile workforce management provider Motus
To learn more about bringing app context to your Docker monitoring, follow this tutorial on how to set up New Relic Servers to work with Docker containers.
Published at DZone with permission of Fredric Paul, DZone MVB. See the original article here.