5 Docker Logging Best Practices
Containers offer a scalable way to run software reliably when moving between environments. Docker is the best-known container platform; read about five logging best practices for it here.
Containers have become a huge topic in IT, and especially in DevOps, over the past several years. Simply stated, containers offer an easy and scalable way to run software reliably when moving from one environment to another. Containers do this by providing an entire runtime environment in one package, which includes the application, plus all dependencies, libraries and other binaries, and configuration files needed to run it.
Closely aligned with containers are “microservices” which represent a more agile way of developing applications. A microservices architecture structures an application as a set of loosely coupled services connected via functional APIs that handle discrete business functions. Instead of a large monolithic code base, microservices primarily offer a “divide and conquer” approach to application development.
Leading the charge in the world of container infrastructures is Docker, a platform for deploying containerized software applications. The real value of containers is that they allow teams to spin up a full runtime environment on the fly. Docker is arguably the most influential platform today for getting businesses to adopt microservices.
Similar to how virtual machines streamlined software development and testing by providing multiple instances of an OS to end-users from one server, containers add an extra abstraction layer between an application and the host OS. The big difference is that containers don’t require a hypervisor and only run one instance of an operating system; overall, this equates to far less memory and faster run time.
As with developing any application, logging is a central part of the process and especially useful when things go wrong. But logging in the world of containerized apps is different than with traditional applications. Logging Docker effectively means not only logging the application and the host OS, but also the Docker service.
There are a number of logging techniques and approaches to keep in mind when working with Dockerized apps. We outline the top five best practices in more detail below.
Application-Based Logging

In an application-based approach, the application inside the containers uses a logging framework to handle the logging process. For instance, a Java application might use Log4j2 to format and send log events to a remote server, bypassing the Docker environment and OS altogether.
While application-based logging gives developers the most control over the logging event, the approach also creates a lot of overhead on the application process.
This approach might be useful for those who are working within more traditional application environments since it allows developers to continue using the application’s logging framework (i.e., Log4j2) without having to add logging functionality to the host.
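As a minimal sketch of this approach, the Log4j2 configuration below ships log events straight to a remote syslog collector, so nothing depends on Docker's own logging pipeline. The hostname `logs.example.com` and port are illustrative placeholders.

```shell
# Write a minimal log4j2.xml whose Syslog appender sends events over TCP
# to a central collector, bypassing Docker logging entirely (host/port
# are assumptions for illustration).
cat > log4j2.xml <<'EOF'
<Configuration>
  <Appenders>
    <!-- Ships every event to the remote collector over TCP -->
    <Syslog name="remote" host="logs.example.com" port="514" protocol="TCP"/>
  </Appenders>
  <Loggers>
    <Root level="info">
      <AppenderRef ref="remote"/>
    </Root>
  </Loggers>
</Configuration>
EOF
```

Because the appender is configured inside the application, the same configuration works unchanged whether the app runs in a container or on bare metal.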
Using Data Volumes
Containers by nature are transient, meaning that any files inside the container will be lost if the container shuts down. Instead, containers must either forward log events to a centralized logging service (such as Loggly) or store log events in a data volume. A data volume is defined as “a marked directory inside of a container that exists to hold persistent or commonly shared data.”
Because data volumes link to a directory on the host, the log data persists there and can be shared with other containers. This decreases the likelihood of losing data when a container fails or shuts down.
Instructions for setting up a Docker data volume in Ubuntu can be found here.
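As a brief sketch, a Dockerfile can declare a data volume so that log output written under a given path survives outside the container's writable layer. The base image, paths, and command below are illustrative assumptions.

```shell
# Write a minimal Dockerfile that marks /var/log/app as a data volume,
# so files written there live outside the container's writable layer.
cat > Dockerfile <<'EOF'
FROM alpine:3.19
VOLUME /var/log/app
CMD ["sh", "-c", "echo started >> /var/log/app/app.log && sleep 3600"]
EOF
# At run time, bind the volume to a host directory so the logs persist
# and can be shared with other containers, e.g.:
#   docker run -d -v /srv/app-logs:/var/log/app my-app
```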
Docker Logging Driver
A third approach to logging events in Docker is by using the platform’s logging drivers to forward the log events to a syslog instance running on the host. The Docker logging driver reads log events directly from the container’s stdout and stderr output; this eliminates the need to read to and write from log files, which translates into a performance gain.
However, there are a few drawbacks to using the Docker logging driver: 1) it doesn't allow for log parsing, only log forwarding; 2) Docker log commands work only with the json-file logging driver; and 3) containers can terminate when the TCP server becomes unreachable.
Instructions for configuring the default logging driver for Docker may be found here.
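A minimal sketch of such a configuration: the daemon's default logging driver is set to syslog in `daemon.json` (normally placed at `/etc/docker/daemon.json`; written to the current directory here), with an illustrative collector address.

```shell
# Set syslog as Docker's default logging driver. In production this file
# lives at /etc/docker/daemon.json; the syslog address is a placeholder.
cat > daemon.json <<'EOF'
{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://logs.example.com:514"
  }
}
EOF
# An individual container can still override the default at run time:
#   docker run --log-driver=json-file --log-opt max-size=10m my-app
```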
Dedicated Logging Container
This approach has the primary advantage of allowing log events to be managed fully within the Docker environment. Since a dedicated logging container can gather log events from other containers, aggregate them, then store or forward the events to a third-party service, this approach eliminates dependencies on the host.

Dedicated logging containers can also: 1) automatically collect, monitor, and analyze log events; 2) scale log events automatically without configuration; and 3) retrieve logs through multiple streams of log events, stats, and Docker API data.
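As one concrete sketch of this pattern, the open-source logspout image runs as a dedicated logging container: it attaches to the Docker socket, collects every other container's stdout/stderr, and forwards the streams onward. The syslog endpoint below is an illustrative placeholder, and the command assumes a running Docker daemon.

```shell
# Run logspout as a dedicated logging container. Mounting the Docker
# socket lets it read all other containers' output; the final argument
# is the forwarding destination (placeholder address).
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tcp://logs.example.com:514
```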
The Sidecar Approach

Sidecars have become a popular approach to managing microservices architectures. The idea of a sidecar comes from the analogy of how a motorcycle sidecar is attached to a motorcycle. To quote one source, "A sidecar runs alongside your service as a second process and provides 'platform infrastructure features' exposed via a homogeneous interface such as a REST-like API over HTTP."
From a logging standpoint, the advantage of a sidecar approach is that each container is linked to its own logging container (the application container saves the log events and the logging container tags and forwards them to a logging management system like Loggly).
The sidecar approach is especially useful for larger deployments where more specialized logging information and custom tags are necessary. However, setting up sidecars is notably more complex, and they can be difficult to scale.
Published at DZone with permission of Jeffrey Walker. See the original article here.