Docker Log Driver Alternatives - Side by Side
Looking for some alternatives to the two standard Docker log drivers? Check out this comparison to find the right one for you.
Let’s look at two recommended Docker API-based log collection tools: Logspout and Sematext Docker Agent. Both are open source. A third tool that fits more or less into this category is Elastic Filebeat; note, however, that Filebeat collects the container log files generated by the json-file log driver, and only the enrichment with container metadata is done via Docker API calls. Logspout provides multiple outputs and can route logs from different containers to different destinations without changing the application containers' logging settings. It also handles ANSI escape sequences (such as color codes in logs), which would otherwise be problematic for full-text search in Elasticsearch.
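As a quick illustration, a minimal Logspout deployment looks roughly like this — the agent discovers containers through the mounted Docker socket and streams their logs to a single destination. The destination URI is a placeholder, and exact module support may vary by Logspout version:

```shell
# Run Logspout as a container; it discovers the other containers on the
# host via the Docker socket and forwards their logs to the given URI.
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  syslog+tls://logs.example.com:5000
```

Note that nothing needs to change in the application containers themselves — they keep logging to stdout/stderr with the default json-file driver.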
Like Logspout, Sematext Docker Agent (SDA) is API-based, supports log routing, and handles ANSI escape sequences for full-text search. However, Sematext Docker Agent is more than a simple log shipper. SDA also addresses many issues raised by Docker users, such as:
- multi-line logs
- log format detection and log parsing
- complete metadata enrichment for containers (labels, GeoIP, Swarm- and Kubernetes-specific metadata)
- masking of sensitive data in logs
- disk buffering and reliable shipping via TLS
It is open source on GitHub, can be used with the Elastic Stack or Sematext Cloud, and can collect not just container logs, but also container events, plus Docker host and container metrics. In other words, it’s a Docker monitoring agent as well as a container event and log collector, parser, and shipper. The following comparison table shows the differences between these three Docker logging solutions that work well with the json-file driver and the Docker Remote API.
| Feature | Filebeat | Logspout | Sematext Docker Agent |
| --- | --- | --- | --- |
| Collect container logs when json-file driver is used | Yes | Yes | Yes |
| Collect container logs when journald driver is used | No | Yes | Yes |
| Enriches logs with container metadata (json-file / journald) | Yes / No | Yes / Yes | Yes / Yes |
| Log routing by metadata to different destinations | No | Yes | Yes |
| Multiline support | Yes, but limited | No | Yes |
| Disk buffer (when log destination is not reachable) | No | No | Yes |
| Integrated log parser per image type | No | No | Yes |
| Automatic log format detection and parsing | No | No | Yes |
| Log enrichment with Geo-IP | No | No | Yes |
| Masking sensitive data fields in parsed logs | No | No | Yes (hash or remove) |
| Container event collection (start, stop, kill, …) | No | No | Yes |
| Docker Hub image | No | Yes | Yes |
| Container metrics collection | No | No | Yes |
| Docker certified image (Docker Store) | No | No | Yes |
| Red Hat certified image | No | No | Yes |
| Setup templates (UI/copy-paste) for cluster-wide installation | No | No | Yes (Helm, Kubernetes, Swarm, Portainer, Rancher) |
The comparison table above is based on the following details we evaluated for each tool.
| Feature | Filebeat |
| --- | --- |
| Log collection | Collects Docker log files generated by the json-file driver. Enrichment with container metadata (name, image, labels) via the Docker API. Logs can be forwarded to Elasticsearch, Kafka, Logstash, or Redis. |
| Log routing | No log routing (no different destination/index per container). Limited to a single log destination and a single Elasticsearch index. |
| Multiline support | Multi-line support. A regular expression can be specified globally to match multiline messages. This is actually a limitation, because different containers might have different multi-line formats, which would require a definition of message separators per container. |
| Filter | Filters for Docker metadata (container name, image name, and container ID) can be defined. |
| Disk buffer | No support for a disk buffer. Logs might be lost when delivery to Elasticsearch or Logstash fails. |
| Log parser | Only a JSON log parser, in a static configuration used to read Docker json-file logs. The message content inside this JSON file is not parsed, so direct output to Elasticsearch results in unparsed logs. Logs must be shipped to a separate Logstash instance or to an Elasticsearch ingest node with a processing pipeline defined for parsing the various container log formats. Note: Filebeat modules are available, but it is not clear how modules could be applied to specific containers; modules typically take a log file path as input, which cannot be used to differentiate container log formats. So parsing might need to be handled in Elasticsearch ingest nodes. |
| Image registry | No official image available on Docker Hub. Elastic hosts the Filebeat image in the Elastic registry: docker.elastic.co/beats/filebeat. Various third-party images are available on Docker Hub. |
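To illustrate the points above, here is a sketch of a `filebeat.yml` reading json-file container logs. The container log path, pipeline name, and Elasticsearch host are assumptions for this example, and the single global multiline pattern is exactly the limitation noted in the table — it applies to every container alike:

```yaml
# filebeat.yml sketch (paths, hosts, and pipeline name are assumptions)
filebeat.inputs:
  - type: log
    paths:
      - /var/lib/docker/containers/*/*.log
    # Decode the json-file driver envelope; the inner message stays unparsed.
    json.keys_under_root: true
    json.message_key: log
    # One global multiline pattern for ALL containers -- the limitation
    # discussed above; per-container separators cannot be expressed here.
    multiline.pattern: '^\s'
    multiline.negate: false
    multiline.match: after

processors:
  - add_docker_metadata: ~   # enrich events via Docker API calls

output.elasticsearch:
  hosts: ["elasticsearch:9200"]
  pipeline: "docker-logs"    # actual parsing happens in an ingest pipeline
```

The `pipeline` setting hands the unparsed message off to an Elasticsearch ingest node, which is where format-specific parsing would have to live.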
| Feature | Logspout |
| --- | --- |
| Log collection | Collects logs via the Docker API, including container metadata. Forwarding to syslog or HTTP destinations. Third-party output modules are available for Apache Kafka, Logstash, Redis-Logstash, and GELF. |
| Log routing | Log routing supported. Multiple destinations can be specified, with label filters to select logs for each destination. |
| Multiline support | No multi-line support. |
| Filter | Filtering to match container labels, with wildcards. |
| Disk buffer | No support for disk buffers. Logs might be lost when delivery fails. |
| Log parser | No log parser. |
| Image registry | Open source image available on Docker Hub. |
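The routing and filtering behavior described above can be sketched as follows — Logspout accepts multiple comma-separated route URIs with filter parameters. The destination URIs and the name filter are placeholders for this example:

```shell
# Route containers whose name matches *app* to one destination and
# everything else's logs to a syslog endpoint (URIs are placeholders).
docker run -d --name logspout \
  -v /var/run/docker.sock:/var/run/docker.sock \
  gliderlabs/logspout \
  'raw+tls://logs1.example.com:5000?filter.name=*app*,syslog://logs2.example.com:514'
```

Because the routes live in the Logspout container's command line, the application containers again need no logging changes of their own.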
| Feature | Sematext Docker Agent |
| --- | --- |
| Log collection | Collects Docker logs, Docker events, and metrics directly from the Docker API. Log enrichment with container metadata, Docker Swarm metadata, Kubernetes metadata, labels, environment variables, and GeoIP information. Logs are forwarded via the Elasticsearch API. |
| Log routing | Log routing by container labels or environment variables to specify the Elasticsearch destination index or Sematext Cloud App. Very flexible, with global defaults and individual rules. |
| Multiline support | Out-of-the-box multi-line support, catching most stack traces and any log messages with indentation. The default regular expression is configurable. In addition, custom message separators (e.g., date patterns at the beginning of log messages) can be specified via pattern definitions per log source (matching container image or container name). |
| Filter | Filtering via whitelists and blacklists, using regular expressions matching container ID, container name, or image name. In addition, containers can be labeled to enable/disable log collection, combined with global defaults (collect all logs, or collect no logs unless an explicit logging “enabled” label is set on the application container). |
| Disk buffer | Disk buffer supported. SDA stores and retransmits logs in case of failed delivery to the Elasticsearch API. Disk buffer limits can be configured. The oldest logs get dropped when disk buffer limits are reached. |
| Log parser | Comprehensive log parser with default log format recognition for JSON and parsing rules for various official images like Nginx, Apache, MongoDB, HBase, Cassandra, Elasticsearch, etc. Individual log parser, filter, and transformation rules can be specified in a configuration file or via URL (e.g., a GitHub Gist). IP address fields can be enriched with Geo-IP data. Sensitive data fields can be masked/anonymized by replacing the value with a hash code; alternatively, sensitive data fields can be removed from logs before the data is shipped to the log storage. |
| Image registry | Open source image on Docker Hub. Docker Certified image in the Docker Store. Red Hat certified image available in the Red Hat Container Catalog. |
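For completeness, a minimal deployment sketch based on the table above. The image name follows the Docker Hub listing, while the tokens are placeholders and the exact environment variable and label names may differ between agent versions — check the project README for your version:

```shell
# Run the agent once per host; it collects logs, events, and metrics
# for all containers via the mounted Docker socket. Tokens are placeholders.
docker run -d --name sematext-agent \
  -e LOGSENE_TOKEN=your-logs-token \
  -e SPM_TOKEN=your-monitoring-token \
  -v /var/run/docker.sock:/var/run/docker.sock \
  sematext/sematext-agent-docker

# Per-container log routing (as described above) is then a matter of
# labeling the application container with a different destination token:
docker run -d --label LOGSENE_TOKEN=other-apps-token my-app
```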
The clear recommendation for API-based log collectors might change in the future as Docker log drivers improve over time and the new plugin mechanism (via Unix socket) allows new logging driver implementations to run as separate processes. The release of the new Docker logging plugin architecture is a good sign that Docker takes logging issues seriously, but logging vendors will need some time to implement drivers based on it. In the meantime, consider Docker API-based log collectors like Sematext Docker Agent and Logspout to avoid running into issues with Docker logs, like the 10 Docker logging gotchas.
Then, you should think about not only collecting logs, but also host and container metrics, and events. In this sense, we’ve prepared a reference architecture document where you will find out about all key Docker metrics to watch. Following that, you will learn how to set up monitoring and logging for a Docker Enterprise Cluster.
Published at DZone with permission of Stefan Thies , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.