Learning From 60k Customer Searches About Logging in JSON

This article outlines the eight best practices that will give you the most value from your logs.

JSON is the most popular log type used by Loggly customers because it makes it easy for you to benefit from Loggly’s automated parsing and analytics; and, selfishly, it suits us because we handle JSON better than anyone else. JSON allows you to treat your logs as a structured database that you can easily filter and report on, even across millions of events. Because JSON is flexible, you can add and remove fields on the fly and know that Loggly will automatically pick them up.
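As an illustrative sketch (this is not a Loggly library; the formatter and field names are my own), here is what emitting one JSON object per log line can look like in Python, including picking up ad hoc fields on the fly:

```python
import json
import logging

# Attribute names every LogRecord carries by default; anything beyond
# these was passed via `extra=` and belongs in the JSON event.
_DEFAULT_KEYS = set(logging.makeLogRecord({}).__dict__)


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object per line."""

    def format(self, record):
        event = {
            "level": record.levelname.lower(),
            "message": record.getMessage(),
        }
        # Copy any extra fields dynamically, mirroring how a flexible
        # JSON schema lets you add and remove fields at will.
        for key, value in record.__dict__.items():
            if key not in _DEFAULT_KEYS:
                event[key] = value
        return json.dumps(event)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("user signed in", extra={"action": "login", "userId": 42})
```

Because the formatter copies unknown record attributes automatically, adding a new field to your logs is a one-line change at the call site.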

The chart below shows the JSON fields on which Loggly customers perform the highest number of searches based on a sample of over 60,000 searches.

Our customers are getting a lot of value from this data. Below, I outline eight best practices that will give you more value as well.

1. Include a Level or Severity Field in Your Logs

Using json.level or json.severity allows you to instantly filter your logs to show only the events that are errors or warnings. From there, it’s much easier to see patterns through Loggly’s filters or trends charts. You can filter down to your errors just by selecting the level field in our Loggly Field Explorer, then clicking on error.
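For intuition, the same filter you would click in Field Explorer can be expressed in a few lines of Python over newline-delimited JSON events (the sample events below are hypothetical):

```python
import json

# Hypothetical newline-delimited JSON events, as a logging library might ship them.
raw_logs = [
    '{"level": "info", "message": "request served"}',
    '{"level": "error", "message": "db timeout"}',
    '{"level": "warning", "message": "slow query"}',
]

events = [json.loads(line) for line in raw_logs]

# The code equivalent of selecting the level field and clicking "error"
# (or "warning"): keep only the events at those severities.
problems = [e for e in events if e.get("level") in ("error", "warning")]

for event in problems:
    print(event["level"], "-", event["message"])
```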

2. Put Unstructured Text in an Easily Searched Field Such as json.message

Many of Loggly’s libraries put the unstructured portion of your logs into the json.message field. That’s why this field is the second most commonly searched among Loggly customers.

3. Group and Filter Events by Type, User Action, or Other Criteria

In our own service, Loggly uses json.action to track logs related to a user action, such as viewing a page or clicking on a search. If we only care about searches, we can filter down to just those events. In addition, json.tags can be used to mimic the behavior of Loggly’s tags from within the event itself. This lets you add tags to trend views and even archive tagged events to Amazon S3 when you no longer want to retain them in Loggly.
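A minimal sketch of building such an event, assuming the json.action / json.tags convention described above (the helper itself is illustrative, not part of any Loggly library):

```python
import json


def make_event(action, message, tags=()):
    """Build one JSON log event carrying an action plus optional tags,
    mirroring the json.action / json.tags convention."""
    return json.dumps({
        "action": action,
        "message": message,
        "tags": list(tags),
    })


print(make_event("search", "user ran a saved search", tags=["beta"]))
```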

4. Track From Which Deployment or Environment Each Event Came 

Setting up Source Groups in Loggly is easy, but if you’re not able to leverage Loggly source groups for your log data, you can use a field like json.deployment to tell which logs are coming from QA, staging, and production. For example, you may not care that certain components are slow in QA, but the same level of performance can create a huge customer satisfaction hit if it happens in production. Using json.deployment to isolate the issue makes troubleshooting production problems that much easier.
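One common way to populate such a field is from an environment variable set per deployment. A sketch, assuming a hypothetical DEPLOYMENT variable (the variable name and helper are my own):

```python
import json
import os


def json_event(message, **fields):
    """Stamp every event with the deployment name, read from a
    hypothetical DEPLOYMENT environment variable, so QA, staging,
    and production logs stay separable."""
    event = {
        "message": message,
        "deployment": os.environ.get("DEPLOYMENT", "development"),
    }
    event.update(fields)
    return json.dumps(event)


print(json_event("checkout slow", durationMs=2300))
```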

5. Monitor Success Rates of Requests or Transactions

Use a field like json.status to give you a pass or fail status that can serve as an easy proxy for the health of your transaction pipeline. You’ll want a distinct success or failure message for each request or transaction, which can be more meaningful than an overall error count. Create a pie chart showing how many requests or transactions passed and failed over a given time period and put it on your Loggly dashboard. You’ll be able to tell at a glance whether everything is fine or your application needs immediate attention.
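The number behind that pie chart is just a ratio over the status field. A sketch with hypothetical sample events:

```python
import json
from collections import Counter

# Hypothetical transaction events carrying a pass/fail status field.
raw_logs = [
    '{"status": "pass", "transaction": "checkout"}',
    '{"status": "fail", "transaction": "checkout"}',
    '{"status": "pass", "transaction": "signup"}',
]

# Tally pass vs. fail over the window, the same aggregation a
# dashboard pie chart would show.
counts = Counter(json.loads(line)["status"] for line in raw_logs)
pass_rate = counts["pass"] / sum(counts.values())
print(f"pass rate: {pass_rate:.0%}")
```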

[Loggly chart: requests broken down by json.status]

6. Log the Name of the Program or Application Sending the Event

If you have multiple programs running simultaneously and aren’t using syslog, use json.program to look at the logs for a single program. This field tells you which program or application generated a given log. It’s similar to syslog’s appname field and lets you replicate that behavior without using syslog.

7. Trace Problems Back to the Source IP Address

A field like json.sourceip provides an easy way to see the source IP that’s making a particular request, whether it’s a user or another service in your application stack. If there are too many requests or errors from a given IP, the problem could be with that specific machine.
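Ranking source IPs by error volume makes a misbehaving machine stand out immediately. A sketch over hypothetical events using the json.sourceip convention above:

```python
import json
from collections import Counter

# Hypothetical events tagged with the requesting machine's IP.
raw_logs = [
    '{"sourceip": "10.0.0.5", "level": "error"}',
    '{"sourceip": "10.0.0.5", "level": "error"}',
    '{"sourceip": "10.0.0.9", "level": "info"}',
]

events = [json.loads(line) for line in raw_logs]

# Count errors per source IP; a single IP dominating the list points
# at one specific machine or client misbehaving.
error_ips = Counter(e["sourceip"] for e in events if e.get("level") == "error")
print(error_ips.most_common(1))
```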

8. Be Ready to Find the Logs for Each Customer

You may find it useful to include an account ID or username as a way to diagnose customer-specific problems. If your support team gets a tweet or email from a customer, they can quickly isolate the logs for that customer to see exactly what happened. Transaction tracing by account ID or username is a key value of log management across our customer base, and JSON makes these identifiers easier to expose, trace, and alert on.
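Scoping events to one customer is then a simple filter on that field. A sketch, assuming a hypothetical accountId field and sample events:

```python
import json

# Hypothetical events, each stamped with the customer's account ID.
events = [
    {"accountId": "acme", "message": "payment failed"},
    {"accountId": "globex", "message": "login ok"},
    {"accountId": "acme", "message": "retry scheduled"},
]


def logs_for_account(events, account_id):
    """Isolate every event for one customer by accountId, the way a
    support engineer would scope a search after a complaint."""
    return [e for e in events if e.get("accountId") == account_id]


for event in logs_for_account(events, "acme"):
    print(json.dumps(event))
```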

Start Logging in JSON Now

The JSON fields I discuss above are popular for good reason: they hold the keys to solving many operational problems. Loggly takes advantage of JSON logging not only to help you react faster, but also to make you more proactive with log management, so you’re solving problems before they affect customers. If you’re not already logging in JSON, I hope you have walked away from this post with some good reasons to start now. 

Published at DZone with permission of Jason Skowronski, DZone MVB. See the original article here.
