
4 Steps for Structuring Your Log Data



In the age of Big Data we are taught that no pile of data is too large or too complex. This is absolutely true. Most data analysis systems can ingest any type and volume of data, but ingestion is very different from consumption. The way your data is structured directly impacts how easily it can be consumed, understood, and correlated with other data. Here are the top four ways to make sure your system and application logs help you do this effectively.

Structuring Your Log Data

Before you start ingesting a new source type, spend a little time making sure that the effort required to report on the new information is low. It is also important to be able to correlate one log with another, so that the insights the log data provides are consumable by the entire team.

Here are four steps toward deliberate log management adoption:

1. Think about query-ability:

There are thousands upon thousands of logs available to you for collection and future analysis, and it is generally very easy to start pulling them in. But you need to ask yourself one simple question: what am I going to do with this data? Sometimes the answer is obvious, but even the obvious can lead to surprising conclusions.

For example, did you ever think that you would have to correlate a pegged server process with a new user on the system? It's not an uncommon scenario when new application registrations involve a lengthy provisioning process, and Ops needs to know where and how to distribute such loads.

When can this cause serious issues? When your organization runs a special event and a lot of new registrations arrive in a short amount of time. I've seen this happen more than once: it has brought servers to their knees and caused a lot of embarrassment. But if queries had been set up to identify this trend, load balancing could have been established early on.

This is an example of a use case, and a question, that operations should have asked before these logs were added to the system.
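To make the query-ability question concrete, here is a minimal sketch of correlating a burst of registrations with a pegged server process. The log entries and field names (`ts`, `event`, `cpu_pct`) are hypothetical and not tied to any particular platform:

```python
from datetime import datetime, timedelta

# Hypothetical parsed log entries; field names are illustrative only.
registrations = [
    {"ts": datetime(2016, 3, 1, 12, 0), "event": "user_registered", "user": "alice"},
    {"ts": datetime(2016, 3, 1, 12, 1), "event": "user_registered", "user": "bob"},
]
cpu_samples = [
    {"ts": datetime(2016, 3, 1, 12, 2), "host": "app-01", "cpu_pct": 97},
    {"ts": datetime(2016, 3, 1, 11, 0), "host": "app-01", "cpu_pct": 20},
]

def spikes_near_registrations(regs, samples, window_minutes=5, threshold=90):
    """Return CPU samples above `threshold` within `window_minutes` of any registration."""
    window = timedelta(minutes=window_minutes)
    return [
        s for s in samples
        if s["cpu_pct"] >= threshold
        and any(abs(s["ts"] - r["ts"]) <= window for r in regs)
    ]

print(spikes_near_registrations(registrations, cpu_samples))
# -> [{'ts': datetime.datetime(2016, 3, 1, 12, 2), 'host': 'app-01', 'cpu_pct': 97}]
```

In a real deployment this correlation would be a saved query in your log platform rather than ad hoc code, but the shape of the question is the same.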

2. Align standard naming conventions:

Thinking ahead also means establishing a consistent language. This usually comes up with asset and component names, whose values need to be consistent across all logs. For example, if you call servers "nodes," or name them based on server type, those names need to be used every time a particular server is referenced. Will you reference configuration script runs by name, or by the version of the environment build? When names are consistent, it is not only easy to query on named assets and components; it is also easier to communicate across teams when activities do not align completely but the same assets are involved.
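One lightweight way to enforce a convention at ingestion time is an alias map that resolves every name a team actually uses to one canonical asset name. This is a sketch; the specific names are made up:

```python
# Illustrative alias map: every name in use, mapped to the canonical one.
ALIASES = {
    "web-node-1": "node-web-01",
    "webserver1": "node-web-01",
}

def canonical_asset(name):
    """Normalize case and whitespace, then resolve known aliases."""
    normalized = name.strip().lower()
    return ALIASES.get(normalized, normalized)

print(canonical_asset("Webserver1"))   # -> node-web-01
print(canonical_asset("node-db-02"))   # -> node-db-02 (unknown names pass through)
```

Running every incoming log field through a function like this means queries on `node-web-01` match regardless of which team wrote the log line.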

3. Avoid nested documents:

Realistically, nested documents are impossible to avoid, and they provide huge value: nesting creates an automatic correlation between objects in a log. The catch is that nested data requires nested queries, which increases query complexity. That is no problem for your log analysis system, but it is a bigger problem for the people using it. Individuals can easily get confused by nested objects and misinterpret them.

There are a few options to mitigate this. You can explode your logs into flat documents, but you will lose some value. Or you can copy references to critical data into the parent document, but this creates duplication. Both have pros and cons, and what you choose will depend on the log. You might use a combination of both solutions, or perhaps one that I did not even name here.
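Both mitigations can be sketched in a few lines. `flatten` explodes nested dicts into dotted top-level keys (losing the nesting), while `lift` copies chosen nested values up to the parent (duplicating them). The example event and its field names are hypothetical:

```python
import copy

def flatten(doc, prefix=""):
    """Explode nested dicts into dotted top-level keys (loses nesting structure)."""
    out = {}
    for key, value in doc.items():
        full = f"{prefix}{key}"
        if isinstance(value, dict):
            out.update(flatten(value, full + "."))
        else:
            out[full] = value
    return out

def lift(doc, paths):
    """Copy critical nested values up to the parent document (duplicates data)."""
    out = copy.deepcopy(doc)
    for path in paths:
        node = doc
        for part in path.split("."):
            node = node[part]
        out[path.replace(".", "_")] = node
    return out

event = {"request": {"user": {"id": "u42"}, "status": 500}, "host": "node-web-01"}
print(flatten(event))
# -> {'request.user.id': 'u42', 'request.status': 500, 'host': 'node-web-01'}
```

Flattened keys make flat queries possible; lifted keys keep the original document intact at the cost of repeating data.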

4. Find the false positives:

There is another thing nested documents, and all logs, might contain: replicated data, where keys repeat throughout sections of the log. This is a problem in any information-architecture project.

When you have repeating keys, it is especially easy to confuse one key with another in full-text search, and in queries as well. Again, this is no problem for the log platform, but humans can easily get confused during interpretation, and the net result can be false positives. One approach is to eliminate the repeated keys, but often that is not even a choice. The next-best approach is simply to be aware, and to use caution wherever you suspect the potential for confusion.
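Awareness can be partly automated: a short audit pass over a sample document can report which key names repeat across sections, flagging where full-text search might match the wrong one. This is a sketch over a made-up event:

```python
from collections import Counter

def repeated_keys(doc):
    """Count how often each key name appears anywhere in a nested log document."""
    counts = Counter()

    def walk(node):
        if isinstance(node, dict):
            for key, value in node.items():
                counts[key] += 1
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(doc)
    return {k: c for k, c in counts.items() if c > 1}

event = {
    "source": {"id": "srv-1", "name": "node-web-01"},
    "destination": {"id": "srv-2", "name": "node-db-01"},
}
print(repeated_keys(event))  # -> {'id': 2, 'name': 2}
```

Here a naive search for `id:srv-1` could just as easily be read as a destination match by a hurried teammate; knowing `id` appears twice is the first defense.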

One thing I imply throughout this post is that you may want to reformat your system logs. But reformatting logs takes a lot of effort and adds an additional step to the process, which is another possible point of failure, so it is not a recommended first approach. Ideally you are confident in your team's ability to be consistent and aware, which makes reformatting unnecessary. Application logs are a different story: your developers often control their format, and the team should spend the time to plan out how those logs should look.

Some of these suggestions also imply a set of rules or strategies for your logs, and those rules or strategies need to be socialized. There are several ways to do this. You can create a library of standard queries that contain common elements, such as user or system queries; some log management services also let you save searches so they are accessible for repeated use and can be shared across a team. You can document the rules and strategies, though published documentation is unfortunately easy to ignore, so this is less than ideal but sometimes necessary, depending on the culture and size of the team. An easier, but harder to measure, approach is to use a consistent spoken language: push your team to talk in terms that map to queries and things everyone understands, such as server naming conventions and settings. It takes a lot of consistent pressure on the team, but it is helpful no matter what. And finally, training: more formal sessions teaching all users the strategies and approaches to the log platform.
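A shared query library does not have to be elaborate; even a version-controlled mapping of agreed team names to query strings gives everyone the same vocabulary. This sketch is illustrative only, and the query syntax is generic pseudo-query text, not tied to any product:

```python
# A tiny, illustrative "library" of standard queries the whole team shares.
STANDARD_QUERIES = {
    "new_registrations": 'event:"user_registered"',
    "pegged_servers": "cpu_pct:>=90",
    "failed_provisioning": 'event:"provision_failed"',
}

def get_query(name):
    """Look up a shared query by its agreed team name."""
    try:
        return STANDARD_QUERIES[name]
    except KeyError:
        raise KeyError(
            f"unknown standard query: {name!r}; known: {sorted(STANDARD_QUERIES)}"
        )

print(get_query("pegged_servers"))  # -> cpu_pct:>=90
```

Because the names live in one file, reviewing a new query name is as cheap as reviewing any other code change, which is what makes the convention stick.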

Log analysis encourages you to dive right in, and you should. But dive in with an idea of what you are going to do after. How are you going to use the logs? What questions are you going to ask regularly? How are you going to report on the data to the rest of the organization so that there is a consistent understanding?

Most log analysis platforms can take whatever you throw at them. By being deliberate about what you feed them, you ensure the greatest query efficiency, enable better reporting, and avoid a lot of frustration.


Published at DZone with permission of Trevor Parsons, DZone MVB. See the original article here.

