Still Using Those Old-School Log Files? — Let’s Use a Log Server Instead!

Logging to files seems easy.

When you start with a small app, you usually log to the console while you are developing it. Of course you are using a proper logging framework underneath, either log4j or logback. As soon as development is done, you add a few lines to your log4j.xml to also log to a file, so operations is happy.
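That file-logging setup typically looks something like the following log4j.xml fragment (the appender name, file path, and pattern are placeholder values, not taken from any particular project):

```xml
<!-- Illustrative log4j 1.x fragment: a rolling file appender for operations.
     Appender name, file path, and pattern are placeholders. -->
<appender name="FILE" class="org.apache.log4j.RollingFileAppender">
  <param name="File" value="/var/log/myapp/myapp.log"/>
  <param name="MaxFileSize" value="10MB"/>
  <param name="MaxBackupIndex" value="5"/>
  <layout class="org.apache.log4j.PatternLayout">
    <param name="ConversionPattern" value="%d{ISO8601} [%t] %-5p %c - %m%n"/>
  </layout>
</appender>

<root>
  <priority value="info"/>
  <appender-ref ref="FILE"/>
</root>
```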

But it leads to manual work later.

When operations knocks on your door with a problem at hand, you ask them for the production log files. They might return to you the next day with an email that contains some attached log files. You start analyzing the files, then the testers knock on your door and report a problem in the test environment. This means you need more log files from operations.

This is a very manual process, and it doesn’t make development, test, and operations happy. Also, someone is going to have to explain to business why it takes so long to investigate problems.

Log server to the rescue!

Wouldn’t it be nice to have immediate access to all log files of development, test, and production systems? Could there be a way to trace a user’s session through all layers, and over cluster nodes? Could everything also be in a nice GUI that's easy and fun to use?

Two years ago we changed the situation by adding a log server to our infrastructure. We decided to deploy logFaces (http://www.moonlit-software.com), which we found easy to install, easy to integrate, and easy to use.

Setup is easy.

Instead of logging only to local files, all of our applications now also log to the central log server. Since we are already using log4j in our applications, it was as easy as adding 11 lines to our log4j.xml files. And because the transmission of log messages happens in a separate thread, response times are not affected.
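logFaces ships its own log4j appender, which I won't reproduce from memory here; but to give you an idea of the shape of such a configuration, here is a generic sketch using log4j's standard SocketAppender wrapped in an AsyncAppender (host and port are placeholders):

```xml
<!-- Generic sketch of logging to a central server with log4j 1.x.
     logFaces provides its own appender classes; this example uses the standard
     SocketAppender instead, with placeholder host/port values. -->
<appender name="LOGSERVER" class="org.apache.log4j.net.SocketAppender">
  <param name="RemoteHost" value="logserver.example.com"/>
  <param name="Port" value="4560"/>
  <param name="ReconnectionDelay" value="10000"/>
</appender>

<!-- The AsyncAppender hands events off to a background thread, so the
     application's response time is not affected by the network call. -->
<appender name="ASYNC" class="org.apache.log4j.AsyncAppender">
  <appender-ref ref="LOGSERVER"/>
</appender>

<root>
  <priority value="info"/>
  <appender-ref ref="FILE"/>   <!-- keep the local file appender as a fallback -->
  <appender-ref ref="ASYNC"/>
</root>
```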

The log server stores all log messages in a database. With logFaces, you have the choice of an SQL or a NoSQL database. We decided to use MongoDB to store the data, as log messages can be of arbitrary length and the total amount of log messages can be limited by size at the database level (which makes housekeeping easy). The log server tags all log messages with the originating server and the domain (in our case the application name, plus a code for the environment and the country).
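That size limit corresponds to MongoDB's capped collections. logFaces manages its own storage, so you don't have to set this up yourself; the following sketch with the legacy Java driver only illustrates the mechanism (database name, collection name, and size are made up):

```java
import com.mongodb.BasicDBObjectBuilder;
import com.mongodb.DB;
import com.mongodb.DBObject;
import com.mongodb.MongoClient;

// Illustration of a size-limited ("capped") MongoDB collection, which is what
// makes housekeeping easy: the oldest documents are dropped automatically once
// the size limit is reached. Names and sizes below are placeholder values.
public class CappedCollectionExample {
    public static void main(String[] args) {
        MongoClient mongo = new MongoClient("localhost", 27017);
        DB db = mongo.getDB("logging");

        DBObject options = BasicDBObjectBuilder.start()
                .add("capped", true)                    // fixed-size collection
                .add("size", 10L * 1024 * 1024 * 1024)  // cap at roughly 10 GB
                .get();
        db.createCollection("logEvents", options);

        mongo.close();
    }
}
```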

And you have the data at your fingertips.

Then you start your local logFaces client, which connects to the server. It's an RCP application with an Eclipse look and feel. You can watch the global trace in real time, or you can define custom filters by environment, application, severity, and other criteria. The full information of a log4j message is retained, including exception stack traces, class names, and context information like MDC/NDC (see http://wiki.apache.org/logging-log4j/NDCvsMDC for more on what NDC and MDC can do for you).
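If you are not using the MDC yet, it is worth adding: a value you put into it, such as a session or correlation id, travels with every log message on that thread and becomes another filter criterion on the log server. A minimal sketch (the key name and class are made-up examples):

```java
import org.apache.log4j.Logger;
import org.apache.log4j.MDC;

// Sketch of tagging all log messages of a request with the user's session id
// via log4j's MDC. The key "sessionId" is an arbitrary example name.
public class RequestHandler {
    private static final Logger LOG = Logger.getLogger(RequestHandler.class);

    public void handle(String sessionId) {
        MDC.put("sessionId", sessionId);
        try {
            LOG.info("processing request");  // carries sessionId in its MDC
            // ... actual work ...
        } finally {
            MDC.remove("sessionId");         // clean up the thread-bound value
        }
    }
}
```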

When you are looking for a historical log event, the query editor helps you find it. Once you have found it, you can look up the log messages leading up to that event and export some or all of them to the clipboard or to a file.

You can use the historical data to generate statistics, or you can simply browse and filter it to find errors in your production logs before your users report them.

It’s a commercial product after all.

You can test it for free for 30 days; after that you'll have to license it. The log server license is 499 USD per server or 1299 USD per site, with an unlimited number of servers for the site. I consider that a fair price, since it allows an unlimited number of applications logging to the server, a log size restricted only by your hardware, and an unlimited number of clients accessing the log server.

Practical things to consider:

We set up one server for the production environment, and another one for all development and test environments. This way, development and test don't fill up the log space reserved for production. Also, access restrictions are easier to manage if you set them up per server instead of per application.

Polyglot logging is more than just Java!

We started out by linking our most important Java applications to the log server. After a while, more applications followed.

We also connected the .NET world to the log server using NLog, and a Java class enabled our PL/SQL stored procedures in the database to log to the log server as well. Since the wire protocol is quite simple (it's basically XML over TCP), you can link almost any source to the log server.
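As an illustration of how simple such an integration can be, here is a rough sketch that writes a log4j-style XML event straight to a TCP socket. The element names follow log4j's XMLLayout; the host, port, and the exact format logFaces expects are assumptions on my part, so treat this purely as a sketch:

```java
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Rough sketch of a hand-rolled client for an XML-over-TCP logging protocol.
// The event format below follows log4j's XMLLayout; the host, port, and the
// exact format expected by logFaces are assumptions, not taken from its docs.
public class TinyLogClient {
    public static void main(String[] args) throws Exception {
        String event =
            "<log4j:event logger=\"demo\" level=\"INFO\" thread=\"main\" timestamp=\""
            + System.currentTimeMillis() + "\">"
            + "<log4j:message>hello from a non-Java source</log4j:message>"
            + "</log4j:event>";

        try (Socket socket = new Socket("logserver.example.com", 4560);
             Writer out = new OutputStreamWriter(socket.getOutputStream(), StandardCharsets.UTF_8)) {
            out.write(event);
            out.flush();
        }
    }
}
```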

What’s next?

From my experience, I can say: Give it a try, even for a small application. If your application has multiple layers that you need to trace, and if you need to handle multiple staging environments, it becomes invaluable to have a log server solution that is easy to access.

Thanks to the direct integration with log4j, it is also a valuable tool during development: for the application you have running locally, you no longer need to watch console output.

Last thing: I'd love to hear from you. Please feel free to share your thoughts and approaches in the comments!
