
Using ELK in .NET Applications


Using the ELK stack in .NET applications can make log collection and monitoring easier and far more effective. We'll take a look at how to get it up and running.

· Web Dev Zone ·


This article is part of a series about modern tooling and techniques for building distributed systems in .NET.

In our first article, we saw how easy it is to set up a full ELK stack by leveraging pre-built containers. If you don't already have your ELK server online and configured, please follow the tutorial in Containers for .NET Developers.

In this post, I show how to leverage ELK (Elasticsearch, Logstash, and Kibana) in a .NET application and aggregate our logs in a single place. You will see how simple it is to start getting some insights into your application.

Collecting Logs


Before we get into the specifics, let's first talk about why you might want to aggregate your application logs in one place. As we continue to build more distributed systems, tracking down errors can feel like herding cats if we have to check a different location for each service we have running.

By aggregating our logs, we get a more unified picture of the overall health of our collection of systems. Having a single source of truth also helps when it comes to doing detailed analytics on service health over time. It also lets us find problems before our end users report them, so that we can stay ahead of support requests.

Why ELK

The main reason for me is the ease of use and interoperability. That said, there are some downsides. Namely, you have to own your logging story within your organization: you have to decide how you want to measure KPIs for your logs, and you have to own all of the infrastructure and maintenance that goes along with that. Sometimes that is not worthwhile, especially from a time and money point of view. We all know time equals money, so if something takes time and money, it takes money².

If you don't want to spend a lot of time building out this sort of infrastructure, I would recommend using something like Operations Management Suite, Stackify, or New Relic. These hosted solutions substantially reduce the overhead of getting off the ground, especially in an existing Ops scenario.

There are also reasons you might want to own that responsibility; notable examples are compliance requirements such as HIPAA, SOX, and PCI. Likewise, you might have very distinct notions of what success looks like for your organization when it comes to system health, so you want to own the business intelligence that comes with operating your internal log aggregation solution.

Bottom line: pick the right tool for the job.

Let’s Get Started

For this example, we are going to use log4net.ElasticSearch, an Elasticsearch appender for log4net.

Start by creating a new console application.


Now add the log4net.ElasticSearch NuGet package (for example, with Install-Package log4net.ElasticSearch in the Package Manager Console).

Add the following code section to your application’s App.config.

Note that the port will be different on your machine. You want the host port that maps to the container's internal port 9200. In my case, that port is 32769.

For this tutorial, we are not buffering log events (note the bufferSize of 0 below), so each event is pushed individually. In a production environment, you would set a buffer size so the appender batches events through Elastic's Bulk API, which is much more efficient.
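For example, a buffered production appender might look like the sketch below (the buffer size of 100 is an illustrative value, not a recommendation; tune it to your event volume):

```xml
<appender name="ElasticSearchAppender" type="log4net.ElasticSearch.ElasticSearchAppender, log4net.ElasticSearch">
  <connectionString value="Server=localhost;Index=log;Port=32769;" />
  <!-- Hold up to 100 events, then push them in a single Bulk API call. -->
  <bufferSize value="100" />
</appender>
```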

Pay attention to the binding redirect, and make sure the newVersion attribute matches the version of log4net you actually have installed. Versions change over time, so the numbers in this tutorial may not match yours exactly.

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net" />
  </configSections>
    <startup> 
        <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.5.2" />
    </startup>
  <runtime>
    <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
      <dependentAssembly>
        <assemblyIdentity name="log4net" publicKeyToken="669e0ddf0bb1aa2a" culture="neutral" />
        <bindingRedirect oldVersion="0.0.0.0-2.0.8.0" newVersion="2.0.8.0" />
      </dependentAssembly>
    </assemblyBinding>
  </runtime>  
  <log4net>
    <appender name="ElasticSearchAppender" type="log4net.ElasticSearch.ElasticSearchAppender, log4net.ElasticSearch">      
      <connectionString value="Server=localhost;Index=log;Port=32769;" />
      <bufferSize value="0" />
    </appender>
    <root>
      <level value="ALL" />
      <appender-ref ref="ElasticSearchAppender" />
    </root>
  </log4net>
</configuration>


The last step from a configuration standpoint is to tell log4net to read its configuration from the XML file and watch it for changes. Open your AssemblyInfo.cs file and add this line:

[assembly: log4net.Config.XmlConfigurator(Watch = true)]

 


Change your program to this:

using System;
using log4net;

class Program
{
    private static readonly ILog _log = LogManager.GetLogger(typeof(Program));

    static void Main(string[] args)
    {
        _log.Error("kaboom!", new ApplicationException("The application exploded"));
    }
}
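One caveat for when you raise bufferSize above 0 in production: a buffered appender only sends events once the buffer fills, so you should flush log4net when the process exits or the final, partially filled batch may never reach Elastic. A minimal sketch of the same program with an explicit shutdown (LogManager.Shutdown is standard log4net):

```csharp
using System;
using log4net;

class Program
{
    private static readonly ILog _log = LogManager.GetLogger(typeof(Program));

    static void Main(string[] args)
    {
        try
        {
            _log.Error("kaboom!", new ApplicationException("The application exploded"));
        }
        finally
        {
            // Flush any buffered events before the process exits; without this,
            // a buffered appender can drop its last batch on shutdown.
            LogManager.Shutdown();
        }
    }
}
```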


You can check that everything is working as expected by running your application and then browsing to the Elastic endpoint we set up earlier with _cat/indices?v appended (curl works just as well).

In my case, it looks like http://localhost:32769/_cat/indices?v

This page shows which indexes Elastic currently has cataloged. What you are looking for is the one we defined earlier, log.

Set Up Kibana

We will need to add an index pattern to Kibana before we can use our new log data. Navigate to the Kibana portal. On my machine this is mapped to port 32770; it will likely be different on yours. You are looking for the external port that maps to the container's internal port 5601.

From the portal, click on Management, then Index Patterns.

Add the index pattern log*, and then you should be able to open the field drop-down and select the timestamp field.

Last, go to the Discover tab in Kibana. You should see the log entry from your app.

Success!

That was easy enough, right? I hope this shows you how simple it is to start getting insights into your application.

As this series continues, we will combine techniques like this with OWIN middleware to create more robust solutions for your production environments. In the next part of this series, we will add a new container to our collection of infrastructure to get insights into cross-service calls using Zipkin.



Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
