
Playing Around With Metricbeat and Elastic Stack 5.0


The release of Elastic Stack 5.0 — the new name for the ELK Stack — has finally been announced.



After a long wait, the greatly anticipated release of Elastic Stack 5.0 — the new name for the ELK Stack — was announced. (You can see our guide on installing the Elastic Stack beta here.)

In the next couple of weeks, we will start to take a closer look at some of the new features.

Since I’ve already covered a number of ways to monitor system metrics with ELK, I want to begin with trying out Metricbeat, a revamped version of Topbeat.

As its name implies, Metricbeat collects a variety of metrics from your server (i.e., from the operating system and from running services) and ships them to an output destination of your choice. These destinations can be ELK components such as Elasticsearch or Logstash, or other data processing platforms such as Redis or Kafka.

Setting Up the EMK Stack (Elasticsearch, Metricbeat, and Kibana)

We’ll start by installing the components we’re going to use to construct the logging pipeline — Elasticsearch to store and index the data, Metricbeat to collect and forward the metrics, and Kibana to analyze them (Logstash has begun its retreat from the stack, something we will discuss in a future article).

If you already have these components installed, feel free to skip to the next step.

Installing Java

First, we need Java 8:

$ sudo add-apt-repository ppa:webupd8team/java

$ sudo apt-get update

$ sudo apt-get install oracle-java8-installer

You can verify the installation using this command:

$ java -version

java version "1.8.0_111"

Java(TM) SE Runtime Environment (build 1.8.0_111-b14)

Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)

Installing Elasticsearch and Kibana

Next up, we’re going to download and install the public signing key for Elasticsearch:

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Save the repository definition to /etc/apt/sources.list.d/elastic-5.x.list:

$ echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

Update the system, and install Elasticsearch:

$ sudo apt-get update && sudo apt-get install elasticsearch

Run Elasticsearch using:

$ sudo service elasticsearch start

You can make sure Elasticsearch is running using the following cURL command:

$ curl "http://localhost:9200"

You should see output similar to this:

 "name":"GLOA3NX",

 "cluster_name":"elasticsearch",

 "cluster_uuid":"C4gM3wLFR9e4br_NQ0ksKQ",

 "version":{

   "number":"5.0.0",

   "build_hash":"253032b",

   "build_date":"2016-10-26T05:11:34.737Z",

   "build_snapshot":false,

   "lucene_version":"6.2.0"

 "tagline":"You Know, for Search"

Next up, we’re going to install Kibana with:

$ sudo apt-get install kibana

To verify Kibana is connected properly to Elasticsearch, open up the Kibana configuration file at: /etc/kibana/kibana.yml, and make sure you have the following configuration defined:

server.port: 5601

elasticsearch.url: "http://localhost:9200"

Then, start Kibana with:

$ sudo service kibana start
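
Though not part of the original walkthrough, a quick way to confirm that Kibana itself came up (assuming the default port of 5601 and the /api/status endpoint that Kibana 5.x exposes) is to query its status API:

$ curl http://localhost:5601/api/status

A healthy instance returns a JSON document whose overall status is reported as green.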

Installing Metricbeat

Our final installation step is installing Metricbeat. To do this, you will first need to download and install the Elasticsearch public signing key (skip this if you already did it in the previous step).

$ wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Next, save the repository definition to /etc/apt/sources.list.d/elastic-5.x.list:

$ echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list

Then, update your system and install Metricbeat:

$ sudo apt-get update && sudo apt-get install metricbeat

Configuring the Pipeline

Now that we’ve got all the components in place, it’s time to build the pipeline. So, our next step involves configuring Metricbeat — defining what data to collect and where to ship it to.

Open the configuration file at /etc/metricbeat/metricbeat.yml.

In the Modules configuration section, you define which system metrics and which services you want to track. Each module collects various metric sets from different services (e.g., Apache, MySQL). These modules, and their corresponding metric sets, need to be defined separately. Take a look at the supported modules here.

By default, Metricbeat is configured to use the system module which collects server metrics, such as CPU and memory usage, network IO stats, and so on.

In my case, I’m going to uncomment some of the metrics commented out in the system module, and add the apache module for tracking my web server.

When done, the configuration of this section looks as follows:

- module: system
  metricsets:
    - cpu
    - load
    - core
    - diskio
    - filesystem
    - fsstat
    - memory
    - network
    - process
  enabled: true
  period: 10s
  processes: ['.*']

- module: apache
  metricsets: ["status"]
  enabled: true
  period: 1s
  hosts: ["http://127.0.0.1"]

Next, you’ll need to configure the output, or in other words where you’d like to send all the data.

Since I’m using a locally installed Elasticsearch, the default configurations will do me just fine. If you’re using a remotely installed Elasticsearch, make sure you update the IP address and port.

output.elasticsearch:

  hosts: ["localhost:9200"]

If you’d like to output to another destination, that’s fine. You can ship to multiple destinations, or comment out the Elasticsearch output configuration and add an alternative output. One such option is Logstash, which can be used to execute additional manipulations on the data and act as a buffering layer in front of Elasticsearch.
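
For example, here is a minimal sketch of a Logstash output section, assuming a Logstash instance listening with the Beats input plugin on its default port of 5044 (comment out output.elasticsearch if Logstash becomes your only output):

output.logstash:

  hosts: ["localhost:5044"]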

Once done, start Metricbeat with:

$ sudo service metricbeat start

You should get output similar to the following:

2016/11/02 11:38:35.026027 beat.go:264: INFO Home path: [/usr/share/metricbeat] Config path: [/etc/metricbeat] Data path: [/var/lib/metricbeat] Logs path: [/var/log/metricbeat]

2016/11/02 11:38:35.026072 beat.go:174: INFO Setup Beat: metricbeat; Version: 5.0.0

2016/11/02 11:38:35.026192 output.go:167: INFO Loading template enabled. Reading template file: /etc/metricbeat/metricbeat.template.json

2016/11/02 11:38:35.026292 logp.go:219: INFO Metrics logging every 30s

2016/11/02 11:38:35.028538 output.go:178: INFO Loading template enabled for Elasticsearch 2.x. Reading template file: /etc/metricbeat/metricbeat.template-es2x.json

2016/11/02 11:38:35.030666 client.go:107: INFO Elasticsearch url: http://localhost:9200

2016/11/02 11:38:35.030741 outputs.go:106: INFO Activated elasticsearch as output plugin.

2016/11/02 11:38:35.030840 publish.go:291: INFO Publisher name: ip-172-31-25-148

2016/11/02 11:38:35.030948 async.go:63: INFO Flush Interval set to: 1s

2016/11/02 11:38:35.030968 async.go:64: INFO Max Bulk Size set to: 50

2016/11/02 11:38:35.031054 metricbeat.go:25: INFO Register [ModuleFactory:[system], MetricSetFactory:[apache/status,haproxy/info,haproxy/stat,mongodb/status,mysql/status,nginx/stubstatus,postgresql/activity,postgresql/bgwriter,postgresql/database,redis/info,redis/keyspace,system/core,system/cpu,system/diskio,system/filesystem,system/fsstat,system/load,system/memory,system/network,system/process,zookeeper/mntr]]

Config OK

Not getting any errors is great, and another way to verify all is running as expected is to query Elasticsearch for created indices:

$ curl http://localhost:9200/_cat/indices?v

health status index                 uuid                   pri rep docs.count docs.deleted store.size pri.store.size
yellow open   metricbeat-2016.11.02 gdQIYsr9QRaAw3oJQGgVTA   5   1        924            0    843.7kb        843.7kb

Analyzing the Data in Kibana

Our final step is understanding how to analyze and visualize the data so that we can extract insights from the logged metrics.

To do this, we first need to define a new index pattern for the Metricbeat data.

In Kibana (http://localhost:5601), open the Management page and define the Metricbeat index pattern (metricbeat-*) in the Index Patterns tab (if this is the first time you’re analyzing data in Kibana, this page will be displayed by default):

[Screenshot: configuring the Metricbeat index pattern in Kibana]

Select @timestamp as the time-field name and create the new index pattern.
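
As a quick sanity check (not part of the original walkthrough), you can also confirm from the command line that documents are already flowing into the metricbeat-* indices, using Elasticsearch's _count API:

$ curl "http://localhost:9200/metricbeat-*/_count?pretty"

The returned count should be greater than zero and should keep growing as Metricbeat ships new metrics.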

Opening the Discover page, you should see all the Metricbeat data being collected and indexed.

[Screenshot: Metricbeat data in the Kibana Discover page]

If you recall, we are monitoring two types of metrics: system metrics and Apache metrics. To be able to differentiate the two streams of data, a good place to start is by adding some fields to the logging display area.

Start by adding the metricset.module and metricset.name fields.

[Screenshot: the metricset.module and metricset.name fields added to the Discover view]
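
These fields also make it easy to filter Discover from the Kibana search bar. For example, the following Lucene-style queries (built only from the field names shown above) isolate each stream:

metricset.module: apache

metricset.module: system AND metricset.name: cpu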

Visualizing the Data

Kibana is renowned for its visualization capabilities. As an example, let’s create a simple visualization that displays CPU usage over time.

To do this, open the Visualize page and select the Line Chart visualization type.

We’re going to compare, over time, CPU usage in user space and kernel space. Here is the configuration and the end result:

[Screenshot: configuration and resulting chart of user vs. kernel CPU usage]
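
For reference, here is a rough sketch of the settings behind such a chart. The field names assume the default Metricbeat 5.0 system/cpu mappings, so adjust them if your template differs:

Y-Axis (metric 1): Average of system.cpu.user.pct

Y-Axis (metric 2): Average of system.cpu.system.pct

X-Axis (bucket): Date Histogram on @timestamp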

Luckily for us, Elastic provides an easy way to get started with building visualizations of the data: a ready-made Metricbeat dashboard that you can download. This saves us the time of figuring out how to build the visualizations ourselves, a task that can be fun but can also consume quite a lot of time if you’re new to Kibana.

Note: If you’re using Logz.io, you’ll find a pre-made Metricbeat dashboard in ELK Apps — our library of pre-made visualizations, dashboards, alerts and searches for various data types.

To use the dashboard, cd into the Metricbeat installation folder and execute the installation script:

$ cd /usr/share/metricbeat/

$ ./scripts/import_dashboards
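
If your Elasticsearch instance is not running locally, the import script can be pointed at it explicitly. As far as I recall, the Beats 5.x script accepts an -es flag for the Elasticsearch URL, but treat the exact flag name as an assumption and check the script's usage output on your machine:

$ ./scripts/import_dashboards -es http://<your-elasticsearch-host>:9200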

After the script downloads all the dashboards, all you have to do is open up the Dashboard page, select Open, and choose which dashboard you’d like to use.

[Screenshot: a ready-made Metricbeat dashboard in Kibana]

In Summary

Playing around with new technology in a sandbox environment is always fun and worry-free. Deploying in production is an entirely different ballgame, and it’s no wonder we still meet ELK users running Elasticsearch 1.x. Still, Elastic Stack 5.0 is a major improvement over the previous version, both in terms of user experience and in terms of performance and stability.


Topics: integration, metricbeat, elasticstack, kibana

Published at DZone with permission of Daniel Berman, DZone MVB. See the original article here.

