
How to Install the ELK Stack on Google Cloud Platform


In this article, I will guide you through the process of installing the ELK Stack (Elasticsearch 2.2.x, Logstash 2.2.x and Kibana 4.4.x) on Google Cloud Platform (GCP).

While still lagging far behind Amazon Web Services, GCP is slowly gaining popularity, especially among early adopters and developers — but also among a number of enterprises. Among the reasons for this trend are the full ability to customize virtual machines before provisioning them, positive performance benchmarking compared to other cloud providers, and overall reduced cost.

These reasons caused me to test the installation of the world’s most popular open source log analysis platform, the ELK Stack, on this cloud offering. The steps below describe how to install the stack on a vanilla Ubuntu 14.04 virtual machine and establish an initial pipeline of system logs. Don’t worry about the costs of testing this workflow — GCP offers a nice sum of $300 for a trial (but don’t forget to delete the VM once you’re done!).

Setting Up Your Environment

For the purposes of this article, I launched an Ubuntu 14.04 virtual machine instance in GCP’s Compute Engine. I enabled HTTP/HTTPS traffic to the instance and changed the default machine type to one with 7.5 GB of memory.

Also, I created firewall rules within the Networking console to allow incoming TCP traffic to Elasticsearch and Kibana on ports 9200 and 5601, respectively.


Installing Java

All of the packages we are going to install require Java, so this is the first step we’re going to describe (skip to the next step if you’ve already got Java installed).

Use this command to install Java:
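On a vanilla Ubuntu 14.04 instance, one simple option (Oracle Java works just as well, but OpenJDK is the easiest to script) is the default JRE package:

```shell
sudo apt-get update
sudo apt-get install -y default-jre
```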

Verify that Java is installed:

$ java -version

If the output of the previous command is similar to this, you’ll know that you’re on track:
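The exact build numbers will vary by machine and patch level, but for OpenJDK 7 on Ubuntu 14.04 it should look roughly like this:

```
java version "1.7.0_95"
OpenJDK Runtime Environment (IcedTea 2.6.4) (7u95-2.6.4-0ubuntu0.14.04.2)
OpenJDK 64-Bit Server VM (build 24.95-b01, mixed mode)
```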

Installing Elasticsearch

Elasticsearch is in charge of indexing and storing the data shipped from the various data sources, and can be called the “heart” of the ELK Stack.

To begin the process of installing Elasticsearch, add the following repository key:
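Elastic signs its 2.x packages with a single GPG key, which can be fetched and registered with apt in one line:

```shell
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
```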

Next, add the Elasticsearch repository definition to your apt sources:
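Following the same pattern used for the Logstash source list later in this article, the Elasticsearch 2.x repository entry would be:

```shell
echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list
```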

And finally, install:
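With the key and source list in place, an apt update followed by the install pulls in the package:

```shell
sudo apt-get update
sudo apt-get install -y elasticsearch
```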

Before we start the service, we’re going to open the Elasticsearch configuration file and define the host on our network:
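With the deb package, the configuration file lives at /etc/elasticsearch/elasticsearch.yml (use whichever editor you prefer; nano is just my choice here):

```shell
sudo nano /etc/elasticsearch/elasticsearch.yml
```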

In the Network section of the file, locate the line that specifies ‘network.host’, uncomment it, and set it to the address you want Elasticsearch to bind to (for example, “0.0.0.0” to listen on all interfaces; see the production tip below before exposing the node publicly):


Last but not least, restart the service:
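On Ubuntu 14.04 the deb package installs an init script, so the service can be restarted with:

```shell
sudo service elasticsearch restart
```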

To make sure that Elasticsearch is running as expected, issue the following cURL:

$ curl localhost:9200

If the output is similar to the output below, you will know that Elasticsearch is running properly:
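The node name is randomly generated and the build fields are abbreviated here, but for Elasticsearch 2.2.x the response should have roughly this shape:

```
{
  "name" : "Ares",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.2.0",
    ...
  },
  "tagline" : "You Know, for Search"
}
```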

Production tip: Do NOT open port 9200 (or any other Elasticsearch port) to the world! There are bots that scan for exposed 9200 endpoints and execute Groovy scripts to take over machines.

Logstash Installation

Moving on, it’s time to install Logstash — the stack’s log shipper.

Using Logstash to parse and forward your logs into Elasticsearch is, of course, optional. There are other log shippers that can output to Elasticsearch directly, such as Filebeat and Fluentd, so I would recommend some research before you opt for using Logstash.

Since Logstash is available from the same repository as Elasticsearch and we have already installed that public key in the previous section, we’re going to start by creating the Logstash source list:

$ echo 'deb http://packages.elastic.co/logstash/2.2/debian stable main' | sudo tee /etc/apt/sources.list.d/logstash-2.2.x.list

Next, we’re going to update the package database:

$ sudo apt-get update

Finally — we’re going to install Logstash:

To start Logstash, execute:
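The deb package registers Logstash as a service, so:

```shell
sudo service logstash start
```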

And to make sure Logstash is running, use:
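The matching status check is:

```shell
sudo service logstash status
```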

The output should be:

logstash is running

We’ll get back to Logstash later to configure log shipping into Elasticsearch.

Kibana Installation

The process for installing Kibana, ELK’s pretty user interface, is identical to that of installing Logstash.

Create the Kibana source list:
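As with Logstash, this is a one-line addition to the apt sources, pointing at the Kibana 4.4 repository:

```shell
echo "deb http://packages.elastic.co/kibana/4.4/debian stable main" | sudo tee -a /etc/apt/sources.list.d/kibana-4.4.x.list
```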

Update the apt package database:

$ sudo apt-get update

Then, install Kibana with this command:
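After refreshing the package database, install the package itself:

```shell
sudo apt-get install -y kibana
```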

Kibana is now installed.

We now need to configure the Kibana configuration file at /opt/kibana/config/kibana.yml:
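Open the file in an editor (again, the editor is your choice):

```shell
sudo vi /opt/kibana/config/kibana.yml
```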

Uncomment the following lines:
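In Kibana 4.4 these settings ship commented out with their default values; uncommenting them makes the port and bind address explicit:

```yaml
server.port: 5601
server.host: "0.0.0.0"
```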

Last but not least, start Kibana:
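The deb package installs a service script, so starting it mirrors the other components:

```shell
sudo service kibana start
```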

You should be able to access Kibana in your browser at http://<serverIP>:5601/ like this:


By default, Kibana connects to the Elasticsearch instance running on localhost, but you can connect to a different Elasticsearch instance instead. Simply modify the Elasticsearch URL in the Kibana configuration file we edited earlier (/opt/kibana/config/kibana.yml) and then restart Kibana.

If you cannot see Kibana, there is most likely an issue with GCP networking or firewalls. Please verify the firewall rules that you defined in GCP’s Networking console.

Establishing a Pipeline

To start analyzing logs in Kibana, at least one Elasticsearch index pattern needs to be defined. You will notice that since we have not yet shipped any logs, Kibana is unable to fetch the mapping (as indicated by the grey button at the bottom of the page).

Our final step in this tutorial is to establish a pipeline of logs (system logs, in this case) from syslog to Elasticsearch via Logstash.

First, create a new Logstash configuration file:
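The file name and location below are my own choice; any .conf file under Logstash’s conf.d directory will do:

```shell
sudo vi /etc/logstash/conf.d/10-syslog.conf
```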

Use the following configuration:
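A minimal configuration matching the description that follows would look like this (the rubydebug codec is my addition, purely for readable stdout output):

```
input {
  file {
    path => ["/var/log/syslog", "/var/log/*.log"]
    type => "syslog"
  }
}

filter {
}

output {
  stdout {
    codec => rubydebug
  }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```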

A few words on this configuration.

Put simply, we’re telling Logstash to ship the local syslog file ‘/var/log/syslog’ and all the files under ‘/var/log/*.log’ to Elasticsearch.

The input section specifies which files to collect (path) and what format to expect (syslog). The output section uses two outputs — stdout and elasticsearch.

I left the filter section empty in this case, but usually this is where you would define rules to beautify the log messages using Logstash plugins such as grok. Learn more about Logstash grokking.

The stdout output is used to debug Logstash, and the result is nicely-formatted log messages under ‘/var/log/logstash/logstash.stdout’. The Elasticsearch output is what actually stores the logs in Elasticsearch.

Please note that in this example I am using ‘localhost’ as the Elasticsearch hostname. In a real production setup, however, it is recommended to have Elasticsearch and Logstash installed on separate machines so the hostname would be different.

Next, run Logstash with this configuration:
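Assuming the configuration was saved as /etc/logstash/conf.d/10-syslog.conf (adjust the path to wherever you saved yours), Logstash 2.2’s deb package puts the binary under /opt/logstash:

```shell
sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/10-syslog.conf
```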

You should see JSON output in your terminal indicating Logstash is performing as expected.

Refresh Kibana in your browser, and you’ll notice that the Create button is now green, meaning Kibana has found an Elasticsearch index. Click it to create the index and select the Discover tab.

Your logs will now begin to appear in Kibana:


Last, but Not Least

Installing ELK on GCP was smooth going — even easy — compared to AWS. Of course, as my goal was only to test the installation and establish an initial pipeline, I didn’t stretch the stack to its limits. Logstash and Elasticsearch can cave under heavy loads, and the real challenge is scaling and maintaining the stack in the long run. In a future post, I will compare the performance of ELK on GCP versus AWS.



Published at DZone with permission of

