Monitor Your Infrastructure With InfluxDB and Grafana on Kubernetes
Build your own enterprise-grade, open-source infrastructure monitoring on AWS EKS, InfluxDB, and Grafana.
Monitoring your infrastructure and applications is a must-have if you take operations seriously. Overseeing your entire landscape (running servers, cloud spend, VMs, containers, and the applications inside them) is extremely valuable for avoiding outages and fixing issues faster. We at Starschema rely on open-source tools like InfluxDB, Telegraf, Grafana, and Slack to collect, analyze, and react to events. In this blog series, I will show you how we built our monitoring stack to watch our cloud infrastructure, applications like Tableau Server and Deltek Maconomy, and data pipelines in Airflow, among others.
In this part, we will build up the basic infrastructure monitoring with InfluxDB, Telegraf, and Grafana on Amazon’s managed Kubernetes service: AWS EKS.
Create a New EKS Kubernetes Cluster
If you have an EKS cluster already, just skip this part.
I assume you have a properly set up aws CLI on your computer; if not, please do it, it will be a life-changer. Then, install eksctl, which will help you manage your AWS Elastic Kubernetes Service clusters and will save tons of time by not requiring you to rely on the AWS Management Console. Also, you will need kubectl to interact with the cluster itself.

First, create a new Kubernetes cluster in AWS using eksctl, without a nodegroup:
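Something along these lines should do it; a sketch, where the cluster name monitoring is just an example:

```shell
# Create the EKS control plane only; nodes are added in a separate step
eksctl create cluster \
  --name monitoring \
  --region eu-central-1 \
  --without-nodegroup
```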
I used the eu-central-1 region, but you can pick another one that is closer to you. After the command completes, add a new nodegroup to the freshly created cluster that uses only one availability zone (AZ):
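A sketch of such a nodegroup, assuming the cluster above; the nodegroup name, instance type, and node counts are illustrative:

```shell
# Pin the nodegroup to a single AZ so EBS-backed volumes
# stay reachable when autoscaling adds new nodes
eksctl create nodegroup \
  --cluster monitoring \
  --region eu-central-1 \
  --name monitoring-workers \
  --node-type t3.medium \
  --nodes 1 --nodes-min 1 --nodes-max 3 \
  --node-zones eu-central-1a
```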
The reason I created a single-AZ nodegroup is to be able to use EBS-backed persistent volumes along with EC2 autoscaling groups. On multi-AZ nodegroups with autoscaling, newly created nodes can land in a different zone, without access to the existing persistent volumes (which are AZ-specific). More info about this here.
TL;DR: Use single-zone nodegroups if you have EBS PersistentVolumeClaims.
If things are fine, you should see a node in your cluster:
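You can verify with kubectl; the node name and version will differ on your cluster:

```shell
kubectl get nodes
# NAME                                             STATUS   ROLES    AGE   VERSION
# ip-192-168-12-34.eu-central-1.compute.internal   Ready    <none>   2m    v1.16.x
```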
Create a Namespace for Monitoring Apps
Kubernetes namespaces are isolated units inside the cluster. To create our own monitoring namespace, we simply execute:
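```shell
kubectl create namespace monitoring
```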
For our convenience, let's use the monitoring namespace as the default one:
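```shell
kubectl config set-context --current --namespace=monitoring
```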
Install InfluxDB on Kubernetes
InfluxDB is a time-series database with easy-to-use APIs and good performance. If you are not familiar with time-series databases, it is time to learn: they support special query languages designed for time-series data, plus neat features like downsampling and retention policies.
To install an application on our Kubernetes cluster, we usually do the following:

- (Optional) Create the necessary secrets as an Opaque Secret (to store sensitive configuration).
- (Optional) Create a ConfigMap to store non-sensitive configuration.
- (Optional) Create a PersistentVolumeClaim to store any persistent data (think of volumes for your containers).
- Create a Deployment or DaemonSet file to specify the container-related stuff: what we are going to run.
- (Optional) Create a Service file explaining how we are going to access the application.
As stated, the first thing we need to do is to define our Secrets: the usernames and passwords we want to use for our database.
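One way to do this is with kubectl create secret. The variable names below are the init variables understood by the official influxdb 1.x Docker image; the database name, usernames, and passwords are placeholders you should change:

```shell
kubectl create secret generic influxdb-creds \
  --from-literal=INFLUXDB_DB=monitoring \
  --from-literal=INFLUXDB_ADMIN_USER=admin \
  --from-literal=INFLUXDB_ADMIN_PASSWORD=changeme \
  --from-literal=INFLUXDB_USER=telegraf \
  --from-literal=INFLUXDB_USER_PASSWORD=changeme \
  --from-literal=INFLUXDB_READ_USER=readonly \
  --from-literal=INFLUXDB_READ_USER_PASSWORD=changeme \
  --from-literal=INFLUXDB_HTTP_AUTH_ENABLED=true
```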
Next, create some persistent storage to store the database itself:
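A minimal sketch, assuming EKS's default gp2 EBS storage class:

```yaml
# influxdb-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2   # EKS default, EBS-backed (and AZ-specific)
  resources:
    requests:
      storage: 5Gi
```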
If you are new to Kubernetes, the way to execute these files is to call kubectl apply -f <filename>; in our case, kubectl apply -f influxdb-pvc.yml.
Now, let's create the Deployment that defines what containers we need and how to run them:
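A sketch of such a deployment, assuming the influxdb-creds secret and the influxdb-pvc claim from above; the image tag is illustrative:

```yaml
# influxdb-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: influxdb
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: influxdb
  template:
    metadata:
      labels:
        app: influxdb
    spec:
      containers:
        - name: influxdb
          image: influxdb:1.8          # 1.x image honors the INFLUXDB_* env vars
          envFrom:
            - secretRef:
                name: influxdb-creds   # credentials injected as env variables
          volumeMounts:
            - name: influxdb-data
              mountPath: /var/lib/influxdb
      volumes:
        - name: influxdb-data
          persistentVolumeClaim:
            claimName: influxdb-pvc    # the 5GB EBS volume from above
```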
It will create a single pod (since replicas=1), passing our influxdb-creds as environment variables and using the influxdb-pvc PersistentVolumeClaim to obtain 5GB storage for the database files. If all goes well, we should see something like:
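```shell
kubectl get pods
# NAME                        READY   STATUS    RESTARTS   AGE
# influxdb-69bcbf8997-tk2xj   1/1     Running   0          1m
```

(The pod's hash suffix is generated and will differ on your cluster.)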
After we defined what we want to run, it's time to decide how to access it. This is where the Service definition comes into the picture. Let's start with a basic LoadBalancer-type Service:
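A minimal sketch; on EKS, a LoadBalancer Service provisions a classic ELB by default:

```yaml
# influxdb-service.yml
apiVersion: v1
kind: Service
metadata:
  name: influxdb
  namespace: monitoring
spec:
  type: LoadBalancer        # provisions an AWS ELB
  selector:
    app: influxdb
  ports:
    - name: api
      port: 8086            # InfluxDB HTTP API port
      targetPort: 8086
```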
It tells that our pod's 8086 port (the InfluxDB HTTP API) should be available through an Elastic Load Balancer (ELB). With kubectl get service, we should see the external-facing host:port (assuming we want to monitor apps outside of our AWS internal network).
This is great, but instead of HTTP, we might want to use HTTPS. To do that, we need our SSL certificate in ACM with the desired hostname. We can either generate a new certificate (which requires a Route53 hosted zone) or upload our external SSL certificate.
If we have our certificate in ACM, we should add its ARN to the Service definition as annotations:
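A sketch using the standard AWS load balancer annotations; the certificate ARN is a placeholder for your own:

```yaml
# influxdb-service.yml (with TLS termination on the ELB)
apiVersion: v1
kind: Service
metadata:
  name: influxdb
  namespace: monitoring
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:eu-central-1:<account-id>:certificate/<certificate-id>
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
spec:
  type: LoadBalancer
  selector:
    app: influxdb
  ports:
    - name: http
      port: 8086
      targetPort: 8086
    - name: https          # TLS terminated on the ELB, plain traffic to the pod
      port: 443
      targetPort: 8086
```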
After executing this file, we can see that our ELB listens on two ports: plain HTTP on 8086 and HTTPS on 443. SSL is properly configured; the only thing missing is a CNAME record pointing to the load balancer's hostname. We are all set: our database is running, and it is available over both HTTP and HTTPS.
Installing Telegraf on Kubernetes
We need some data to validate our installation, and conveniently, we already have a system to monitor: our very own Kube cluster and its containers. To do this, we will install Telegraf on all nodes and ingest CPU, I/O, and Docker metrics into our InfluxDB. Telegraf has tons of plugins to collect data from almost everything: infrastructure elements, log files, web apps, and so on.
The configuration will be stored as a ConfigMap; this is what we are going to pass to our containers:
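A sketch of such a configuration; the input plugin selection is illustrative, and the output section reuses the credentials from the influxdb-creds secret via Telegraf's environment-variable substitution:

```yaml
# telegraf-config.yml
apiVersion: v1
kind: ConfigMap
metadata:
  name: telegraf
  namespace: monitoring
data:
  telegraf.conf: |
    [agent]
      hostname = "$HOSTNAME"
    [[outputs.influxdb]]
      urls = ["http://influxdb:8086/"]   # the in-cluster Service name
      database = "$INFLUXDB_DB"
      username = "$INFLUXDB_USER"
      password = "$INFLUXDB_USER_PASSWORD"
    [[inputs.cpu]]
    [[inputs.mem]]
    [[inputs.diskio]]
    [[inputs.docker]]
      endpoint = "unix:///var/run/docker.sock"
```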
To run our Telegraf data collector on all nodes of our Kubernetes cluster, we should use a DaemonSet instead of a Deployment:
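A sketch, assuming the ConfigMap above. The Docker socket is mounted for the docker input plugin; depending on the image version, you may need to adjust permissions on the socket:

```yaml
# telegraf-daemonset.yml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: telegraf
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: telegraf
  template:
    metadata:
      labels:
        app: telegraf
    spec:
      containers:
        - name: telegraf
          image: telegraf:1.14
          env:
            - name: HOSTNAME          # report the node name, not the pod name
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName
          envFrom:
            - secretRef:
                name: influxdb-creds  # same credentials as the database
          volumeMounts:
            - name: config
              mountPath: /etc/telegraf
            - name: docker-socket
              mountPath: /var/run/docker.sock
      volumes:
        - name: config
          configMap:
            name: telegraf
        - name: docker-socket
          hostPath:
            path: /var/run/docker.sock
```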
Please note that this will use the same influxdb-creds secret definition to connect to our database. If all goes well, we should see our Telegraf agent running:
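```shell
kubectl get pods -l app=telegraf
# NAME             READY   STATUS    RESTARTS   AGE
# telegraf-b2cgh   1/1     Running   0          30s
```

(One pod per node; the suffix will differ.)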
To check the log messages from the Telegraf pod, simply execute kubectl logs <podname>. You should not see any error messages.
Set Up Grafana in Kubernetes
This will be the fun part. Finally, we should be able to see some of the data we collected (and remember, we will add everything). Grafana is a cool, full-featured data visualization tool for time-series datasets.
Let’s start with the usual username and password combo as a secret.
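For instance, using Grafana's admin credential environment variables; the secret name grafana-creds and the values are placeholders:

```shell
kubectl create secret generic grafana-creds \
  --from-literal=GF_SECURITY_ADMIN_USER=admin \
  --from-literal=GF_SECURITY_ADMIN_PASSWORD=changeme
```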
Add 1GB storage to store the dashboards:
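The same pattern as before; the claim name grafana-pvc is an assumption carried through the deployment below:

```yaml
# grafana-pvc.yml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: gp2
  resources:
    requests:
      storage: 1Gi
```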
Define the deployment. As the Grafana Docker image runs as uid:gid 472, we have to mount the persistent volume with a matching fsGroup security context:
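A sketch, assuming the secret and claim names from above; the image tag is illustrative:

```yaml
# grafana-deployment.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana
  namespace: monitoring
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472               # grafana/grafana runs as uid:gid 472
      containers:
        - name: grafana
          image: grafana/grafana:6.7.3
          envFrom:
            - secretRef:
                name: grafana-creds
          volumeMounts:
            - name: grafana-data
              mountPath: /var/lib/grafana
      volumes:
        - name: grafana-data
          persistentVolumeClaim:
            claimName: grafana-pvc
```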
Finally, let’s expose it in the same way we did with InfluxDB:
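A minimal sketch, exposing Grafana's default port 3000 behind an ELB:

```yaml
# grafana-service.yml
apiVersion: v1
kind: Service
metadata:
  name: grafana
  namespace: monitoring
spec:
  type: LoadBalancer
  selector:
    app: grafana
  ports:
    - name: http
      port: 80
      targetPort: 3000    # Grafana's default HTTP port
```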
Voila, we should have our Grafana up and running. Let's check the ELB address with kubectl get services, point a nice hostname to its hostname/IP, and we are good to go. If all is set, we should see the Grafana login page.
Use the username/password combination you defined earlier, and see the magic.
Define Database Connection to InfluxDB
While this can be done programmatically, to keep this post short (it's already way too long), let's do it from the UI. Click on the gear icon, then Data Sources, then Add data source.
Enter http://influxdb:8086/ as the URL, and set up your readonly InfluxDB user.
Adding our First Grafana Dashboard
Our Telegraf agent is already loading data, so there is no reason not to look at it. We can import existing, community-built dashboards, such as this one: https://grafana.com/grafana/dashboards/928.
Click on the + sign on the sidebar, then Import. On the import screen, add the ID of this dashboard (928).
After importing, we should immediately see our previously collected data, live.
Feel free to start building your own dashboards, it is way easier than you think.
In the next blog, I will show how to monitor our (and our customers') Tableau Server and how to set up data-driven email/Slack alerts in no time.