How To Set Up Monitoring Using Prometheus and Grafana

Monitoring our microservices is as important as developing them. In this article, we see how you can monitor your microservices using Prometheus and Grafana.

By Noorain Panjwani · Feb. 25, 2021 · Tutorial

Introduction

So you have finally written and deployed your first microservice? Or maybe you have decided to embark on the microservices adventure to future-proof yourself? Either way, congratulations!

It’s time to take the next step. It's time to set up monitoring!

Why Setting Up Monitoring Is Important

Monitoring is super important. It's the part of your system that lets you know what’s going on in your app. And it isn’t just a dashboard with fancy charts.

Monitoring is the systematic process of aggregating actionable metrics and logs.

The keyword here is actionable. You are collecting all these metrics and logs to make decisions based on them.

For example, you would want to collect health metrics for your VMs and microservices to ensure you have enough healthy capacity to service user requests. You’d also like to trigger emails and notifications when failures cross a certain threshold.

This is what monitoring helps you achieve. This is why you need to set up monitoring.

As is clearly evident, your monitoring stack is a source from which several processes can be automated. So it's really important to make sure your monitoring stack is reliable and that it can scale with your application.

Prometheus has become the go-to monitoring stack in recent times. Its novel pull-based architecture, along with its in-built support for alerting, makes it an ideal choice for a wide variety of workloads.

In this article, we’ll use Prometheus to set up monitoring. For visualizations, we’ll use Grafana.

This guide is available in video format as well. Feel free to refer to it.


I’ve set up a GitHub repository for you guys as well. Use it to reproduce everything we’ll be doing today.

Prometheus and Grafana

Prometheus is a metrics aggregator. Instead of your services pushing metrics to it, like in the case of a database, Prometheus pulls metrics from your services.

It expects each service to expose an endpoint that publishes all of its metrics in a particular format. All we need to do is tell Prometheus the addresses of such services, and it will begin scraping them periodically.

Prometheus then stores these metrics for a configured duration and makes them available for querying. It pretty much acts like a time-series database in that regard.
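
For context, this is roughly what a scraped endpoint returns: plain text in Prometheus' exposition format. The metric below is purely illustrative (our services won’t expose it themselves; HAProxy will do this on their behalf later), but it shows the shape Prometheus expects:

Plain Text

# HELP http_requests_total Total number of HTTP requests handled
# TYPE http_requests_total counter
http_requests_total{method="GET",code="200"} 1027
http_requests_total{method="GET",code="500"} 3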

Along with that, Prometheus has got first-class support for alerting using AlertManager. With AlertManager, you can send notifications via Slack, email, PagerDuty, and tons of other mediums when certain triggers go off.
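
We won’t configure alerting in this tutorial, but to give a sense of what a trigger looks like, here is a minimal, illustrative Prometheus alerting rule that fires when any scrape target stops responding; AlertManager then takes care of routing the notification to Slack, email, and so on:

YAML

groups:
- name: example-alerts
  rules:
  - alert: TargetDown
    expr: up == 0                      # 'up' is 0 when Prometheus fails to scrape a target
    for: 1m                            # the target must stay down for a minute before firing
    labels:
      severity: critical
    annotations:
      summary: "{{ $labels.job }} target {{ $labels.instance }} is down"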

All of this makes it super easy to set up monitoring using Prometheus.

While Prometheus and AlertManager provide a well-rounded monitoring solution, we need dashboards to visualize what’s going on within our cluster. That’s where Grafana comes in.

Grafana is a fantastic tool that can create outstanding visualizations. Grafana itself can’t store data, but you can hook it up to various data sources, including Prometheus, and pull metrics from them.

So the way it works is this: we use an aggregator like Prometheus to scrape all the metrics. Then we link Grafana to it to visualize the data in the form of dashboards.

Easy!

What We'll Be Monitoring

Microservices

We’ll be monitoring two microservices today. The first microservice is written by a math genius to perform insane calculations. By insane calculations, I mean adding two numbers. It’s a simple HTTP endpoint that expects two numbers in the request and responds with their sum.

The second one is a polite greeter service that takes a name as a URL parameter and responds with a greeting.

Here’s what the endpoints look like:

Service         | URL              | Response
Math Service    | /add/:num1/:num2 | { "value": SUM_OF_NUMBERS }
Greeter Service | /greeting/:name  | { "greeting": "hello NAME" }

And here’s the link to the code:

  • Math Service: Written in Node.js
  • Greeter Service: Written in Golang

Both of these services will be running inside Docker. Neither of them has any code related to monitoring whatsoever. In fact, they don’t even know that they are being monitored.

Metrics

We will be monitoring a couple of metrics. This is what the final dashboard will look like.

Final dashboard of metric monitor

First is the CPU & memory utilization of our services. Since all of our services will be running in Docker, we’ll use cAdvisor to collect these metrics.

cAdvisor is a really neat tool. It collects container metrics directly from the host and makes them available for Prometheus to scrape. You don’t really need to configure anything for it to work.

We’ll also be collecting HTTP metrics. These are also called L7 (layer 7) metrics.

What we are interested in is the requests coming in per second grouped by response status code. Using this metric alone, we can infer the error rates and total throughput of each service.

Now, we could modify our services to collect these metrics and make them available for Prometheus to scrape. But that sounds like a lot of work. Instead, we will put a reverse proxy like HAProxy in front of our services, which can collect metrics on our behalf.

The reverse proxy plays two roles: it exposes our apps to the outside world, and it collects metrics while it’s at it. Unfortunately, HAProxy doesn’t export metrics in a Prometheus-compatible format. Luckily, the community has built an exporter that can do this for us.

We still have a small problem with this setup: HAProxy won’t capture metrics for direct service-to-service communication, since that traffic bypasses it altogether. We can close this gap entirely by using something called a service mesh. You can read more about it in this article.

Our Final Monitoring Setup

Our final monitoring setup using Prometheus and Grafana will look something like this:

Final monitoring setup

Deploying Our Monitoring Stack

Finally, it’s time to get our hands dirty.

To get our services up, we’ll write a docker-compose file. Docker Compose is an awesome way to describe all the containers we need in a single YAML file. Again, all the resources can be found in this GitHub repo.

YAML

version: '3'

services:
  ###############################################################
  #                Our core monitoring stack                    #
  ###############################################################
  prometheus:
    image: prom/prometheus
    ports:
    - 9090:9090                                       # Prometheus listens on port 9090
    volumes:
    - ./prometheus.yml:/etc/prometheus/prometheus.yml # We mount a custom prometheus config
                                                      # file to scrape cAdvisor and HAProxy
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M

  grafana:                                            # Grafana needs no config file since
    image: grafana/grafana                            # we configure it once it's up
    ports:
    - 3000:3000                                       # Grafana listens on port 3000
    depends_on: [prometheus]

  ###############################################################
  #            Agent to collect runtime metrics                 #
  ###############################################################
  cadvisor:
    image: google/cadvisor:latest
    container_name: cadvisor
    volumes:                                          # Don't ask me why I mounted all these
    - /:/rootfs:ro                                    # directories. I simply copied these
    - /var/run:/var/run:rw                            # mounts from the documentation.
    - /sys:/sys:ro
    - /var/lib/docker/:/var/lib/docker:ro
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 512M

  ###############################################################
  #                           HAProxy                           #
  ###############################################################
  haproxy:                                             # We are using HAProxy as our reverse
    image: haproxy:2.3                                 # proxy here
    ports:
    - 11000:11000                                      # I've configured HAProxy to run on 11000
    volumes:
    - ./haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg # We mount a custom config file to proxy
    deploy:                                            # between both the services
      resources:
        limits:
          cpus: '0.5'
          memory: 512M
    depends_on: [svc-greeter, svc-math]

  haproxy-exporter:
    image: prom/haproxy-exporter                       # Need to point the exporter to haproxy
    command: '--haproxy.scrape-uri="http://haproxy:8404/stats;csv"'
    depends_on: [haproxy]

  ###############################################################
  #                       Our Microservices                     #
  ###############################################################
  svc-greeter:                                         # These are our services. Nothing fancy
    image: spaceuptech/greeter
    deploy:
      resources:
        limits:
          cpus: '0.05'
          memory: 512M

  svc-math:
    image: spaceuptech/basic-service
    deploy:
      resources:
        limits:
          cpus: '0.05'
          memory: 512M



I have also gone ahead and made custom config files for Prometheus and HAProxy.

For Prometheus, we are setting up scraping jobs for cAdvisor and the HAProxy exporter. All we need to do is give Prometheus the host and port of each target. Unless told otherwise, Prometheus fires HTTP requests at the /metrics endpoint of each target to retrieve metrics.
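
The full file is in the GitHub repo, but a minimal sketch of what it could look like is shown below (the target ports assume cAdvisor’s default 8080 and the HAProxy exporter’s default 9101):

YAML

global:
  scrape_interval: 15s                        # how often Prometheus scrapes each target

scrape_configs:
  - job_name: cadvisor
    static_configs:
      - targets: ['cadvisor:8080']            # cAdvisor's metrics endpoint
  - job_name: haproxy
    static_configs:
      - targets: ['haproxy-exporter:9101']    # the HAProxy exporter's metrics endpoint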

For HAProxy, we configure one backend for each service. We split traffic between the two microservices based on the incoming request’s URL: if it begins with /greeting, we forward the request to the greeter service; otherwise, we forward it to the math service.
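
Again, the real config is in the repo; a rough sketch of that routing could look like the snippet below. The backend service ports (8080 here) are assumptions for illustration, and the stats section simply has to match the scrape URI we gave the exporter (http://haproxy:8404/stats;csv):

HAProxy Config

defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

# Stats page on 8404 so the exporter can scrape /stats;csv
listen stats
    bind *:8404
    stats enable
    stats uri /stats

frontend main
    bind *:11000
    acl is_greeting path_beg /greeting      # route /greeting/... to the greeter service
    use_backend greeter if is_greeting
    default_backend math                    # everything else goes to the math service

backend greeter
    server svc-greeter svc-greeter:8080 check

backend math
    server svc-math svc-math:8080 check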

Bringing Our Services Online

Now that we have described all our services in a single docker-compose file, we can copy everything onto our VM, SSH into it, and then run docker-compose -p monitoring up -d.

That’s it. Docker will get everything up and running.

These are the ports the exposed services will be listening on. Make sure these ports are accessible.

Service    | Port
Prometheus | 9090
Grafana    | 3000
HAProxy    | 11000

We can check if Prometheus was configured correctly by visiting http://YOUR_IP:9090/targets. Both targets (cAdvisor and the HAProxy exporter) should show up as healthy.

We can check if our proxy is configured properly by opening the following URLs:

Service         | URL                                       | Expected Response
Math Service    | http://YOUR_IP:11000/add/1/2              | { "value": 3 }
Greeter Service | http://YOUR_IP:11000/greeting/YourTechBud | { "greeting": "hello YourTechBud" }

Setting Up Our Monitoring Dashboard

The last remaining step is to configure Grafana. Visit http://YOUR_IP:3000 to open up Grafana. The default username and password are both admin.

Add Prometheus as a Data Source

All our monitoring metrics are being scraped and stored in Prometheus. Hence, the first step is to add Prometheus as a data source. To do that, select Add a datasource > select Prometheus from the dropdown > enter http://prometheus:9090 as the Prometheus URL > select Save & Test.

That’s all we need to do to link Grafana with Prometheus.
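
If you’d rather not click through the UI, Grafana can also pick up data sources from a provisioning file mounted under /etc/grafana/provisioning/datasources/. This isn’t part of the article’s setup, just a sketch of the equivalent config:

YAML

apiVersion: 1
datasources:
  - name: Prometheus
    type: prometheus
    access: proxy                  # Grafana's backend proxies queries to Prometheus
    url: http://prometheus:9090
    isDefault: true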

Create Our Dashboard

Creating a dashboard from scratch can take time. You can import the dashboard I’ve already made to speed things up. After hitting the Import Dashboard button in the Manage Dashboards section, simply copy and paste the JSON into the text area and select Prometheus as the data source.

That’s it!

Feel free to refer to this 10-minute video below if you get lost.


Conclusion

You’ve just set up monitoring using Prometheus and Grafana. You can edit a chart to see what I’ve done. You’ll see that all it takes to populate a chart is a Prometheus query.

Here’s one query as an example: sum(container_memory_usage_bytes{name=~"monitoring_svc.*"} / 1024 / 1024) by (name). This query calculates the memory (in MiB) being consumed by each of our two microservices.
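
Here are a few more queries in the same spirit, assuming the metric names exposed by cAdvisor and the classic HAProxy exporter (check your targets’ /metrics output if the names differ in your setup):

PromQL

# Requests per second, grouped by backend and response status class
sum(rate(haproxy_backend_http_responses_total[1m])) by (backend, code)

# Fraction of responses that are 5xx errors, per backend
sum(rate(haproxy_backend_http_responses_total{code="5xx"}[5m])) by (backend)
  / sum(rate(haproxy_backend_http_responses_total[5m])) by (backend)

# CPU usage (in cores) of our service containers, as reported by cAdvisor
sum(rate(container_cpu_usage_seconds_total{name=~"monitoring_svc.*"}[1m])) by (name)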

I agree that the Prometheus queries can be a bit overwhelming. But that doesn’t really matter. You can simply import this dashboard and expect it to just work. Use it as a boilerplate. Mess around with it. Enjoy!

Did this article help you? How do you make sure your apps are cloud-native? Share your experiences below.


Published at DZone with permission of Noorain Panjwani, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
