
Prometheus AWS Exporter and Grafana

The purpose of this article is to scrape AWS CloudWatch metrics into the Prometheus time series database and to visualize the metrics data in Grafana.

By Shiva Jayam · May 01, 2023 · Tutorial

The purpose of this article and use case is to scrape AWS CloudWatch metrics into the Prometheus time series database and to visualize the metrics data in Grafana. Prometheus and Grafana are powerful, robust open-source tools for collecting, monitoring, and visualizing performance metrics of applications deployed in production. Beyond collecting metrics, they give greater visibility: we can set up critical alerts, live views, and custom dashboards. CloudWatch Exporter is an open-source tool that captures CloudWatch metrics as defined in a YAML configuration file.

Architecture

AWS architecture

Based on that configuration file, the CloudWatch Exporter collects metrics from AWS CloudWatch every 15 seconds (by default) and exposes them as key/value pairs in its /metrics API response. The exporter's /metrics endpoint is then added to the Prometheus configuration as a scrape job. Prometheus lets us define the scrape frequency, so we can adjust how often CloudWatch is called and ultimately tune the cost.

Setup Instructions

AWS Access Setup

  • Set up a new IAM user and assign the CloudWatchReadOnlyAccess policy to that user.
  • Generate an access key and secret access key for that user. Guard the secret access key carefully, as you cannot view it again after creation; save it somewhere safe where you can find it.

Grafana Setup

  • Install Grafana (macOS): brew install grafana
  • Start Grafana: brew services start grafana
  • Access Grafana at http://localhost:3000
  • Add Prometheus as a data source (a declarative provisioning sketch is shown below)

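If you prefer to configure the data source declaratively rather than through the UI, Grafana also supports provisioning data sources from YAML files. A minimal sketch, assuming a local Prometheus at localhost:9090; the file name is up to you, and the exact provisioning directory depends on how Grafana was installed:

YAML
 
# prometheus-datasource.yml — place under Grafana's provisioning/datasources directory
apiVersion: 1

datasources:
  - name: Prometheus
    type: prometheus
    access: proxy
    url: http://localhost:9090
    isDefault: true

Restart Grafana after adding the file so it picks up the new data source.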

AWS CloudWatch Exporter

How to run the CloudWatch Exporter locally:

Establish AWS credentials for the exporter. (Passing in the access key and secret key can be done more elegantly, for example via environment variables or an IAM role, but aws configure is the simplest option for a local run.)

$ aws configure
AWS Access Key ID [********************]: enter_your_access_key_here
AWS Secret Access Key [********************]: enter_your_secret_key_here
Default region name [eu-west-1]:
Default output format [None]:

Running the Exporter:

The exporter can be downloaded from the Prometheus CloudWatch Exporter GitHub repository (prometheus/cloudwatch_exporter).

Local Path to AWS Exporter:

cd /Users/jayam000/cw/prometheus-2.36.1.linux-amd64/cloudwatchexporters 

Run the exporter (1234 is the port the exporter will listen on): java -jar cloudwatch_exporter-0.6.0-jar-with-dependencies.jar 1234 cloudwatchmonconfig.yml

Sample YAML file to capture ELB and ElastiCache metrics (including request counts) from AWS. The original file used can be found as an attachment.

YAML
 
---
region: eu-west-1
metrics:
- aws_namespace: AWS/ELB
  aws_metric_name: HealthyHostCount
  aws_dimensions: [AvailabilityZone, LoadBalancerName]
  aws_statistics: [Average]

- aws_namespace: AWS/ELB
  aws_metric_name: UnHealthyHostCount
  aws_dimensions: [AvailabilityZone, LoadBalancerName]
  aws_statistics: [Average]

- aws_namespace: AWS/ELB
  aws_metric_name: RequestCount
  aws_dimensions: [AvailabilityZone, LoadBalancerName]
  aws_statistics: [Sum]

- aws_namespace: AWS/ELB
  aws_metric_name: Latency
  aws_dimensions: [AvailabilityZone, LoadBalancerName]
  aws_statistics: [Average]

- aws_namespace: AWS/ELB
  aws_metric_name: SurgeQueueLength
  aws_dimensions: [AvailabilityZone, LoadBalancerName]
  aws_statistics: [Maximum, Sum]

- aws_namespace: AWS/ElastiCache
  aws_metric_name: CPUUtilization
  aws_dimensions: [CacheClusterId]
  aws_statistics: [Average]

- aws_namespace: AWS/ElastiCache
  aws_metric_name: NetworkBytesIn
  aws_dimensions: [CacheClusterId]
  aws_statistics: [Average]

- aws_namespace: AWS/ElastiCache
  aws_metric_name: NetworkBytesOut
  aws_dimensions: [CacheClusterId]
  aws_statistics: [Average]

- aws_namespace: AWS/ElastiCache
  aws_metric_name: FreeableMemory
  aws_dimensions: [CacheClusterId]
  aws_statistics: [Average]

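The exporter configuration also supports narrowing which resources are queried and how often. If your account has many load balancers or cache clusters, a per-metric aws_dimension_select filter and a period_seconds setting can cut down the number of CloudWatch calls. A hedged sketch (the load balancer name is a placeholder, not from the original setup):

YAML
 
- aws_namespace: AWS/ELB
  aws_metric_name: RequestCount
  aws_dimensions: [AvailabilityZone, LoadBalancerName]
  # Only query this specific load balancer (placeholder name)
  aws_dimension_select:
    LoadBalancerName: [my-example-lb]
  aws_statistics: [Sum]
  # Use five-minute data points to reduce CloudWatch API calls
  period_seconds: 300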

Success! You should now be able to access the CloudWatch metrics at http://localhost:1234/metrics.

CloudWatch Metrics

Prometheus (Using Docker)

Command to run Prometheus via Docker:

docker run -p 9090:9090 -v /Users/jayam000/cw/prometheus-2.36.1.linux-amd64/prometheus.yml:/etc/prometheus/prometheus.yml  prom/prometheus
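If you prefer Docker Compose, the same container and mount can be expressed declaratively. This is not from the original article, just an equivalent sketch assuming prometheus.yml sits next to the compose file:

YAML
 
# docker-compose.yml (hypothetical equivalent of the docker run command above)
services:
  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml

Start it with docker compose up -d.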

Setup instructions to add the AWS CloudWatch metrics scrape config into Prometheus:

Edit prometheus.yml to include the configuration below:

  - job_name: "cloudwatch"
    static_configs:
      - targets: ["host.docker.internal:1234"]

The full Prometheus scrape configuration file:

YAML
 
# my global config
global:
  scrape_interval: 15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
    - static_configs:
        - targets:
          # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: "prometheus"
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ["localhost:9090"]
  - job_name: "cloudwatch"
    static_configs:
      - targets: ["host.docker.internal:1234"]
#    scrape_interval: 3m
#    #scrape_timeout: 30s


Note: I had to change the target from localhost:1234 to host.docker.internal:1234. Because Prometheus runs inside a Docker container, localhost refers to the container itself, whereas host.docker.internal resolves to the host machine where the exporter is running. After the change, Prometheus started collecting metrics.

The default scrape interval is 15 seconds.
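Once the CloudWatch metrics are flowing into Prometheus, they can also drive alerts through the rule_files section shown above. A minimal sketch of a rule file; the metric name is assumed from the exporter's usual aws_<namespace>_<metric>_<statistic> naming convention, so verify the exact name against your /metrics output, and the threshold is purely illustrative:

YAML
 
groups:
  - name: cloudwatch-elb
    rules:
      - alert: ELBUnhealthyHosts
        # Metric name assumed from the exporter's naming convention; confirm via /metrics
        expr: aws_elb_un_healthy_host_count_average > 0
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "One or more ELB backends are reporting unhealthy"

Reference the file under rule_files in prometheus.yml and restart (or reload) Prometheus to activate it.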

Access Prometheus at http://localhost:9090.

Click on Graph and select a specific metric to view the time series corresponding to that attribute.

Click on Status > Targets to check the health of the scrape targets that were added.

Screenshot of Prometheus showing the configured targets, in this case, the CloudWatch Exporter.

Prometheus graph showing the number of CloudWatch requests. This data is similar to what is displayed in the Grafana dashboard.

Grafana Dashboard

Grafana dashboard of CloudWatch metrics from Prometheus.

Conclusion

The sample shown here extracts availability metrics from AWS; it does not extract application metrics or logs, and it did not require any custom code to be built. Adding a new data source into Prometheus only required a YAML config update and a service restart, and incorporating new CloudWatch metrics only requires adding them to the exporter's YAML configuration.

Feel free to reach out should you have any questions around the setup, and we would be happy to assist you.
