
Integrating Azure Monitor Metrics to Prometheus Time-Series Database With Azure Exporter


In this article, see how to integrate Azure monitor metrics to Prometheus time-series database with Azure exporter.

· Database Zone ·

The use case is to collect Azure resource metrics and store them in a local database. Because this is time-series data, we can expect large chunks of data every minute, and for time-series storage the first option that comes to mind is Prometheus.

Below are the steps required to get Azure resource metrics into our Prometheus database.

  1. Installation of Prometheus
  2. Setting up Azure metrics exporter
  3. Configuring .yaml files for Prometheus/Azure exporter

In a nutshell, we have to expose an endpoint that lists all the resource metrics in the format below (Prometheus's plain-text exposition format). Then we have to configure that endpoint in the Prometheus configuration YAML file.

```text
metricname metricvalue timestamp
```

Note: This process is done on a Linux-based system.
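To make the exposition format concrete, here is a minimal Python sketch of parsing such lines; the sample metric names are hypothetical (the real names depend on the exporter configuration):

```python
def parse_exposition_line(line):
    """Parse one 'metricname value [timestamp]' line of the
    Prometheus text exposition format; skip comments and blanks."""
    line = line.strip()
    if not line or line.startswith("#"):
        return None
    parts = line.split()
    # The trailing timestamp (milliseconds since epoch) is optional.
    timestamp = int(parts[2]) if len(parts) > 2 else None
    return parts[0], float(parts[1]), timestamp

# Hypothetical sample, similar in shape to an exporter response.
sample = """\
# HELP percentage_cpu_average Percentage CPU
percentage_cpu_average 12.5 1571200000000
network_in_total 4096 1571200000000
"""
metrics = [m for m in map(parse_exposition_line, sample.splitlines()) if m]
```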

Installation of Prometheus:

```shell
wget https://github.com/prometheus/prometheus/releases/download/v2.13.0/prometheus-2.13.0.linux-amd64.tar.gz
tar -xzf prometheus-2.13.0.linux-amd64.tar.gz
```

Installation of Azure Metrics Exporter:

The Azure exporter requires Go, so we need to install Go on the system.

Installation of Go:

```shell
# Download and verify the Go tarball
curl -O https://storage.googleapis.com/golang/go1.12.9.linux-amd64.tar.gz
sha256sum go1.12.9.linux-amd64.tar.gz
tar -xzf go1.12.9.linux-amd64.tar.gz

# Move it to the local directory
sudo mv go /usr/local

# Set the paths below at the end of ~/.profile
export GOPATH=$HOME/go
export PATH=$PATH:/usr/local/go/bin:$GOPATH/bin

# Reload the profile
source ~/.profile
```

Installing Azure Metric Exporter Plugin:

We are using the exporter from https://github.com/RobustPerception/azure_metrics_exporter

```shell
go get -u github.com/RobustPerception/azure_metrics_exporter
```

In the Go bin folder (go/bin), create the file azure.yml shown below.

```yaml
active_directory_authority_url: "https://login.microsoftonline.com/"
resource_manager_url: "https://management.azure.com/"
credentials:
    subscription_id: ""
    client_id: ""
    client_secret: ""
    tenant_id: ""

targets:
resource_groups:
  - resource_group: "group-name"
    resource_types:
      - "Microsoft.Compute/virtualMachines"
    metrics:
      - name: "CPU Credits consumed"
      - name: "Percentage CPU"
      - name: "Network In Total"
      - name: "Network Out Total"
      - name: "Disk Read Bytes"
      - name: "Disk Write Bytes"
      - name: "Disk Read Operations/Sec"
      - name: "Disk Write Operations/Sec"
      - name: "CPU Credits Remaining"
```



Note: We have to create an application registration in the Azure portal before doing this. After creating the application, we will get a client ID and client secret. We can configure the YAML to fetch metrics for a given resource group or for a given resource type.
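Under the hood, the exporter uses these credentials to obtain a token through the Azure AD OAuth2 client-credentials flow. A sketch of the form body such a token request carries (field names follow the Azure AD v1 token endpoint; the values here are placeholders):

```python
from urllib.parse import urlencode

def client_credentials_body(client_id, client_secret,
                            resource="https://management.azure.com/"):
    """Form-encoded body for an Azure AD client-credentials token request."""
    return urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": resource,
    })

body = client_credentials_body("my-client-id", "my-client-secret")
```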

Start the exporter service after the above configuration:

```shell
go/bin# ./azure_metrics_exporter
```

It will enable a metrics endpoint on port 9276. To check, navigate to http://localhost:9276/metrics.

You will get a list of metrics with values and timestamps in the response from the above URL.

Now the above endpoint needs to be sourced into the Prometheus database. This can be done in the Prometheus configuration file. Create a new YAML file in the Prometheus installation directory as shown below.

prometheus-azure-metric.yml

```yaml
global:
  scrape_interval: 1m # By default, Prometheus scrapes targets every 15 seconds.

  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  #external_labels:
  #  monitor: 'codelab-monitor'

# A scrape configuration containing exactly one endpoint to scrape:
# here it is the Azure exporter.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any time series scraped from this config.
  - job_name: 'azure'

    # Override the global default and scrape targets from this job every minute.
    scrape_interval: 1m

    static_configs:
      - targets: ['localhost:9276']
```


We have set the scrape interval to 1m because the metrics arrive at 1-minute intervals.

Start the Prometheus server with the new configuration file created above.

```shell
/home/user/prometheus-2.13.0.linux-amd64# ./prometheus --config.file=prometheus-azure-metric.yml --web.listen-address=:9011
```

Now Prometheus is running at port number 9011. Navigate to http://localhost:9011 and check the metrics.
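Besides the web UI, the stored metrics can be queried programmatically through Prometheus's HTTP API. A minimal sketch, assuming Prometheus is listening on port 9011 as started above; the query `up` is just an example expression:

```python
from urllib.parse import urlencode
# import urllib.request  # uncomment to actually send the request

def instant_query_url(base, promql):
    """Build an instant-query URL for Prometheus's /api/v1/query endpoint."""
    return f"{base}/api/v1/query?{urlencode({'query': promql})}"

url = instant_query_url("http://localhost:9011", "up")
# urllib.request.urlopen(url) returns a JSON body with "status" and "data".
```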

As we discussed earlier, all Prometheus needs is one endpoint where all the metrics are available. That endpoint is localhost:9276, which we created using the Azure exporter.

By default, Prometheus retains data for 15 days, which means it can hold history for only the last 15 days. To customize that, we have to provide an extra flag when starting Prometheus:

```shell
--storage.tsdb.retention=365d
```

You can provide any number of days.
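When raising retention, it helps to estimate disk usage first. The Prometheus operational docs give a rough formula: retention time in seconds × ingested samples per second × bytes per sample (roughly 1–2 bytes per sample). A sketch, with the workload numbers as illustrative assumptions:

```python
def estimated_disk_bytes(retention_days, samples_per_second, bytes_per_sample=2):
    """Rough Prometheus disk estimate:
    retention_seconds * ingested_samples_per_second * bytes_per_sample."""
    return retention_days * 86_400 * samples_per_second * bytes_per_sample

# e.g. 365 days of 9 metrics scraped once a minute (0.15 samples/s)
estimate = estimated_disk_bytes(365, 9 / 60)
```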

As a further extension, we can also integrate Prometheus with Grafana, a metric dashboard interface. We can add Prometheus as a data source in Grafana and view the metrics there.
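Adding Prometheus as a data source can also be scripted against Grafana's HTTP API (POST /api/datasources). A sketch of the request payload, assuming the Prometheus instance on port 9011 from above:

```python
import json

def prometheus_datasource(url, name="Prometheus"):
    """Payload for Grafana's 'create data source' API (POST /api/datasources)."""
    return {"name": name, "type": "prometheus", "url": url, "access": "proxy"}

payload = json.dumps(prometheus_datasource("http://localhost:9011"))
```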

Topics:
azure ,database ,integration ,metrics ,prometheus ,time series database ,tutorial

Opinions expressed by DZone contributors are their own.
