OpenCensus Metrics and InfluxDB

Let's take a look at OpenCensus metrics as well as InfluxDB and instrumenting Go code with OpenCensus.

By Noah Crowley · Sep. 08, 18 · Tutorial

OpenCensus is the latest in a series of projects that have emerged from Google as a result of their decades of experience running "planet-scale" systems; it is a collection of libraries, implemented in a number of languages, designed to help developers increase the observability of their systems. OpenCensus provides APIs for gathering metrics and trace data, adding tags, and then sending that data to a backend for storage, as well as for the creation of zPages and HTTP endpoints, which expose additional information about the application in question.

Instrumentation is a key part of any observability strategy, and the OpenCensus project provides a standards-based approach to adding those features to your codebase. As developers and cloud providers throw their support behind OpenCensus, we wanted to ensure a seamless interaction with the rest of the TICK Stack.

Fortunately, OpenCensus embraces a number of other open source technologies and standards, which make that integration easy. The project provides a number of "exporters," the components that send data to a backend, including one for metrics that uses the Prometheus exposition format and one for traces that uses Zipkin.

The Prometheus exposition format is something that we've supported for some time now in various parts of the TICK Stack, including a Prometheus input plugin for our collection agent, Telegraf, which allows it to scrape metrics from Prometheus-style endpoints.

As a result, we can easily collect data from any application that has been instrumented with OpenCensus by making a few changes to the Telegraf configuration file and enabling the Prometheus input plugin.

Let's dive in! This post assumes that you have Telegraf and InfluxDB already installed. If you don't, you can also check out the InfluxData Sandbox to quickly get up and running with Docker and Docker Compose.

Instrumenting Go Code With OpenCensus

First, we need some code to instrument. Fortunately, we can make use of the example provided in the OpenCensus Quickstart guides. The libraries for the Go programming language are probably the farthest along, so we'll use Go for this article.

The quickstart tutorial involves building a simple REPL that capitalizes any strings provided to it as input, while collecting metrics about the application. We'll use this example pretty much line-for-line, with one exception — we'll replace the StackDriver exporter with the Prometheus one.
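
Stripped of instrumentation, the REPL at the heart of the quickstart boils down to something like the following sketch (my own minimal version, not the quickstart code verbatim): read a line from stdin, upper-case it, and print it back.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	// Minimal, uninstrumented REPL: read a line, capitalize it, echo it back.
	br := bufio.NewReader(os.Stdin)
	for {
		fmt.Print("> ")
		line, err := br.ReadString('\n')
		if err != nil {
			return
		}
		fmt.Println(strings.ToUpper(strings.TrimSpace(line)))
	}
}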

Working through the tutorial, the first thing we do is create a simple REPL, followed by adding instrumentation code, including metrics and tags, and then recording those metrics. We create a few views and then set up the exporter. You can scroll down to the last snippet of code and then click the "All" tab to grab the full application.
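
Condensed, those instrumentation steps look roughly like the sketch below. The measure, tag, and view names follow the quickstart, but the bucket boundaries and surrounding structure are illustrative rather than a line-for-line copy:

package main

import (
	"context"
	"log"
	"time"

	"go.opencensus.io/stats"
	"go.opencensus.io/stats/view"
	"go.opencensus.io/tag"
)

// Measure for per-iteration latency; the name and unit follow the quickstart.
var mLatencyMs = stats.Float64("repl/latency", "Latency of one REPL loop iteration", "ms")

func main() {
	// Tag key used to label every recorded measurement.
	keyMethod, err := tag.NewKey("method")
	if err != nil {
		log.Fatal(err)
	}

	// A view defines how measurements are aggregated and exported. Combined
	// with the exporter's "demo" namespace (set up later), the view name
	// "demo/latency" becomes the demo_demo_latency measurement we query in
	// InfluxDB. Bucket boundaries here are illustrative, not the quickstart's
	// exact values.
	latencyView := &view.View{
		Name:        "demo/latency",
		Description: "Distribution of REPL loop latencies",
		Measure:     mLatencyMs,
		TagKeys:     []tag.Key{keyMethod},
		Aggregation: view.Distribution(0, 25, 50, 100, 250, 500, 1000),
	}
	if err := view.Register(latencyView); err != nil {
		log.Fatal(err)
	}

	// Attach the tag to a context, do the work, then record the measurement.
	ctx, err := tag.New(context.Background(), tag.Insert(keyMethod, "repl"))
	if err != nil {
		log.Fatal(err)
	}
	start := time.Now()
	// ... process one line of input here ...
	stats.Record(ctx, mLatencyMs.M(time.Since(start).Seconds()*1000))
}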

We'll need to make a few changes to use the Prometheus exporter; fortunately, OpenCensus provides us some example code for that as part of the documentation on exporters. We'll do a bit of code surgery: replacing a few import statements and updating the initialization and registration of the exporter. I added the existing code to a GitHub Gist, then updated it, so you can view the diff if you're interested.
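
For reference, the Prometheus side of that surgery looks roughly like this sketch. The import path shown is the one used by early OpenCensus Go releases (newer releases moved the exporter to contrib.go.opencensus.io/exporter/prometheus), the helper name is mine, and I'm assuming port 9091 to match the Telegraf configuration later on:

package main

import (
	"log"
	"net/http"

	"go.opencensus.io/exporter/prometheus" // newer releases: contrib.go.opencensus.io/exporter/prometheus
	"go.opencensus.io/stats/view"
)

// enablePrometheusExporter (the name is mine, not the quickstart's) replaces
// the StackDriver setup: create the exporter, register it, and serve /metrics.
func enablePrometheusExporter() {
	// The "demo" namespace is prefixed to view names, which is why
	// "demo/latency" shows up as demo_demo_latency.
	pe, err := prometheus.NewExporter(prometheus.Options{Namespace: "demo"})
	if err != nil {
		log.Fatalf("failed to create Prometheus exporter: %v", err)
	}
	view.RegisterExporter(pe)

	// The exporter doubles as an http.Handler; serve it on the port Telegraf
	// will scrape in the next section.
	go func() {
		mux := http.NewServeMux()
		mux.Handle("/metrics", pe)
		log.Fatal(http.ListenAndServe(":9091", mux))
	}()
}

func main() {
	enablePrometheusExporter()
	select {} // stand-in for the REPL loop
}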

Save the modified code and we can run the REPL:

go run repl.go

Enter a few lines into the REPL and then visit http://localhost:9091/metrics; you should see some metrics exposed in Prometheus format. As you enter new lines into the REPL, those metrics should change.
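
That output is plain text in the Prometheus exposition format; for a distribution view like the latency one, it looks roughly like this abridged, purely illustrative excerpt (your metric names and values will depend on your views and input):

# HELP demo_demo_latency Distribution of REPL loop latencies
# TYPE demo_demo_latency histogram
demo_demo_latency_bucket{method="repl",le="25"} 3
demo_demo_latency_bucket{method="repl",le="+Inf"} 4
demo_demo_latency_sum{method="repl"} 96.5
demo_demo_latency_count{method="repl"} 4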

Collecting OpenCensus Metrics

As I mentioned earlier, Telegraf, the InfluxData collection agent, has built-in support for both scraping and exposing metrics in the Prometheus exposition format. All it takes to enable Prometheus metrics collection is to edit a few lines in the configuration file.

If you don't already have a config file, you can generate one with the following command:

telegraf --sample-config --input-filter prometheus --output-filter influxdb > telegraf.conf

The --sample-config flag prints a complete sample configuration, which we redirect into telegraf.conf, while the filter options limit which plugin sections are included. To enable the collection of Prometheus metrics, we're using --input-filter prometheus, which results in an [[inputs.prometheus]] section being added to our config.

We need to edit that section and point it to the URL our local REPL is using as its metrics endpoint, http://localhost:9091/metrics. We can use all of the other defaults, as follows:

[[inputs.prometheus]]
  ## An array of urls to scrape metrics from.
  urls = ["http://localhost:9091/metrics"]

  ## An array of Kubernetes services to scrape metrics from.
  # kubernetes_services = ["http://my-service-dns.my-namespace:9100/metrics"]

  ## Use bearer token for authorization
  # bearer_token = /path/to/bearer/token

  ## Specify timeout duration for slower prometheus clients (default is 3s)
  # response_timeout = "3s"

  ## Optional TLS Config
  # tls_ca = /path/to/cafile
  # tls_cert = /path/to/certfile
  # tls_key = /path/to/keyfile
  ## Use TLS but skip chain & host verification
  # insecure_skip_verify = false

Because we're using all of the defaults at this point, Telegraf is writing to a telegraf database, collecting data every 10 seconds and flushing it to the outputs at the same interval. We're also running InfluxDB on the same machine, so the default InfluxDB output configuration should be sufficient.
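
For reference, those defaults come from the [agent] and [[outputs.influxdb]] sections of the generated file; trimmed down, they look roughly like this (exact contents vary slightly by Telegraf version):

[agent]
  ## Default data collection and flush intervals.
  interval = "10s"
  flush_interval = "10s"

[[outputs.influxdb]]
  ## The URLs of the InfluxDB instances to write to.
  urls = ["http://127.0.0.1:8086"]
  ## The target database for metrics; it will be created if it doesn't exist.
  database = "telegraf"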

Start Telegraf using the config file:

telegraf --config telegraf.conf

You should see metrics showing up in the telegraf database after the first collection interval has elapsed! Let's verify this using the influx command-line tool to execute a query.

Launch the CLI and use the telegraf database:

$ influx
Connected to http://localhost:8086 version 1.6.1
InfluxDB shell version: v1.6.1
> USE telegraf
Using database telegraf
>

Then, run a query to get the last 5m of data for the demo_demo_latency measurement:

> SELECT * FROM demo_demo_latency WHERE time > now()-5m

You should see a number of points, depending on how long your application has been running.

Next Steps

As you can see, it's extremely easy to collect metrics from an OpenCensus-instrumented application using Telegraf and the OpenCensus Prometheus exporter. If you already have an application that's been instrumented with OpenCensus, then you might want to consider installing Chronograf and adding a few dashboards or setting up Kapacitor and configuring alerts.

Published at DZone with permission of Noah Crowley, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
