
Instrumenting a JavaScript Application for OpenTelemetry, Part 1: Setup

This post looks at the first steps for instrumenting a JavaScript application to report OpenTelemetry metrics.

By Chris Ward, DZone Core · Updated Jun. 01, 22 · Tutorial

A lot of what you read about Observability mentions the benefits and potential of analyzing data, but says little about how you collect it. This process is called “instrumentation” and broadly involves collecting events from infrastructure and code in the form of metrics, logs, and traces. There are of course dozens of methods, frameworks, and tools to help you collect the events that are important to you, and this post begins a series looking at some of those. This post focuses on introductory concepts, setting up the dependencies needed, and generating some basic metrics. Later posts will take these concepts further.

An Introduction to Metrics Data

Different vendors and open source projects have created their own ways to represent the event data they collect. While that is still true, there is a growing effort to create portable standards that everyone can use and build their own features on top of while retaining interoperability. The key project is OpenTelemetry from the Cloud Native Computing Foundation (CNCF). This blog series uses the OpenTelemetry specification and SDKs, but collects and exports a variety of the formats it handles.

The Application Example

The example for this post is an ExpressJS application that serves API endpoints and exports Prometheus-compatible metrics. The tutorial starts by adding basic instrumentation and sending metrics to a Prometheus backend, then adds more metrics, and later adds the Chronosphere collector. You can find the full and final code on GitHub.

Install and Setup ExpressJS

ExpressJS provides much of the boilerplate for creating a JavaScript application that serves HTTP endpoints, so it is a great starting point. Add it to a new project by following the install steps, summarized below.
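
A minimal version of those steps, assuming npm and a fresh project directory:

npm init -y
npm install --save express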

Create an app.js file and create the basic skeleton for the application:

 
const express = require("express");

const PORT = process.env.PORT || "3000";
const app = express();

app.get("/", (req, res) => {
  res.send("Hello World");
});

app.listen(parseInt(PORT, 10), () => {
  console.log(`Listening for requests on http://localhost:${PORT}`);
});


Running this now with node app.js starts a server on port 3000. If you visit localhost:3000, you should see the message “Hello World” in the web browser.
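
To check from the command line instead of a browser (assuming curl is available), a quick request should return the same message:

curl http://localhost:3000
# Hello World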

Add Basic Metrics

This step uses the tutorial from the OpenTelemetry site as a basis with some changes and builds upon it in later steps.

Install the dependencies the project needs: the Prometheus exporter and the base metrics SDK.

 
npm install --save @opentelemetry/sdk-metrics-base
npm install --save @opentelemetry/exporter-prometheus


Create a new monitoring.js file to handle the metrics functions and add the dependencies:

 
const { PrometheusExporter } = require('@opentelemetry/exporter-prometheus');
const { MeterProvider }  = require('@opentelemetry/sdk-metrics-base');


Create an instance of a MeterProvider that uses the Prometheus exporter. To prevent a port conflict, the exporter listens on a different port from the Prometheus server: Prometheus typically runs on port 9090, and because the Prometheus server runs on the same machine in this example, the exporter uses port 9091 instead.

 
const meter = new MeterProvider({
  exporter: new PrometheusExporter({port: 9091}),
}).getMeter('prometheus');


Create the metric to manually track, which in this case is a counter of the number of visits to a page.

 
const requestCount = meter.createCounter("requests", {
  description: "Count all incoming requests",
  monotonic: true,
  labelKeys: ["metricOrigin"],
});


Create a Map of counters keyed by route (of which there is currently only one) and an exportable countAllRequests function that increments the count each time a route is requested; a sketch of what this could look like follows.
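
The body of monitoring.js isn't listed here beyond the counter, so the following is only a rough sketch. It assumes the bound-counter API that matches the older metrics SDK used above; newer SDK versions drop bound instruments in favor of requestCount.add(1, attributes), and the exact label keys are in the full code on GitHub.

// monitoring.js (continued) -- illustrative sketch, not the exact code from the repo
const boundCounters = new Map();

module.exports.countAllRequests = () => {
  return (req, res, next) => {
    // Lazily create one bound counter the first time a route is requested
    if (!boundCounters.has(req.path)) {
      boundCounters.set(req.path, requestCount.bind({ route: req.path }));
    }
    // Increment that route's counter on every request
    boundCounters.get(req.path).add(1);
    next();
  };
};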

In the app.js file, require the countAllRequests function and register it with Express’s .use middleware method so it is called on every request.

 
const { countAllRequests } = require("./monitoring");
…
app.use(countAllRequests());


At this point, you can start Express and check that the application is emitting metrics. Run the command below and refresh localhost:3000 a couple of times.

 
node app.js


Open localhost:9091/metrics and you should see a list of the metrics emitted so far.
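
The exact names and labels depend on the SDK and exporter versions, but the output uses the Prometheus exposition format and looks roughly like this (assuming a route label as in the sketch above):

# HELP requests_total Count all incoming requests
# TYPE requests_total counter
requests_total{route="/"} 3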

Install and Configure Prometheus

Install Prometheus and create a configuration file with the following content:

 
global:
  scrape_interval: 15s
scrape_configs:
# Scraping Prometheus itself
- job_name: 'prometheus'
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9090']
# Not needed when running with Kubernetes
- job_name: 'express'
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9091']


Start Prometheus:

 
prometheus --config.file=prom-conf.yml


Start Express and refresh localhost:3000 a couple of times.

 
node app.js


Open the Prometheus UI at localhost:9090, enter requests_total into the search bar and you should see results.
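
Beyond looking at the raw counter, a common next step is to chart the request rate; for example, this PromQL query returns the per-second rate of requests over the last minute:

rate(requests_total[1m])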

Add Kubernetes to the Mix

So far, so good, but Prometheus is more useful when it also monitors the underlying infrastructure running an application. The next step, then, is to run Express and Prometheus on Kubernetes.

Create a Docker Image

The Express application needs a custom image. Create a Dockerfile and add the following:

 
FROM node
 
WORKDIR /opt/ot-express

# install deps
COPY package.json /opt/ot-express
RUN npm install

# copy application source
COPY . /opt/ot-express

# run
EXPOSE 3000
CMD ["npm", "start"]
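
Note that CMD ["npm", "start"] assumes package.json defines a start script; a minimal example of that entry (an assumption, not shown in the post) would be:

{
  "scripts": {
    "start": "node app.js"
  }
}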


Build the image with:

 
docker build -t ot-express .


Download the Kubernetes definition file from the GitHub repo for this post.

A lot of the configuration is there to give Prometheus permission to scrape Kubernetes endpoints; the configuration more specific to this example is the following:

 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ot-express
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ot-express  
  template:
    metadata:
      labels:
        app: ot-express  
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9091"
    spec:
      containers: 
      - name: ot-express 
        image: ot-express
        imagePullPolicy: Never
        ports:
        - name: express-app
          containerPort: 3000
        - name: express-metrics
          containerPort: 9091
---
apiVersion: v1
kind: Service
metadata:
  name: ot-express
  labels:
    app: ot-express
spec:
  ports:
  - name: express-app
    port: 3000
    targetPort: express-app
  - name: express-metrics
    port: 9091
    targetPort: express-metrics
  selector:
    app: ot-express
  type: NodePort


This deployment uses annotations to tell Prometheus to scrape metrics from the applications in the deployment, and exposes the Express application port and the metrics exporter port it uses.

Update the Prometheus configuration to include scraping metrics from Kubernetes-discovered endpoints. This means you can remove the previous Express job.

 
global:
  scrape_interval: 15s
scrape_configs:
- job_name: 'prometheus'
  scrape_interval: 5s
  static_configs:
  - targets: ['localhost:9090']
- job_name: 'kubernetes-service-endpoints'
  kubernetes_sd_configs:
  - role: endpoints
  relabel_configs:
  - action: labelmap
    regex: __meta_kubernetes_service_label_(.+)
  - source_labels: [__meta_kubernetes_namespace]
    action: replace
    target_label: kubernetes_namespace
  - source_labels: [__meta_kubernetes_service_name]
    action: replace
    target_label: kubernetes_name
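
As written, this job scrapes every endpoint Kubernetes discovers. If you want the prometheus.io/* annotations on the pod template to actually control which targets are scraped and on which port, the usual pattern (not part of the original post, so treat it as a hedged sketch) is to add relabel rules like these to the job's relabel_configs:

  # Keep only targets whose backing pod sets prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true
  # Rewrite the scrape address to the port named in prometheus.io/port
  - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
    action: replace
    regex: ([^:]+)(?::\d+)?;(\d+)
    replacement: $1:$2
    target_label: __address__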


Create a ConfigMap of the Prometheus configuration:

 
kubectl create configmap prometheus-config --from-file=prom-conf.yml


Send the Kubernetes declaration to the server with:

 
kubectl apply -f k8s-local.yml


Find the exposed URL and port for the Express service (see the commands below), then open and refresh the page a few times. Next, find the exposed URL and port for the Prometheus UI, enter requests_total into the search bar, and you should see results.
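
How you find those URLs depends on your cluster. With a NodePort service you can read the node port from kubectl, and if you happen to use minikube (an assumption, not a requirement of this post) it can print the URL directly:

kubectl get service ot-express
# minikube only:
minikube service ot-express --url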

Increasing Application Complexity

The demo application works and sends metrics when run on the host machine, Docker, or Kubernetes. But it isn't complex and doesn't send that many useful metrics. While still not production-level complex, this example application from the ExpressJS website adds multiple routes, middleware, and error handling.

Merging in the monitoring code the demo application already needs, update app.js to the following:

 
const express = require("express");
const { countAllRequests } = require("./monitoring");

const PORT = process.env.PORT || "3000";
const app = express();
app.use(countAllRequests());

function error(status, msg) {
  var err = new Error(msg);
  err.status = status;
  return err;
}

app.use('/api', function(req, res, next){
  var key = req.query['api-key'];

  if (!key) return next(error(400, 'api key required'));

  if (apiKeys.indexOf(key) === -1) return next(error(401, 'invalid api key'))

  req.key = key;
  next();
});

var apiKeys = ['foo', 'bar', 'baz'];

var repos = [
  { name: 'express', url: 'https://github.com/expressjs/express' },
  { name: 'stylus', url: 'https://github.com/learnboost/stylus' },
  { name: 'cluster', url: 'https://github.com/learnboost/cluster' }
];

var users = [
  { name: 'tobi' }
  , { name: 'loki' }
  , { name: 'jane' }
];

var userRepos = {
  tobi: [repos[0], repos[1]]
  , loki: [repos[1]]
  , jane: [repos[2]]
};

app.get('/api/users', function(req, res, next){
  res.send(users);
});

app.get('/api/repos', function(req, res, next){
  res.send(repos);
});

app.get('/api/user/:name/repos', function(req, res, next){
  var name = req.params.name;
  var user = userRepos[name];

  if (user) res.send(user);
  else next();
});

app.use(function(err, req, res, next){
  res.status(err.status || 500);
  res.send({ error: err.message });
});

app.use(function(req, res){
  res.status(404);
  res.send({ error: "Sorry, can't find that" })
});


app.listen(parseInt(PORT, 10), () => {
  console.log(`Listening for requests on http://localhost:${PORT}`);
});


There are a lot of different routes to try (read the comments in the original code), but here are a couple (open them more than once):

  • http://localhost:3000/api
  • http://localhost:3000/api/users/?api-key=foo
  • http://localhost:3000/api/repos/?api-key=foo
  • http://localhost:3000/api/user/tobi/repos/?api-key=foo

Start the application with Docker as above, and everything works the same, but with more metrics scraped by Prometheus.

If you're interested in scraping more Express-related metrics, you can try the express-prom-bundle package. If you do, you need to change the port in the Prometheus configuration, and in the Docker and Kubernetes declarations, to the Express port, i.e. "3000". You also no longer need the monitoring.js file or the countAllRequests method; a sketch of the alternative setup follows. Read the package's documentation for more ways to customize the metrics it generates.
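
For reference, wiring up express-prom-bundle looks roughly like the sketch below. The option names come from the package's documented defaults, but treat the exact configuration as an assumption rather than the setup used in this post:

// Hypothetical alternative to monitoring.js: express-prom-bundle serves /metrics
// on the Express port (3000) itself.
const express = require("express");
const promBundle = require("express-prom-bundle");

const app = express();

// Adds a default http_request_duration_seconds histogram with method/path labels
app.use(promBundle({ includeMethod: true, includePath: true }));

app.get("/", (req, res) => {
  res.send("Hello World");
});

app.listen(3000);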

Next Steps

This post showed you how to set up a JavaScript application to collect OpenTelemetry data using the Prometheus exporter and send basic metrics. Future posts will dig into the metrics and how to apply them to an application in more detail.
