Akka Monitoring: Telemetry OpenTracing

This look at Lightbend Telemetry's OpenTracing integration shows how you can monitor and trace your Akka actors and visualize that data with Zipkin or Jaeger.

By Hugh McKee · Jul. 27, 17 · Tutorial


In April 2017, Lightbend Telemetry version 2.4 was released. One of the most significant changes in this release is the addition of OpenTracing integration, with support for Jaeger and Zipkin. OpenTracing is a “vendor-neutral open standard for distributed tracing.” Because Akka is, by its nature, a platform for building distributed systems, tracing is a much-needed and often-requested feature, and a valuable tool for Akka developers.

This monitoring tip focuses on the changes necessary to install and configure tracing in an existing Akka project. In future tips, we will dive deeper into how to use tracing to track activity in Akka systems.

Project Setup

We are going to take an existing Akka project and make the necessary changes to enable Lightbend Telemetry with actor tracing.

We will follow the steps defined in the Lightbend Telemetry sbt setup documentation, since the sample project we are using is built with sbt. Steps for setting up Maven and Gradle projects are also available.

The project used here can be found on the Lightbend Tech Hub under “Get started with Lightbend technologies”; the project is “Cluster Java.” Download this project if you wish to follow along, or apply the same steps to one of your own sbt projects.

As instructed, edit the project/plugins.sbt file.

addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.3.8")
addSbtPlugin("com.dwijnand" % "sbt-dynver" % "1.1.1")


Add the lines as instructed in the documentation.

addSbtPlugin("com.lightbend.cinnamon" % "sbt-cinnamon" % "2.4.0")
credentials += Credentials(Path.userHome / ".lightbend" / "commercial.credentials")
resolvers += Resolver.url("lightbend-commercial",
  url("https://repo.lightbend.com/commercial-releases"))(Resolver.ivyStylePatterns)


Here is the complete plugins.sbt file, with the original two lines followed by the added lines.

addSbtPlugin("com.typesafe.sbt" % "sbt-multi-jvm" % "0.3.8")
addSbtPlugin("com.dwijnand" % "sbt-dynver" % "1.1.1")

addSbtPlugin("com.lightbend.cinnamon" % "sbt-cinnamon" % "2.4.0")
credentials += Credentials(Path.userHome / ".lightbend" / "commercial.credentials")
resolvers += Resolver.url("lightbend-commercial",
  url("https://repo.lightbend.com/commercial-releases"))(Resolver.ivyStylePatterns)


Next, make the necessary changes to the build.sbt file. The file initially looked like this.

import com.typesafe.sbt.SbtMultiJvm.multiJvmSettings
import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm

val akkaVersion = "2.5.0"

val `akka-sample-cluster-java` = project
  .in(file("."))
  .settings(multiJvmSettings: _*)
  .settings(
    organization := "com.typesafe.akka.samples",
    scalaVersion := "2.12.1",
    scalacOptions in Compile ++= Seq("-deprecation", "-feature", "-unchecked", "-Xlog-reflective-calls", "-Xlint"),
    javacOptions in Compile ++= Seq("-Xlint:unchecked", "-Xlint:deprecation"),
    javacOptions in doc in Compile := Seq("-Xdoclint:none"),
    javaOptions in run ++= Seq("-Xms128m", "-Xmx1024m", "-Djava.library.path=./target/native"),
    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-actor" % akkaVersion,
      "com.typesafe.akka" %% "akka-remote" % akkaVersion,
      "com.typesafe.akka" %% "akka-cluster" % akkaVersion,
      "com.typesafe.akka" %% "akka-cluster-metrics" % akkaVersion,
      "com.typesafe.akka" %% "akka-cluster-tools" % akkaVersion,
      "com.typesafe.akka" %% "akka-multi-node-testkit" % akkaVersion,
      "org.scalatest" %% "scalatest" % "3.0.1" % Test,
      "io.kamon" % "sigar-loader" % "1.6.6-rev002"),
    fork in run := true,
    mainClass in (Compile, run) := Some("sample.cluster.simple.SimpleClusterApp"),
    // disable parallel tests
    parallelExecution in Test := false,
    licenses := Seq(("CC0", url("http://creativecommons.org/publicdomain/zero/1.0")))
  )
  .configs (MultiJvm)


After adding the monitoring settings, the file looks like this. Note that the new and modified lines all end with the “// Telemetry (Cinnamon)” comment.

import com.typesafe.sbt.SbtMultiJvm.multiJvmSettings
import com.typesafe.sbt.SbtMultiJvm.MultiJvmKeys.MultiJvm

val akkaVersion = "2.5.0"

val `akka-sample-cluster-java` = project
  .in(file("."))
  .enablePlugins(Cinnamon) // Telemetry (Cinnamon)
  .settings(multiJvmSettings: _*)
  .settings(
    cinnamon in run := true, // Telemetry (Cinnamon)
    cinnamon in test := true, // Telemetry (Cinnamon)
    cinnamonLogLevel := "INFO", // Telemetry (Cinnamon)

    organization := "com.typesafe.akka.samples",
    scalaVersion := "2.12.1",
    scalacOptions in Compile ++= Seq("-deprecation", "-feature", "-unchecked", "-Xlog-reflective-calls", "-Xlint"),
    javacOptions in Compile ++= Seq("-Xlint:unchecked", "-Xlint:deprecation"),
    javacOptions in doc in Compile := Seq("-Xdoclint:none"),
    javaOptions in run ++= Seq("-Xms128m", "-Xmx1024m", "-Djava.library.path=./target/native"),

    libraryDependencies += Cinnamon.library.cinnamonCHMetrics, // Telemetry (Cinnamon)
    libraryDependencies += Cinnamon.library.cinnamonAkka, // Telemetry (Cinnamon)
    libraryDependencies += Cinnamon.library.cinnamonOpenTracingJaeger, // Telemetry (Cinnamon)
    libraryDependencies += Cinnamon.library.cinnamonOpenTracingZipkin, // Telemetry (Cinnamon)

    libraryDependencies ++= Seq(
      "com.typesafe.akka" %% "akka-actor" % akkaVersion,
      "com.typesafe.akka" %% "akka-remote" % akkaVersion,
      "com.typesafe.akka" %% "akka-cluster" % akkaVersion,
      "com.typesafe.akka" %% "akka-cluster-metrics" % akkaVersion,
      "com.typesafe.akka" %% "akka-cluster-tools" % akkaVersion,
      "com.typesafe.akka" %% "akka-multi-node-testkit" % akkaVersion,
      "org.scalatest" %% "scalatest" % "3.0.1" % Test,
      //      "io.kamon" % "sigar-loader" % "1.6.6-rev002"), // Telemetry (Cinnamon)
      "org.slf4j" % "slf4j-api"       % "1.7.25", // Telemetry (Cinnamon)
      "org.slf4j" % "jcl-over-slf4j"  % "1.7.25"), // Telemetry (Cinnamon)
    fork in run := true,
    mainClass in (Compile, run) := Some("sample.cluster.simple.SimpleClusterApp"),
    // disable parallel tests
    parallelExecution in Test := false,
    licenses := Seq(("CC0", url("http://creativecommons.org/publicdomain/zero/1.0")))
  )
  .configs (MultiJvm)


The changes made to this build.sbt are somewhat different than what is shown in the documentation, but they are equivalent.

Now that the sbt changes are complete, the next step is to update the Akka configuration. The initial application.conf, before any monitoring changes, looks like this.

akka {
    actor {
        provider = "cluster"
    }
    remote {
        log-remote-lifecycle-events = off
        netty.tcp {
            hostname = "127.0.0.1"
            port = 0
        }
    }

    cluster {
        seed-nodes = [
            "akka.tcp://ClusterSystem@127.0.0.1:2551",
            "akka.tcp://ClusterSystem@127.0.0.1:2552"]

        # auto downing is NOT safe for production deployments.
        # you may want to use it during development, read more about it in the docs.
        auto-down-unreachable-after = 10s
    }
}

# Disable legacy metrics in akka-cluster.
akka.cluster.metrics.enabled = off

# Enable metrics extension in akka-cluster-metrics.
akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension"]

# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host.
akka.cluster.metrics.native-library-extract-folder = ${user.dir}/target/native


Let’s make some minimal changes to get actor tracing working. First, there are some required settings for actor configuration. The following is the complete application.conf file, including the monitoring configuration settings added to the end of the file.

akka {
    actor {
        provider = "cluster"
    }
    remote {
        log-remote-lifecycle-events = off
        netty.tcp {
            hostname = "127.0.0.1"
            port = 0
        }
    }

    cluster {
        seed-nodes = [
            "akka.tcp://ClusterSystem@127.0.0.1:2551",
            "akka.tcp://ClusterSystem@127.0.0.1:2552"]

        # auto downing is NOT safe for production deployments.
        # you may want to use it during development, read more about it in the docs.
        auto-down-unreachable-after = 10s
    }
}

# Disable legacy metrics in akka-cluster.
akka.cluster.metrics.enabled=off

# Enable metrics extension in akka-cluster-metrics.
akka.extensions=["akka.cluster.metrics.ClusterMetricsExtension"]

# Sigar native library extract location during tests.
# Note: use per-jvm-instance folder when running multiple jvm on one host. 
akka.cluster.metrics.native-library-extract-folder=${user.dir}/target/native

##### Monitoring Configuration Settings #####

# Actor configuration

cinnamon.akka {
    actors {
        "samle.cluster.*" {
            report-by = class
            traceable = on
        }
        "/user/*" {
            report-by = class
            traceable = on
        }
    }
}


Note that the cinnamon.akka.actors settings set traceable = on. This enables actor tracing. That’s it! Of course, there are other configuration settings available, but this is enough to trace actor activity. At this point, we should be able to run the project and look at some tracing data.
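If you want to trace only a subset of actors rather than everything under /user, you can narrow the actor selectors. The following is just a sketch using the same keys shown above (report-by and traceable); the actor path pattern is hypothetical and should be adjusted to match the actors in your own system.

cinnamon.akka {
    actors {
        # Trace only the stats service actors, reporting metrics per actor class.
        "/user/statsService/*" {
            report-by = class
            traceable = on
        }
    }
}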

Tracing Akka Actors

Now that sbt and Akka are configured, it is time to run the sample and look at some tracing data. If you are following along with the akka-sample-cluster-java project, please review the README file. This is an interesting project that showcases various Akka clustering features. For this initial tracing test, we will run the Router Example with Group of Routees.

Before we run anything, we will first start up Jaeger and Zipkin so that we can view the trace data in their respective UIs. Fortunately, both Jaeger and Zipkin provide Docker images that make them easy to run. The documentation provides all of the details; here, we cut to the chase and just start the two Docker containers.

docker run -d -p5775:5775/udp -p16686:16686 jaegertracing/all-in-one:latest

docker run -d -p 9411:9411 openzipkin/zipkin


The docker ps command shows the running instances and the TCP ports for their UIs.

$ docker ps
CONTAINER ID        IMAGE                             COMMAND                  CREATED             STATUS              PORTS                                                                       NAMES
717712f95641        openzipkin/zipkin                 "/bin/sh -c 'test ..."   26 hours ago        Up 4 seconds        9410/tcp, 0.0.0.0:9411->9411/tcp                                            gallant_payne
a46c70f06192        jaegertracing/all-in-one:latest   "/go/bin/standalon..."   28 hours ago        Up 28 hours         0.0.0.0:5775->5775/udp, 5778/tcp, 6831-6832/udp, 0.0.0.0:16686->16686/tcp   awesome_kirch


Note the Zipkin TCP port is 9411 and the Jaeger TCP port is 16686. Access the Jaeger UI at http://localhost:16686 and the Zipkin UI at http://localhost:9411. There will not be anything to see until we run something, so let’s do that next.
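If you want to verify from the command line that both UIs are reachable (assuming the default ports above and that curl is available), a quick check looks like this:

# Each command prints a message only if the UI responds successfully.
curl -sf http://localhost:16686/ > /dev/null && echo "Jaeger UI is up"
curl -sf http://localhost:9411/ > /dev/null && echo "Zipkin UI is up"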

The akka-sample-cluster-java project README file provides instructions for running the various Akka clustering examples. We are going to run some tests with the Router Example with Group of Routees. There are two approaches for running the cluster: you can either run one program that starts up the entire cluster or start each Akka cluster node separately. Since we are mainly interested in tracing Akka actors, it does not matter how you run the cluster. That said, it is more interesting to run each cluster node separately in different terminal windows.

To run everything with a single command, cd into the project directory and enter the command:

sbt "runMain sample.cluster.stats.StatsSampleMain"


To run each cluster node separately, open four terminal windows. In each window, cd into the project directory and run one of the following commands (one per window):

sbt "runMain sample.cluster.stats.StatsSampleMain 2551"


sbt "runMain sample.cluster.stats.StatsSampleMain 2552"


sbt "runMain sample.cluster.stats.StatsSampleClientMain"


sbt "runMain sample.cluster.stats.StatsSampleMain 0"


Once the Akka cluster has been running for a few minutes, go take a look at the Jaeger UI and Zipkin UI. Each UI is similar. First, select the service, which, in this case, defaults to the main class name.

[Image: Jaeger UI example]

[Image: Zipkin UI example]

Shown above are screenshots of the Jaeger UI and the Zipkin UI.

So that is it! A few changes to the build configuration, plus a few changes to the Akka configuration, and Akka actor tracing is enabled. No code changes. No annotations. Just configuration changes.

At this point, you are ready to explore the tracing data via either or both UIs. There are also other configuration options you can use to fine-tune the monitoring. Give it a try and learn more about how you can use this useful new telemetry feature.

In future articles, we will explore tracing and telemetry in more detail. We will also explore monitoring with the exciting new advanced monitoring solution OpsClarity.


Published at DZone with permission of Hugh McKee, DZone MVB. See the original article here.
