
Telemetry Pipelines Workshop: Routing Events With Fluent Bit

This article walks through how to use the routing phase to process events to several output destinations using telemetry pipelines with Fluent Bit.

By Eric D. Schabell · Apr. 18, 24 · Tutorial

This article is part of a series exploring a workshop on the open-source project Fluent Bit: what it is, a basic installation, and setting up a first telemetry pipeline project. Learn how to manage your cloud-native data from source to destination using the telemetry pipeline phases: collection, aggregation, transformation, and forwarding from any source to any destination.

In the previous article in this series, we explored the use case covering metric collection processing. In this article, we step back and look closer at how we can use the routing phase to process events to several output destinations. You can find more details in the accompanying workshop lab.

Let's get started with this use case.

Before we do, it's important to review the phases of a telemetry pipeline. In the diagram below, we see them laid out again: each incoming event flows from input to parser to filter to buffer to routing before being sent to its final output destination(s).

Phases of a telemetry pipeline

For clarity in this article, we'll split the configuration into files that are imported into a main Fluent Bit configuration file we'll name workshop-fb.conf.

Routing to Multiple Destinations

For the next telemetry pipeline, we're going to expand basic routing to send incoming events to multiple destinations. First, we need to define several generated input events for the INPUT phase, which is done in our configuration file inputs.conf:

# This entry generates a test INFO log level message and tags 
# it for the workshop.
[INPUT]
  Name  dummy
  Tag   workshop.info
  Dummy {"message":"This is workshop INFO message", "level":"INFO", "color": "yellow"}

# This entry generates a test ERROR log level message and tags 
# it for the workshop.
[INPUT]
  Name  dummy
  Tag   workshop.error
  Dummy {"message":"This is workshop ERROR message", "level":"ERROR", "color": "red"}


Explore the dummy input plugin documentation for all the details; in short, this plugin generates fake events at a set interval, one per second by default. Three keys are used to set up our inputs:

  • Name: The name of the plugin to be used
  • Tag: The tag we assign (it can be anything) to help match events of this type in the routing phase
  • Dummy: Where the exact event output is defined; by default it sends { "message" : "dummy"}

Our configuration tags each INFO level event with workshop.info and each ERROR level event with workshop.error. It also overrides the default dummy message with custom event text.
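If one event per second is too fast or slow for experimenting, the dummy input plugin also documents a Rate key that sets how many events are generated per second. A minimal sketch (the rate value here is chosen only for illustration):

```ini
# Optional: generate five INFO events per second instead of the
# default of one, using the dummy plugin's Rate key.
[INPUT]
  Name  dummy
  Tag   workshop.info
  Rate  5
  Dummy {"message":"This is workshop INFO message", "level":"INFO", "color": "yellow"}
```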

A simple output configuration almost always includes printing the events to our console over the standard output channel. We're going to expand on this and add two new output destinations that routing will use to send events with a matching tag, using the file output plugin.

To do this, we'll configure our outputs.conf file using our favorite editor:

# This entry directs all tags (it matches any we encounter)
# to print to standard output, which is our console.
[OUTPUT]
  Name  stdout
  Match *

# This entry directs all INFO level events to their own log file.
[OUTPUT]
  Name  file
  File  /tmp/workshop-INFO.log
  Match *.info

# This entry directs all ERROR level events to their own log file.
[OUTPUT]
  Name  file
  File  /tmp/workshop-ERROR.log
  Match *.error


Three keys are used to configure routing to the new output destinations:

  • Name: The name of the plugin to be used; in this case, the file plugin
  • File: The path of the file this output destination writes to
  • Match: The tag pattern used to determine whether an event goes to this output destination
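Since Match supports wildcard patterns, a single output can also collect both event types at once. A hedged sketch (the /tmp/workshop-ALL.log path is just an illustration, not part of the workshop):

```ini
# Hypothetical combined destination: one file output whose wildcard
# pattern matches both workshop.info and workshop.error events.
[OUTPUT]
  Name  file
  File  /tmp/workshop-ALL.log
  Match workshop.*
```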

With our inputs, outputs, and routing configured, we set up our main configuration file, workshop-fb.conf, as follows:

# Fluent Bit main configuration file.
#
# Imports section.
@INCLUDE inputs.conf
@INCLUDE outputs.conf

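Optionally, the main file can also carry a [SERVICE] section ahead of the imports to control pipeline-wide settings; a sketch, with values chosen only for illustration:

```ini
# Optional service section: Flush sets how often (in seconds)
# buffered events are flushed to outputs, and Log_Level controls
# Fluent Bit's own logging verbosity.
[SERVICE]
  Flush     1
  Log_Level info
```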

To see whether our event routing configuration works, we can test run it with our Fluent Bit installation, first using the source install and then the container version. Below, the source install is run from the directory we created to hold all our configuration files:

# source install.
#
$ [PATH_TO]/fluent-bit --config=workshop-fb.conf

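Before running the pipeline for real, recent Fluent Bit releases can also validate the configuration and exit without starting the pipeline, using the --dry-run flag; a hedged sketch:

```
# Validate workshop-fb.conf and exit without starting the pipeline.
$ [PATH_TO]/fluent-bit --config=workshop-fb.conf --dry-run
```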

The console output should look something like this, noting that we've cut out the ASCII logo at start-up:

...
[2024/03/04 15:50:19] [ info] [input:dummy:dummy.0] initializing
[2024/03/04 15:50:19] [ info] [input:dummy:dummy.0] storage_strategy='memory' (memory only)
[2024/03/04 15:50:19] [ info] [input:dummy:dummy.1] initializing
[2024/03/04 15:50:19] [ info] [input:dummy:dummy.1] storage_strategy='memory' (memory only)
[2024/03/04 15:50:19] [ info] [output:stdout:stdout.0] worker #0 started
[2024/03/04 15:50:19] [ info] [sp] stream processor started
[0] workshop.info: [[1709647524.730711742, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.error: [[1709647524.730841784, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.info: [[1709647525.732475405, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.error: [[1709647525.732539155, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.info: [[1709647526.729102845, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.error: [[1709647526.729131554, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.info: [[1709647527.733301264, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.error: [[1709647527.733388681, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.info: [[1709647528.733776381, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.error: [[1709647528.733822506, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
...


Note the alternating generated event lines with INFO and ERROR messages, which continue until you exit with CTRL-C. This is because we have defined all events to always be routed to standard output, our console.

The routing configuration also matches each event against the two other possible destinations, files that each collect one specific set of messages. Check these files to confirm that the error events and informational events have been properly sorted:

$ cat /tmp/workshop-INFO.log

[0] workshop.info: [[1709647524.730711742, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709647525.732475405, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709647526.729102845, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709647527.733301264, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709647528.733776381, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
...


$ cat /tmp/workshop-ERROR.log

[0] workshop.error: [[1709647524.730841784, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709647525.732539155, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709647526.729131554, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709647527.733388681, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709647528.733822506, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
...


Let's now test our configuration by running it using a container image. We can reuse the Buildfile we created previously to build a new container image containing our configuration files.
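For reference, the Buildfile from the earlier article amounts to a base image plus three COPY lines, a sketch reconstructed from the build steps shown in the output that follows:

```dockerfile
# Start from the upstream Fluent Bit image and layer in our
# main, input, and output configuration files.
FROM cr.fluentbit.io/fluent/fluent-bit:3.0.1
COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
COPY ./inputs.conf /fluent-bit/etc/inputs.conf
COPY ./outputs.conf /fluent-bit/etc/outputs.conf
```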

$ podman build -t workshop-fb:v2 -f Buildfile

STEP 1/4: FROM cr.fluentbit.io/fluent/fluent-bit:3.0.1
STEP 2/4: COPY ./workshop-fb.conf /fluent-bit/etc/fluent-bit.conf
--> Using cache a379e761121021b24232d2284ab68e307b0b997ed2f341faa470a5e14ce43
--> a379e7611210
STEP 3/4: COPY ./inputs.conf /fluent-bit/etc/inputs.conf
--> Using cache f39b10d3d6d044aa82b719fe9170d741e7c42d01fb9d50cc8ba678f7d2cb4
--> f39b10d3d6d0
STEP 4/4: COPY ./outputs.conf /fluent-bit/etc/outputs.conf
--> Using cache cd6f623a4f5afcd19a62281d0e20963a9f38309d02c98b27afa70755d1111
COMMIT workshop-fb:v2
--> cd6f623a4f5a
Successfully tagged localhost/workshop-fb:v2
cd6f623a4f5afcd19a62281d0e20963a9f38309d02c98b27afa70755d1111832


Now we'll run our new container image, but we need a way for the container to write the two log files somewhere we can inspect them (rather than on the container's internal filesystem). We mount our local workshop directory to the container's /tmp directory so the files appear on our local machine:

$ podman run -v ./:/tmp workshop-fb:v2

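On SELinux-enabled hosts (for example, Fedora), the bind mount may need a relabel option before the container can write to it; a hedged variant using podman's :Z volume flag, with --rm added to clean up the container on exit:

```
# Same run, with an SELinux relabel on the mount and automatic
# container cleanup when it exits.
$ podman run --rm -v ./:/tmp:Z workshop-fb:v2
```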

The output looks exactly like the source output above, just with different timestamps. Again, you can stop the container using CTRL-C. Because the local filesystem is mounted into the container, your current directory now contains workshop-INFO.log and workshop-ERROR.log. Validate that they contain the right events:

$ cat workshop-INFO.log

[0] workshop.info: [[1709649106.731085297, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709649107.729271465, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709649108.729087599, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709649109.734200379, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
[0] workshop.info: [[1709649110.731502585, {}], {"message"=>"This is workshop INFO message", "level"=>"INFO", "color"=>"yellow"}]
...


$ cat workshop-ERROR.log

[0] workshop.error: [[1709649106.731230173, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709649107.729377883, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709649108.729115099, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709649109.734291254, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
[0] workshop.error: [[1709649110.731568252, {}], {"message"=>"This is workshop ERROR message", "level"=>"ERROR", "color"=>"red"}]
...



This completes the use case for this article. Be sure to explore this hands-on experience with the accompanying workshop lab.

What's Next?

This article walked through how to use the routing phase to process events to several output destinations using telemetry pipelines with Fluent Bit. The series continues with the next step, where we'll explore a use case for modifying incoming events using filters.

Stay tuned for more hands-on material to help you with your cloud-native observability journey.


Published at DZone with permission of Eric D. Schabell, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
