DZone

MuleSoft Logs Integration With Datadog

In this article, learn how to integrate MuleSoft application logs with an external data system, Datadog, using the Log4j2 file.

By Vikalp Bhalia · Aug. 01, 22 · Tutorial

In this article, we will see how to integrate MuleSoft application logs with an external data system using the Log4j2 configuration file.

Some organizations want to persist logs for more than 30 days or build custom dashboards on top of them, so the logs need to be stored in an external data system. The easiest and most common way to integrate logs with external systems is through the Log4j2 configuration file.

This article integrates MuleSoft application logs with one such external system, Datadog. Datadog Log Management unifies logs, metrics, and traces in a single view, giving you rich context for analyzing log data.

Integration of MuleSoft Logs With Datadog

To integrate MuleSoft logs with Datadog, follow the steps below:

1.  Sign up at Datadog.

2.  Get an API key. Go to Organization Settings and copy the API key. You can also create a new key if you have more than one source pushing logs to Datadog.
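Before wiring the key into the Mule application, it can be useful to confirm it is valid. Below is a minimal sketch using Datadog's key-validation endpoint; it assumes the US Datadog site (api.datadoghq.com), and `<YOUR_API_KEY>` is a placeholder for the key copied above.

```shell
# Validate a Datadog API key before using it in Log4j2.
# Adjust the domain if your organization is on an EU or other Datadog site.
curl -s -o /dev/null -w "%{http_code}\n" \
  -H "DD-API-KEY: <YOUR_API_KEY>" \
  "https://api.datadoghq.com/api/v1/validate"
```

A valid key should return HTTP 200; an invalid one is rejected with 403.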

Organization Settings

3.  Modify the Log4j2 file and add the Datadog HTTP appender in the Appenders section.

 
<Http name="DATADOG"
	url="https://http-intake.logs.datadoghq.com/api/v2/logs?host=${sys:hostName}&amp;ddsource=Mulesoft&amp;service=${sys:application.name}&amp;ddtags=env:${sys:env}">
	<Property name="Content-Type" value="application/json" />
	<Property name="DD-API-KEY" value="${sys:ddapikey}" />
	<JsonLayout compact="false" properties="true">
	</JsonLayout>
</Http>


You can add more tags and JSON parameters as per your organization's needs.

Refer to the Datadog HTTP API documentation and the Log4j2 JsonLayout documentation for more information.
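To confirm the intake endpoint and key work end to end before deploying, you can send a hand-built log event that mirrors what the appender above will produce. This is a sketch with placeholder values: `<YOUR_API_KEY>`, `my-mule-app`, and `dev` are assumptions to replace with your own, while the query parameters match the appender URL.

```shell
# Send a single test log event to the Datadog v2 logs intake endpoint.
# The ddsource/service/ddtags query parameters mirror the Log4j2 appender URL.
curl -s -o /dev/null -w "%{http_code}\n" \
  -X POST "https://http-intake.logs.datadoghq.com/api/v2/logs?ddsource=Mulesoft&service=my-mule-app&ddtags=env:dev" \
  -H "Content-Type: application/json" \
  -H "DD-API-KEY: <YOUR_API_KEY>" \
  -d '[{"message": "test log from curl", "hostname": "local-test"}]'
```

A successful intake request returns 202 Accepted, and the test event should appear in the Datadog Logs view shortly afterward.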

4.  Reference the Datadog appender from the root logger so all log events are routed to it:

 
<AsyncRoot level="INFO">
	<AppenderRef ref="file" />
	<AppenderRef ref="DATADOG" />
</AsyncRoot>


5.  Once the changes are done in the Log4j2 file, pass the below parameters at runtime:

  • application.name
  • ddapikey (the API key copied in step 2)
  • env

These are user-defined parameter names; you can rename them as per the naming conventions defined in your organization.
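For a local or standalone Mule runtime, these parameters can be passed as JVM system properties; on CloudHub they go into the application's Properties tab in Runtime Manager (as plain key=value pairs, without the -D prefix). A sketch with placeholder values:

```shell
# Standalone Mule runtime: the -M prefix forwards -D system properties to the JVM.
# application.name, ddapikey, and env match the ${sys:...} lookups in the Log4j2 file;
# datadog-poc, <YOUR_API_KEY>, and dev are placeholder values.
./bin/mule -M-Dapplication.name=datadog-poc \
           -M-Dddapikey=<YOUR_API_KEY> \
           -M-Denv=dev
```

On CloudHub, the equivalent application properties would be `application.name=datadog-poc`, `ddapikey=<YOUR_API_KEY>`, and `env=dev`.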

6.  Once you deploy the application, you will see logs flowing into Datadog.

7.  To view logs in Datadog:

  • Log in to Datadog.
  • Click on Logs, and you will see the MuleSoft logs.
    MuleSoft logs

8.  You can choose the columns that you want to see: click the Options button and select the columns.

Select the columns from options

9.  You can filter logs using the facets in the left panel:
Filter logs based on left panel

Enable Custom Log4j2 in the CloudHub Runtime

When we deploy the application on the CloudHub runtime, the Log4j2 file packaged with the application is ignored, and the Mule runtime uses a default Log4j2 configuration.

To enable the custom Log4j2 file, we need to select the Disable CloudHub logs check box in the CloudHub runtime settings.

If you need to enable the CloudHub logs and Datadog at the same time, add both the CloudHub log appender and the Datadog appender to the Log4j2 file and reference them from the root logger.

Follow the steps in the MuleSoft documentation to enable the CloudHub log appender in the Log4j2 file. Here is a complete Log4j2 file for reference:

 
<?xml version="1.0" encoding="utf-8"?>
<Configuration>

	<!-- These are some of the loggers you can enable; there are several more
		in the documentation. Besides this Log4j configuration, you can
		also use Java VM options to enable other logs, such as network
		(-Djavax.net.debug=ssl or all) and Garbage Collector (-XX:+PrintGC). These
		will be appended to the console, so you will see them in the mule_ee.log file. -->

	<Appenders>
		<RollingFile name="file"
			fileName="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}datadog-poc.log"
			filePattern="${sys:mule.home}${sys:file.separator}logs${sys:file.separator}datadog-poc-%i.log">
			<PatternLayout
				pattern="%-5p %d [%t] [processor: %X{processorPath}; event: %X{correlationId}] %c: %m%n" ></PatternLayout>
			<SizeBasedTriggeringPolicy size="10 MB" ></SizeBasedTriggeringPolicy>
			<DefaultRolloverStrategy max="10" ></DefaultRolloverStrategy>
		</RollingFile>
		<Http name="DATADOG"
			url="https://http-intake.logs.datadoghq.com/api/v2/logs?host=${sys:hostName}&amp;ddsource=Mulesoft&amp;service=${sys:application.name}&amp;ddtags=env:${sys:env}">
			<Property name="Content-Type" value="application/json" ></Property>
			<Property name="DD-API-KEY" value="${sys:ddapikey}" ></Property>
			<JsonLayout compact="false" properties="true">
			</JsonLayout>
		</Http>
        <Log4J2CloudhubLogAppender name="CLOUDHUB"
                addressProvider="com.mulesoft.ch.logging.DefaultAggregatorAddressProvider"
                applicationContext="com.mulesoft.ch.logging.DefaultApplicationContext"
                appendRetryIntervalMs="${sys:logging.appendRetryInterval}"
                appendMaxAttempts="${sys:logging.appendMaxAttempts}"
                batchSendIntervalMs="${sys:logging.batchSendInterval}"
                batchMaxRecords="${sys:logging.batchMaxRecords}"
                memBufferMaxSize="${sys:logging.memBufferMaxSize}"
                journalMaxWriteBatchSize="${sys:logging.journalMaxBatchSize}"
                journalMaxFileSize="${sys:logging.journalMaxFileSize}"
                clientMaxPacketSize="${sys:logging.clientMaxPacketSize}"
                clientConnectTimeoutMs="${sys:logging.clientConnectTimeout}"
                clientSocketTimeoutMs="${sys:logging.clientSocketTimeout}"
                serverAddressPollIntervalMs="${sys:logging.serverAddressPollInterval}"
                serverHeartbeatSendIntervalMs="${sys:logging.serverHeartbeatSendIntervalMs}"
                statisticsPrintIntervalMs="${sys:logging.statisticsPrintIntervalMs}">

            <PatternLayout pattern="[%d{MM-dd HH:mm:ss}] %-5p %c{1} [%t]: %m%n"></PatternLayout>
        </Log4J2CloudhubLogAppender>
	</Appenders>

	<Loggers>
		<!-- Http Logger shows wire traffic on DEBUG -->
		<!--AsyncLogger name="org.mule.service.http.impl.service.HttpMessageLogger" 
			level="DEBUG"/ -->
		<AsyncLogger name="org.mule.service.http" level="WARN" ></AsyncLogger>
		<AsyncLogger name="org.mule.extension.http" level="WARN" ></AsyncLogger>

		<!-- Mule logger -->
		<AsyncLogger
			name="org.mule.runtime.core.internal.processor.LoggerMessageProcessor"
			level="INFO" ></AsyncLogger>

		<AsyncRoot level="INFO">
			<AppenderRef ref="file" ></AppenderRef>
			<AppenderRef ref="DATADOG" ></AppenderRef>
			<AppenderRef ref="CLOUDHUB" ></AppenderRef>
		</AsyncRoot>
        <AsyncLogger name="com.gigaspaces" level="ERROR"></AsyncLogger>
        <AsyncLogger name="com.j_spaces" level="ERROR"></AsyncLogger>
        <AsyncLogger name="com.sun.jini" level="ERROR"></AsyncLogger>
        <AsyncLogger name="net.jini" level="ERROR"></AsyncLogger>
        <AsyncLogger name="org.apache" level="WARN"></AsyncLogger>
        <AsyncLogger name="org.apache.cxf" level="WARN"></AsyncLogger>
        <AsyncLogger name="org.springframework.beans.factory" level="WARN"></AsyncLogger>
        <AsyncLogger name="org.mule" level="INFO"></AsyncLogger>
        <AsyncLogger name="com.mulesoft" level="INFO"></AsyncLogger>
        <AsyncLogger name="org.jetel" level="WARN"></AsyncLogger>
        <AsyncLogger name="Tracking" level="WARN"></AsyncLogger>
	</Loggers>

</Configuration>


In this article, we have learned how to integrate MuleSoft logs with an external system, Datadog. Happy learning!


Opinions expressed by DZone contributors are their own.
