
Inserting Data Into a Database Using Batch Job

Learn how batch processing can be used with the MuleSoft database connector to insert data into a database in Mule using a batch job.

By Ajeeta Koundle · Oct. 30, 17 · Tutorial

To insert data into a database using a batch job, let’s first understand what batch processing and the database connector are in Mule.

Batch Processing

Mule can process messages in batches. A batch job is a block of code that splits messages into individual records, performs actions upon each record, then reports on the results and, potentially, pushes the processed output to other systems or queues. This functionality is particularly useful when working with streaming input or when engineering "near real-time" data integration between SaaS applications.

A batch job contains one or more batch steps which, in turn, contain any number of message processors that act upon records as they move through the batch job. During batch processing, you can use record-level variables (recordVars) and MEL expressions to enrich, route, or otherwise act upon records.

A batch job executes when triggered either by a Batch Execute message processor in a Mule flow or by a message source placed in the batch job's input phase; when triggered, Mule creates a new batch job instance. When all records have passed through all batch steps, the batch job instance ends, and the batch job result can be summarized in a report indicating which records succeeded and which failed during processing.
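
For instance, once a batch job is defined, a separate flow can trigger it with a Batch Execute message processor (a minimal sketch; the flow name is illustrative and the message source is omitted):

<flow name="trigger-accounts-sync">
    <!-- any message source goes here (HTTP listener, file inbound endpoint, etc.) -->
    <!-- Batch Execute hands the current payload to the named batch job, creating a new job instance -->
    <batch:execute name="sfdc-syncBatch" doc:name="Batch Execute"/>
</flow>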

Batch Processing Phases

Batch processing has four phases:

  1. Input (Optional):

    1. Triggers the processing via an inbound endpoint.

    2. Modifies the payload as needed before batch processing.

  2. Load and Dispatch (Implicit):

    1. Performs "behind the scenes" work.

    2. Splits payload into a collection of records and creates a queue.

  3. Process (Required):

    1. Asynchronously processes the records.

    2. Contains one or more batch steps.

  4. On Complete (Optional):

    1. Reports the summary of records processed.

    2. Gives insight into which records failed so you can address any issues.
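
For reference, the four phases map onto the structure of a batch job like this (a bare-bones sketch; names and the logger expression are illustrative):

<batch:job name="phase-demo-job">
    <!-- 1. Input (optional): a message source triggers the job and transformers shape the payload -->
    <batch:input>
        <!-- e.g., a poll plus a DataWeave transformer -->
    </batch:input>
    <!-- 2. Load and Dispatch (implicit): Mule splits the payload into records and queues them -->
    <!-- 3. Process (required): records move asynchronously through one or more steps -->
    <batch:process-records>
        <batch:step name="Demo_Step">
            <!-- message processors that act upon each record -->
        </batch:step>
    </batch:process-records>
    <!-- 4. On Complete (optional): summarize the run from the batch job result payload -->
    <batch:on-complete>
        <logger message="#['Successful: ' + payload.successfulRecords + ', failed: ' + payload.failedRecords]" level="INFO" doc:name="Logger"/>
    </batch:on-complete>
</batch:job>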

Database Connector

The database connector allows you to connect with almost any Java Database Connectivity (JDBC) relational database using a single interface for every case. The database connector allows you to run diverse SQL operations on your database, including Select, Insert, Update, Delete, and even Stored Procedures.

To use a database connector in your Mule application:

  1. Check that your database engine is one of the database engines supported out of the box; if it is not, add the JDBC driver for your database engine.

  2. Configure a Database Global Element, where you define:

    1. Your database’s location and connection details.

    2. Whether DataSense is enabled.

    3. Advanced connection parameters such as connection pooling.

  3. Configure the Database connector element that you embed in your Mule flow, which:

    1. Contains the query to perform on the database.

    2. References the Database Global Element.
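
For example, the two pieces might look like this for a MySQL database (a hedged sketch; the connection details, table, and query are illustrative, and the global element type varies by database engine):

<!-- Database Global Element: location, credentials, and connection details -->
<db:mysql-config name="MySQL_Configuration" host="localhost" port="3306"
    user="${db.user}" password="${db.password}" database="test"
    doc:name="MySQL Configuration"/>

<flow name="query-accounts-flow">
    <!-- the connector element embedded in a flow: holds the query and references the global element -->
    <db:select config-ref="MySQL_Configuration" doc:name="Database">
        <db:parameterized-query><![CDATA[SELECT NAME, CITY FROM accounts WHERE COUNTRY = #[payload.country]]]></db:parameterized-query>
    </db:select>
</flow>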

Supported Operations

The database connector supports the following operations:

  • Select

  • Insert

  • Update

  • Delete

  • Stored Procedure

  • Bulk Execute

  • DDL operations such as CREATE, ALTER, etc.

Additionally, you can run a TRUNCATE query on the database by selecting Update as the operation.
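
Issued through the Update operation, a TRUNCATE might look like this (a sketch reusing the illustrative MySQL_Configuration from above):

<db:update config-ref="MySQL_Configuration" doc:name="Database">
    <!-- TRUNCATE has no dedicated operation, so it is sent through Update -->
    <db:parameterized-query><![CDATA[TRUNCATE TABLE accounts]]></db:parameterized-query>
</db:update>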

Example:

In this example, we are going to take data from a SaaS application (Salesforce) using Poll, filter out any records that already exist in the database, and finally save only the non-duplicate records to the database.

Steps:

  1. Create a batch job that polls Salesforce at a specific interval with a specific condition (e.g., LastModifiedDate).

  2. Add a DataWeave transformer to map the Salesforce fields to the database columns.

  3. Add a batch step and use a message enricher to check whether the record already exists in the database (an account with the same name), storing the result in a record variable while retaining the original payload.

  4. Add a second batch step with a filter that allows only new records to be added into the database.

Flow:

(Flow diagram: Poll Salesforce → Transform Message → Batch_Step1: duplicate check → Batch_Step2: database insert → On Complete: log results)

Code:

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns:schedulers="http://www.mulesoft.org/schema/mule/schedulers" xmlns:dw="http://www.mulesoft.org/schema/mule/ee/dw" xmlns:db="http://www.mulesoft.org/schema/mule/db" xmlns:batch="http://www.mulesoft.org/schema/mule/batch" xmlns:sfdc="http://www.mulesoft.org/schema/mule/sfdc" xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation"
xmlns:spring="http://www.springframework.org/schema/beans" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-current.xsd
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd
http://www.mulesoft.org/schema/mule/sfdc http://www.mulesoft.org/schema/mule/sfdc/current/mule-sfdc.xsd
http://www.mulesoft.org/schema/mule/batch http://www.mulesoft.org/schema/mule/batch/current/mule-batch.xsd
http://www.mulesoft.org/schema/mule/db http://www.mulesoft.org/schema/mule/db/current/mule-db.xsd
http://www.mulesoft.org/schema/mule/ee/dw http://www.mulesoft.org/schema/mule/ee/dw/current/dw.xsd
http://www.mulesoft.org/schema/mule/schedulers http://www.mulesoft.org/schema/mule/schedulers/current/mule-schedulers.xsd">
    <sfdc:config name="Salesforce__Basic_Authentication" username="${sfdc.username}" password="${sfdc.password}" securityToken="${sfdc.token}" doc:name="Salesforce: Basic Authentication"/>
    <batch:job name="sfdc-syncBatch">
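        <!-- Input phase: poll Salesforce on a cron schedule, using a watermark on LastModifiedDate so only new or changed accounts are fetched -->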
        <batch:input>
            <poll doc:name="Poll">
                <schedulers:cron-scheduler expression="0 15 19 * * ?"/>
                <watermark variable="timestamp" default-expression="#['1900-12-11T14:16:00.000Z']" selector="MAX" selector-expression="#[payload.LastModifiedDate]"/>
                <sfdc:query config-ref="Salesforce__Basic_Authentication" query="dsql:SELECT BillingCity,BillingCountry,BillingPostalCode,BillingState,BillingStreet,LastModifiedDate,Name FROM Account WHERE LastModifiedDate &gt; #[flowVars['timestamp']]" doc:name="Salesforce"/>
            </poll>
            <dw:transform-message doc:name="Transform Message">
                <dw:set-payload><![CDATA[%dw 1.0
%output application/java
---
payload map ((payload01, indexOfPayload01) -> {
    NAME: payload01.Name,
    STREET: payload01.BillingStreet,
    CITY: payload01.BillingCity,
    STATE: payload01.BillingState,
    POSTAL: payload01.BillingPostalCode,
    COUNTRY: payload01.BillingCountry
})]]></dw:set-payload>
            </dw:transform-message>
        </batch:input>
        <batch:process-records>
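            <!-- Batch_Step1: look up each account by name and record in recordVars.exists whether it already exists in the database -->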
            <batch:step name="Batch_Step1">
                <enricher source="#[payload.size() &gt;0]" target="#[recordVars.exists]" doc:name="Message Enricher">
                    <db:select config-ref="Derby_Configuration" doc:name="Database">
                        <db:parameterized-query><![CDATA[select NAME,STREET,CITY,STATE,POSTAL,COUNTRY from accounts where NAME = #[payload.NAME]]]></db:parameterized-query>
                    </db:select>
                </enricher>
            </batch:step>
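            <!-- Batch_Step2: only records not found in Batch_Step1 pass the accept-expression and are inserted -->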
            <batch:step name="Batch_Step2" accept-expression="#[!recordVars.exists]">
                <db:insert config-ref="Derby_Configuration" doc:name="Database">
                    <db:parameterized-query><![CDATA[Insert into Accounts (name, street, city, state, postal, country) values (#[message.payload.NAME], #[message.payload.STREET], #[message.payload.CITY], #[message.payload.STATE], #[message.payload.POSTAL], #[message.payload.COUNTRY])]]></db:parameterized-query>
                </db:insert>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <logger message="#[payload]" level="INFO" doc:name="Logger"/>
        </batch:on-complete>
    </batch:job>
</mule>



Opinions expressed by DZone contributors are their own.
