
Batch Processing Best Practices


Most applications have at least one batch processing task, executing a particular logic in the background. Writing a batch job is not complicated but there are some basic rules you need to be aware of, and I am going to enumerate the ones I found to be most important.

From an input point of view, processing items may arrive either by polling an item repository or by being pushed into the system through a queue. A typical batch processing system has three main components:

  • the input component (loading items by polling or from an input queue)
  • the processor: the main processing logic component
  • the output component: the output channel or store where results will be sent


1. Always Poll in Batches

You should retrieve items in batches, one batch at a time. I recently had to diagnose an OutOfMemoryError thrown by a scheduled job that tried to retrieve all available items for processing at once.

The system integration tests passed because they used small amounts of data. But when the scheduled job was offline for two days due to a deployment issue, unprocessed items accumulated with no one to consume them; when the scheduler came back online, it couldn't process them because they didn't fit into its memory heap. So setting a high scheduling frequency is not enough on its own.

To prevent this situation, fetch only one batch of items, consume it, and then repeat the process until there is nothing left to process.
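The fetch-consume-repeat loop can be sketched as follows; `fetchBatch` stands in for whatever repository query you use (a hypothetical interface, not from the article), and memory stays bounded no matter how many items have piled up:

```java
import java.util.List;
import java.util.function.Consumer;
import java.util.function.Function;

class BatchPoller {

    /**
     * Repeatedly fetches at most batchSize items and processes them,
     * until the repository reports nothing left. Returns the total count.
     */
    static <T> int pollAndProcess(Function<Integer, List<T>> fetchBatch,
                                  Consumer<T> processor,
                                  int batchSize) {
        int processed = 0;
        List<T> batch;
        // Only one batch is ever held in memory at a time.
        while (!(batch = fetchBatch.apply(batchSize)).isEmpty()) {
            for (T item : batch) {
                processor.accept(item);
                processed++;
            }
        }
        return processed;
    }
}
```

The loop terminates because each successful fetch removes items from the backlog; an empty batch signals that the backlog is drained.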

2. Write a Thread-Safe Batch Processor

Typically, a scheduled job should run correctly no matter how many instances you choose to run in parallel. So the batch processor should be stateless, using only a local job execution context to pass state from one component to another. Even thread-safe global variables are not so safe after all, since different jobs' data might get mixed up during concurrent executions.
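A minimal sketch of such a stateless processor, with the execution context as a plain local map (the names and the trivial "processing" are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// No instance fields at all: per-job state lives in a local execution context
// created for each run, so concurrent executions cannot see or corrupt
// each other's data.
class StatelessBatchProcessor {

    Map<String, Object> process(Iterable<String> items) {
        Map<String, Object> context = new HashMap<>(); // local to this execution
        int processed = 0;
        for (String item : items) {
            processed++; // real per-item logic would go here
        }
        context.put("processedCount", processed);
        return context;
    }
}
```

Because nothing is stored on the instance, a single `StatelessBatchProcessor` can safely be shared by any number of parallel jobs.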

3. Throttling

When using queues (as input or within the batch processor), you should always have a throttling policy. If the item production rate is consistently higher than the consumption rate, you are heading for disaster. If the queued items are held in memory, you'll eventually run out of it. If they are stored in a persistent queue, you'll run out of disk space. So you need a mechanism for balancing producers and consumers. As long as the production rate is finite, you just have to make sure you have enough consumers to balance it out.

Auto-scaling consumers, such as starting new ones whenever the queue size grows beyond a given threshold, is a suitable adaptive strategy. Killing consumers as the queue size drops below some other threshold lets you free unnecessary idle threads.

The create-new-consumer threshold should be greater than the kill-idle one; if they were equal, you would get create/kill jitter whenever the queue size fluctuates around that single threshold.
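The two-threshold rule is classic hysteresis, and it can be captured in a small, pure decision function; the names and example values are illustrative:

```java
// +1 = start a consumer, -1 = stop an idle one, 0 = leave as-is.
// Keeping startThreshold strictly above stopThreshold creates a "dead band"
// in between, so fluctuations around either boundary never trigger
// a start immediately followed by a stop.
class ConsumerScaler {

    private final int startThreshold; // e.g. 500 queued items
    private final int stopThreshold;  // e.g. 100, must be lower

    ConsumerScaler(int startThreshold, int stopThreshold) {
        if (stopThreshold >= startThreshold) {
            throw new IllegalArgumentException(
                "stop threshold must be below start threshold");
        }
        this.startThreshold = startThreshold;
        this.stopThreshold = stopThreshold;
    }

    int decide(int queueSize, int consumerCount, int minConsumers) {
        if (queueSize > startThreshold) {
            return +1; // backlog growing: add a consumer
        }
        if (queueSize < stopThreshold && consumerCount > minConsumers) {
            return -1; // backlog drained: release an idle consumer
        }
        return 0; // inside the dead band: do nothing
    }
}
```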

4. Storing Job Results

Storing job results in memory is rarely a good idea. Choosing persistent storage (such as a MongoDB capped collection) is a better option.

If the results are held in memory and you forget to cap them at an upper bound, your batch processor will eventually run out of memory. Restarting the scheduler also wipes out your previous job results, and those are extremely valuable, since they are often the only feedback you get.
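If results must temporarily stay in memory (say, until a persistent store is in place), you can at least enforce the upper bound yourself, mimicking how a capped collection evicts its oldest entries; a sketch:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A fixed-capacity result store: once full, adding a new result evicts the
// oldest one, so the store can never grow without limit.
class CappedResultStore<T> {

    private final int maxSize;
    private final Deque<T> results = new ArrayDeque<>();

    CappedResultStore(int maxSize) {
        this.maxSize = maxSize;
    }

    synchronized void add(T result) {
        if (results.size() == maxSize) {
            results.removeFirst(); // drop the oldest job result
        }
        results.addLast(result);
    }

    synchronized T oldest() {
        return results.peekFirst();
    }

    synchronized int size() {
        return results.size();
    }
}
```

This only bounds memory; it does not survive a restart, which is exactly why a persistent store is the better option.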

5. Flooding External Service Providers

for (GeocodeRequest geocodeRequest : batchRequests) {
    // resolveLocation is a hypothetical call to the external map provider
    mapProviderService.resolveLocation(geocodeRequest);
}

This code floods your map provider: as soon as one request finishes, a new one is issued almost instantly, putting a lot of pressure on their servers. If batchRequests is large enough, you might even get banned.

You should add a short delay between requests, but don't put the current thread to sleep; use an EIP Delayer instead.
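With Spring Integration, such a Delayer is a single configuration element that holds each message for a while without blocking the sender's thread; a minimal sketch with illustrative channel names:

```xml
<int:delayer id="geocodeDelayer"
             input-channel="geocodeRequests"
             output-channel="throttledGeocodeRequests"
             default-delay="500"/>
```

Requests sent to `geocodeRequests` emerge on `throttledGeocodeRequests` after the configured delay, spacing out the calls to the external provider while the producing thread stays free.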

6. Use EIP-Style Programming for Your Batch Processor

While procedural programming is the default mind-set of most programmers, many batch processing tasks fit better with an Enterprise Integration Patterns (EIP) design. All the aforementioned rules are easier to implement using EIP tools such as:

  • message queues
  • polling channels
  • transformers
  • splitters/aggregators
  • delayers

Using EIP components eases testing, since you focus on a single responsibility at a time. EIP components communicate through messages conveyed by queues, so switching a synchronous processing channel to a thread-pool-dispatched one is just a configuration detail.

For more about EIP, check out the excellent Spring Integration framework. I've been using it for three years now, and once you get hooked you'll prefer it over procedural programming.



Published at DZone with permission of Vlad Mihalcea, DZone MVB.

Opinions expressed by DZone contributors are their own.
