Serverless Big Data Pipeline on AWS

Andreas Wittig discusses the benefits, use cases, and limitations of serverless.

Lambda is a powerful tool when integrating different services on AWS. During the last few months, I've successfully used serverless architectures to build Big Data pipelines, and I'd like to share what I've learned with you.

The benefits of a serverless pipeline are:

  • No need to manage a fleet of EC2 instances.
  • Highly scalable.
  • Billed per execution.

A Big Data pipeline moves data from data sources to data targets. This is often called an ETL process (extract, transform, load). The following figure shows a typical serverless Big Data pipeline:

[Figure: a typical serverless Big Data pipeline]

Use Cases

Using Lambda to implement your Big Data pipeline is especially useful if you need to transform or filter data while moving it from the data source to the data target.

Typical use cases:

  • Load CloudFront and ELB logs from S3, transform and filter the data, and insert it into an Elasticsearch cluster.
  • Load business reports from S3, transform and filter the data, and insert it into Redshift.
  • Load event data from a Kinesis stream, transform and filter the data, and store it on S3 for further processing.

Other use cases are possible as well. Changed data (S3 and DynamoDB), external events, or a schedule (CloudWatch Event Rule) can trigger a Lambda function, and a Lambda function can access data sources and targets connected to the Internet or a VPC.
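To make the trigger side concrete, here is a minimal sketch of a Node.js handler reacting to an S3 event. The record fields follow the documented S3 notification structure; the logging is just an illustration:

exports.handler = function(event, context, callback) {
    // An S3 notification may contain multiple records.
    event.Records.forEach(function(record) {
        var bucket = record.s3.bucket.name;
        // Object keys arrive URL-encoded, with "+" standing in for spaces.
        var key = decodeURIComponent(record.s3.object.key.replace(/\+/g, " "));
        console.log("object changed: s3://" + bucket + "/" + key);
        // TODO: extract, transform, and load the object here.
    });
    callback(null, "processed " + event.Records.length + " record(s)");
};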

It seems like there are almost no limits, right?

Limitations

Lambda is a powerful tool, but compared to an EC2 instance, it comes with limitations as well. These are the most relevant limits when building a serverless Big Data pipeline:

  • Maximum execution duration: 300 seconds
  • Maximum memory: 1536 MB
  • Ephemeral disk capacity: 512 MB
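The maximum execution duration is usually the limit you have to engineer around. One option, sketched here under the assumption that your workload can be cut into chunks, is to check the remaining time via context.getRemainingTimeInMillis() and stop gracefully before Lambda terminates the function:

var work = ["chunk-1", "chunk-2", "chunk-3"]; // hypothetical work items

exports.handler = function(event, context, callback) {
    var safetyMarginInMillis = 10000; // assumption: 10 seconds is enough to wrap up
    while (work.length > 0) {
        if (context.getRemainingTimeInMillis() < safetyMarginInMillis) {
            // Stop gracefully; remaining chunks must be handled by another invocation.
            return callback(null, work.length + " chunk(s) left for a follow-up run");
        }
        var chunk = work.shift();
        console.log("processing", chunk); // stand-in for real chunk processing
    }
    callback(null, "all chunks processed");
};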

A real-world example:

  1. Load CSV file from S3.
  2. Unzip data.
  3. Transform data.
  4. Zip data.
  5. Upload to S3.

The CSV file contains about 800 MB of unzipped data. Implementing the Lambda function as a linear sequence of these steps is not possible, as there is neither enough memory nor enough disk capacity to hold the unzipped data and the transformed data at once.
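For contrast, a buffered (non-streaming) sketch of the same steps shows why: every intermediate result has to fit into memory at once. Bucket and key names are placeholders, and transformAll is a hypothetical stand-in for the real transformation:

var AWS = require("aws-sdk");
var zlib = require("zlib");

var s3 = new AWS.S3();

function transformAll(buffer) { // hypothetical stand-in, returns input unchanged
    return buffer;
}

s3.getObject({Bucket: "BUCKET_NAME", Key: "KEY"}, function(err, data) {
    if (err) return console.error(err);
    var unzipped = zlib.gunzipSync(data.Body); // ~800 MB buffer in memory
    var transformed = transformAll(unzipped);  // another buffer of similar size
    // unzipped plus transformed exceed the 1536 MB memory limit, and
    // 512 MB of /tmp is no escape hatch for ~800 MB of data either.
    var zipped = zlib.gzipSync(transformed);
    s3.upload({Bucket: "BUCKET_NAME", Key: "KEY", Body: zipped}, function(err) {
        if (err) console.error(err);
    });
});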

Solution: Data Streaming

Using streaming instead of linear execution allows you to extract, transform, and load data in chunks from the beginning to the end of the pipeline.

The following Node.js source code implements such a streaming pipeline for the described scenario:

  1. Load csv.tgz file from S3.
  2. Unzip data.
  3. Split at the end of the line.
  4. Transform data.
  5. Zip data.
  6. Upload file to S3.

var AWS = require("aws-sdk");
var zlib = require("zlib");
var split = require("split");
var transform = require("stream-transform");

var sourceBucket = "BUCKET_NAME";
var sourceKey = "KEY";
var targetBucket = "BUCKET_NAME";
var targetKey = "KEY";

var s3 = new AWS.S3();

// Called once per line; passes each record through unchanged for now.
var transformer = transform(function(record, callback) {
    // TODO transform
    callback(null, record);
});

var pipeline = s3.getObject({ // (1) read the object from S3 as a stream
    Bucket: sourceBucket,
    Key: sourceKey
}).createReadStream()
    .pipe(zlib.createGunzip()) // (2) unzip the data
    .pipe(split()) // (3) split at the end of each line
    .pipe(transformer) // (4) transform each record
    .pipe(zlib.createGzip()); // (5) zip the data again

// (6) upload the result to S3, streaming from the pipeline
s3.upload({Bucket: targetBucket, Key: targetKey, Body: pipeline}, function(err) {
    if (err) {
        console.error("upload failed", err);
    } else {
        console.log("upload finished");
    }
});
This approach allows you to process data without hitting the memory or disk space limitations. Of course, the maximum execution duration of 300 seconds still limits the maximum throughput of your serverless data pipeline. If you hit that limit, you need to split your data into smaller chunks.
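One way to split the work, sketched here with placeholder bucket, prefix, and function names, is a small coordinator function that asynchronously invokes a worker Lambda (for example, the streaming function above) once per source object:

var AWS = require("aws-sdk");

var s3 = new AWS.S3();
var lambda = new AWS.Lambda();

exports.handler = function(event, context, callback) {
    // List the source objects under a prefix (placeholder names).
    s3.listObjects({Bucket: "BUCKET_NAME", Prefix: "input/"}, function(err, data) {
        if (err) return callback(err);
        var pending = data.Contents.length;
        if (pending === 0) return callback(null, "nothing to do");
        var failed = false;
        data.Contents.forEach(function(object) {
            lambda.invoke({
                FunctionName: "WORKER_FUNCTION_NAME", // placeholder
                InvocationType: "Event", // asynchronous, fire-and-forget
                Payload: JSON.stringify({key: object.Key})
            }, function(err) {
                if (failed) return;
                if (err) { failed = true; return callback(err); }
                if (--pending === 0) callback(null, "all workers invoked");
            });
        });
    });
};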
