
Serverless: Building a Mini Producer/Consumer Data Pipeline


Setting up your own serverless data pipeline is easy. Here, we'll use the Serverless toolkit and a Python script to call an API and load that data into a database.


I wanted to create a little data pipeline with Serverless. Its main use would be to run once a day, call an API, and load that data into a database.

It’s mostly used to pull in recent data from that API, but I also wanted to be able to invoke it manually and specify a date range.

I created the following pair of Lambda functions, which communicate with each other via an SNS topic.

The Code

serverless.yml:

service: marks-blog

frameworkVersion: ">=1.2.0 <2.0.0"

provider:
  name: aws
  runtime: python3.6
  timeout: 180
  iamRoleStatements:
    - Effect: 'Allow'
      Action:
        - "sns:Publish"
      Resource:
        - ${self:custom.BlogTopic}

custom:
  BlogTopic:
    Fn::Join:
      - ":"
      - - arn
        - aws
        - sns
        - Ref: AWS::Region
        - Ref: AWS::AccountId
        - marks-blog-topic

functions:
  message-consumer:
    name: MessageConsumer
    handler: handler.consumer
    events:
      - sns:
          topicName: marks-blog-topic
          displayName: Topic to process events
  message-producer:
    name: MessageProducer
    handler: handler.producer
    events:
      - schedule: rate(1 day)


handler.py:

import boto3
import json
import datetime
from datetime import timezone

def producer(event, context):
    sns = boto3.client('sns')

    # Derive the region and account id from this function's own ARN so
    # the topic ARN doesn't have to be hard-coded.
    context_parts = context.invoked_function_arn.split(':')
    topic_name = "marks-blog-topic"
    topic_arn = "arn:aws:sns:{region}:{account_id}:{topic}".format(
        region=context_parts[3], account_id=context_parts[4], topic=topic_name)

    # Default window: yesterday to today (UTC).
    now = datetime.datetime.now(timezone.utc)
    start_date = (now - datetime.timedelta(days=1)).strftime("%Y-%m-%d")
    end_date = now.strftime("%Y-%m-%d")

    params = {"startDate": start_date, "endDate": end_date, "tags": ["neo4j"]}

    sns.publish(TopicArn=topic_arn, Message=json.dumps(params))


def consumer(event, context):
    # SNS delivers the payload as a JSON string inside each record.
    for record in event["Records"]:
        message = json.loads(record["Sns"]["Message"])

        start_date = message["startDate"]
        end_date = message["endDate"]
        tags = message["tags"]

        print("start_date: " + start_date)
        print("end_date: " + end_date)
        print("tags: " + str(tags))

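
The consumer above only prints its parameters. As a rough sketch of where it's headed — fetching posts for the date range and loading them into a database — here is a minimal, self-contained version. The `fetch_posts` helper is hypothetical (a stand-in for the real API call), and SQLite stands in for whatever database the pipeline ultimately targets:

```python
import json
import sqlite3

def fetch_posts(start_date, end_date, tags):
    # Hypothetical stand-in for the real API call; in practice this
    # would be an HTTP request with the dates and tags as parameters.
    return [{"title": "Example post", "date": start_date, "tags": tags}]

def load_posts(message):
    # Fetch posts for the requested window and insert them into a
    # throwaway in-memory SQLite database.
    posts = fetch_posts(message["startDate"], message["endDate"], message["tags"])

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE IF NOT EXISTS posts (title TEXT, date TEXT, tags TEXT)")
    db.executemany(
        "INSERT INTO posts VALUES (?, ?, ?)",
        [(p["title"], p["date"], json.dumps(p["tags"])) for p in posts])
    db.commit()
    return db.execute("SELECT COUNT(*) FROM posts").fetchone()[0]
```

The consumer would call `load_posts(message)` in place of the `print` statements.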

Trying It Out

We can simulate a message being received locally by executing the following command:

$ serverless invoke local \
    --function message-consumer \
    --data '{"Records":[{"Sns": {"Message":"{\"tags\": [\"neo4j\"], \"startDate\": \"2017-09-25\", \"endDate\": \"2017-09-29\"  }"}}]}'

start_date: 2017-09-25
end_date: 2017-09-29
tags: ['neo4j']
null
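
Hand-escaping that nested JSON in `--data` is fiddly; the same payload can be built programmatically and printed, then passed to `serverless invoke local`:

```python
import json

# Build the SNS test event without quoting the inner JSON by hand:
# the message body is itself a JSON string inside the outer event.
params = {"tags": ["neo4j"], "startDate": "2017-09-25", "endDate": "2017-09-29"}
event = {"Records": [{"Sns": {"Message": json.dumps(params)}}]}
data = json.dumps(event)
print(data)  # pass this string as the --data argument
```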


That seems to work fine. What happens if we invoke the message-producer on AWS?

$ serverless invoke --function message-producer

null


Did the consumer receive the message?

$ serverless logs --function message-consumer

START RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f Version: $LATEST
start_date: 2017-09-29
end_date: 2017-09-30
tags: ['neo4j']
END RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f
REPORT RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f  Duration: 0.46 ms  Billed Duration: 100 ms  Memory Size: 1024 MB  Max Memory Used: 32 MB


Looks like it! We can also invoke the consumer directly on AWS:

$ serverless invoke \
    --function message-consumer \
    --data '{"Records":[{"Sns": {"Message":"{\"tags\": [\"neo4j\"], \"startDate\": \"2017-09-25\", \"endDate\": \"2017-09-26\"  }"}}]}'

null


And now if we check the consumer’s logs, we’ll see both messages:

$ serverless logs --function message-consumer

START RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f Version: $LATEST
start_date: 2017-09-29
end_date: 2017-09-30
tags: ['neo4j']
END RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f
REPORT RequestId: 0ef5be87-a5b1-11e7-a905-f1387e68c65f  Duration: 0.46 ms  Billed Duration: 100 ms  Memory Size: 1024 MB  Max Memory Used: 32 MB

START RequestId: 4cb42bc9-a5b1-11e7-affb-99fa6b4dc3ed Version: $LATEST
start_date: 2017-09-25
end_date: 2017-09-26
tags: ['neo4j']
END RequestId: 4cb42bc9-a5b1-11e7-affb-99fa6b4dc3ed
REPORT RequestId: 4cb42bc9-a5b1-11e7-affb-99fa6b4dc3ed  Duration: 16.46 ms  Billed Duration: 100 ms  Memory Size: 1024 MB  Max Memory Used: 32 MB


Success!


Published at DZone with permission of Mark Needham, DZone MVB. See the original article here.
