Migrate Your Elastic Beanstalk Workers to Docker Containers


Moving can be such a process, but with this simple tutorial and a tool created by this developer, moving your Elastic Beanstalk worker is easy.


Amazon Elastic Beanstalk is one of the most popular services that AWS provides. Elastic Beanstalk comes with two options: the worker and the web application.

The worker application consumes messages from an SQS queue and processes them. If processing succeeds, the message is removed from the queue; if it fails, the message remains in the queue and, after a number of failed attempts, is moved to a dead-letter queue. If you want to get deeper into Elastic Beanstalk, I have made a tutorial on deploying your Spring application using Elastic Beanstalk and AWS CloudFormation.
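The lifecycle described above can be sketched with a minimal in-memory simulation. This is purely illustrative: the names (`MAX_RECEIVES`, `run_worker`) and the in-memory queue are stand-ins for the real SQS redrive policy, not part of any AWS API.

```python
from collections import deque

MAX_RECEIVES = 3  # illustrative; in SQS this is the redrive policy's maxReceiveCount

def run_worker(queue, process, dead_letter_queue):
    """Consume messages: delete on success, retry on failure,
    and move to the dead-letter queue after MAX_RECEIVES attempts."""
    while queue:
        message = queue.popleft()
        message["receives"] = message.get("receives", 0) + 1
        try:
            process(message["body"])
            # success: the message is simply gone (deleted from the queue)
        except Exception:
            if message["receives"] >= MAX_RECEIVES:
                dead_letter_queue.append(message)  # give up: dead-letter it
            else:
                queue.append(message)  # failure: message goes back on the queue

queue = deque([{"body": "ok"}, {"body": "boom"}])
dlq = []

def process(body):
    if body == "boom":
        raise RuntimeError("processing failed")

run_worker(queue, process, dlq)
# "ok" is processed and removed; "boom" lands in the DLQ after 3 attempts
```

The point is the contract, not the implementation: a worker never acknowledges a message it failed to process, so the queue (or its dead-letter companion) retains it.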

Elastic Beanstalk workers are really great because they are managed, they can be scaled up and down with your workload, and they support a wide variety of development environments such as Java and Node.js, as well as Docker images.

Although Elastic Beanstalk can work wonders in an AWS-based infrastructure, you may face some issues when you try to move to a container-based infrastructure run by a container orchestration engine.

Your containerized worker application will most likely work seamlessly without any extra configuration; however, you need to find an alternative for the agent which dispatches the queue messages to your application.

In order to keep things simple, I implemented a mechanism that retrieves messages from the queue and sends them to the worker application.
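A minimal sketch of such a dispatch loop, with toy stand-ins for the real SQS and HTTP calls (the actual project is Scala-based; the function names here are hypothetical and only illustrate the flow):

```python
def dispatch(fetch_messages, post_to_worker, delete_message):
    """Poll the queue and forward each message to the worker's HTTP
    endpoint; delete it only when the worker responds with success,
    mirroring what the Elastic Beanstalk worker daemon does."""
    for message in fetch_messages():
        status = post_to_worker(message["body"])
        if 200 <= status < 300:
            delete_message(message["id"])
        # on failure the message is left in place and will be
        # redelivered once its visibility timeout expires

# Toy stand-ins wired together for illustration:
messages = [{"id": 1, "body": "job-a"}, {"id": 2, "body": "job-b"}]
deleted, handled = [], []

def fetch_messages():
    return list(messages)

def post_to_worker(body):
    handled.append(body)
    return 200 if body != "job-b" else 500  # simulate one failing job

def delete_message(message_id):
    deleted.append(message_id)

dispatch(fetch_messages, post_to_worker, delete_message)
# job-a succeeded and was deleted; job-b failed and stays queued
```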

The container-queue-worker project aims to provide an easy way to migrate your Elastic Beanstalk workers to a Docker orchestration system.

Since the solution is Scala-based, it can either be used as a standalone JVM application or run as a container using the image from Docker Hub.

Once it is set up, you need to add the routing configuration.

This can be done using environment variables:

WORKER_TYPE=sqs
WORKER_SERVER_ENDPOINT=http://{docker-service}
WORKER_AWS_QUEUE_ENDPOINT=http://{amazon queue endpoint}

Or, if you use it as a container, you can add a config file at the /etc/worker/worker.conf path:

worker {
  type = sqs
  server-endpoint = http://{docker-service}
  aws {
    queue-endpoint = http://{amazon queue endpoint}
  }
}
In order to make things easier for you, I added a Docker Compose file simulating the desired outcome.

version: '3.5'

networks:
  queue-worker-network:
    name: queue-worker-network

services:
  worker-server:
    build:
      context: ./worker-server
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    networks:
      - queue-worker-network
  elasticmq:
    build:
      context: ./elasticmq
      dockerfile: Dockerfile
    ports:
      - 9324:9324
    networks:
      - queue-worker-network
  queue-worker:
    image: gkatzioura/container-queue-worker:0.1
    depends_on:
      - elasticmq
      - worker-server
    environment:
      WORKER_TYPE: sqs
      WORKER_SERVER_ENDPOINT: http://worker-server:8080/
      WORKER_AWS_QUEUE_ENDPOINT: http://elasticmq:9324/queue/test-queue
      AWS_DEFAULT_REGION: eu-west-1
      AWS_ACCESS_KEY_ID: access-key
      AWS_SECRET_ACCESS_KEY: secret-key
    networks:
      - queue-worker-network

Published at DZone with permission of Emmanouil Gkatziouras , DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
