The Challenges of Dockerizing a Ruby Application
The choice of base image for a Ruby application can make a huge difference in the final image size.
Containers are great and they are gaining more popularity all the time. Containerization is replacing virtualization by removing the hypervisor layer and allowing isolated container processes to run on a shared kernel instead (Image 1). The most important benefit of containers is start time. While a fully virtualized system usually takes minutes to start, containers take seconds, and sometimes even less than a second. With containers there is also a standard for how to package, deliver, and deploy an application.
Image 1: Moving from virtualization to containerization
To put an application into a Docker container, a Dockerfile is needed. It's like the source code of the Docker image: the Dockerfile defines all the steps required to get the application and its environment up and running.
If the Dockerfile is the source code, then a Docker image is the compiled version of it. Actually, it's not a single image but a set of image layers. Image layers are cached, so the whole Docker image need not be rebuilt every time the Dockerfile changes. The later a change appears in the Dockerfile, the fewer image layers need to be rebuilt.
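This caching behavior is why dependency manifests are usually added before the application code. A minimal sketch (the image tag and paths are illustrative):

```dockerfile
# Steps that change rarely go first so their layers stay cached.
FROM ruby:2.3.1-alpine

# Adding only the Gemfiles here means the expensive bundle install layer
# is rebuilt only when dependencies change, not on every code edit.
ADD Gemfile Gemfile.lock /app/
RUN cd /app && bundle install

# Application code changes often, so it is added last.
ADD . /app
```

With this ordering, editing application code invalidates only the final layer; the bundle install layer is served from cache.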
Ruby Base Images
Every Docker image extends some base image. Typically, a base image contains an OS and common libraries and packages. For Ruby developers, there are official Ruby base images that contain specific Ruby versions built-in. The official Ruby base images are:
ruby:version is the de-facto Ruby base image. In addition to Ruby, it contains a large number of extremely common Debian packages.
ruby:onbuild is perhaps the easiest base image to start with. You just need to extend this image and you are ready to go: it wraps your application into a Docker image automatically at build time. However, it's not recommended for long-term use within a project due to the lack of control.
ruby:slim is still Debian-based but it only contains the minimal packages needed to run Ruby. This is a good choice when you want to use Debian packages and define your environment by yourself.
ruby:alpine is based on Alpine Linux. It’s the smallest ruby base image, but the main caveat is that it uses musl libc instead of glibc and friends, so certain software might run into issues depending on the depth of their libc requirements.
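As a sketch of how little the onbuild variant requires, a Dockerfile extending it can be as short as two lines (the version tag and CMD shown are illustrative, not taken from the article's example app):

```dockerfile
# The ONBUILD triggers in the base image copy the application into the
# image and run bundle install automatically at build time.
FROM ruby:2.3.1-onbuild
# Hypothetical entry point; adjust to your application's server command.
CMD ["bundle", "exec", "rackup", "-p", "9292"]
```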
The Debian-based base images may be easier to start with, but that comes at the cost of a larger image size (Image 2): the default image is almost six times bigger than the one based on Alpine Linux. Smaller images are faster to transfer and keep your environment lean and efficient. They also improve security, as they reduce your security footprint.
Image 2: Sizes of the Official Ruby Images
Of course, one option is not to use any of the official Ruby base images, but to pick another base image and build the whole Ruby environment from scratch. Then you have total control over which libraries and packages to include in your Docker image.
Docker Best Practices
When running applications in containers, there are a couple of rules to follow:
Run One Process Per Container
Decoupling applications into multiple containers makes it much easier to scale horizontally and to reuse containers. You can also configure Docker to monitor the container's running process: when Docker notices that the process has exited, it can restart the container automatically.
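The automatic restart can be expressed as a restart policy, for example in a Compose file (a minimal sketch; the service name and image match the example used later in this article):

```yaml
# docker-compose.yml fragment: restart the container automatically
# if its process exits with a non-zero status, up to five attempts.
services:
  web:
    image: todoapp:latest
    restart: on-failure:5
```

The same policy is available on the command line as docker run --restart=on-failure:5.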
Use a .dockerignore File
To improve build performance, you can exclude files and directories by adding a .dockerignore file to the build context directory. This file supports exclusion patterns similar to .gitignore files.
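A minimal .dockerignore for a Ruby project might look like this (the entries are typical examples, not taken from the article's repository):

```
# Version control history never belongs in the image
.git
# Locally installed gems, logs, and temp files are rebuilt inside the container
vendor/bundle
log/
tmp/
# The Docker files themselves are not needed inside the image
Dockerfile
.dockerignore
```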
Use the Twelve-Factor Apps Paradigm
If you run your application on Heroku, you are already used to the twelve-factor app paradigm. Docker and containers support this kind of paradigm natively, so if you are not yet familiar with it, you can read more at http://12factor.net/
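One of the twelve factors is storing configuration in the environment rather than in code. A minimal Ruby sketch (the variable name matches the MONGODB_URI passed in via docker-compose later in this article; the default value is illustrative):

```ruby
# Twelve-factor config: read settings from the environment instead of
# hard-coding them. Falls back to a local development default when the
# variable is not set, e.g. when running outside of Docker.
def mongodb_uri
  ENV.fetch('MONGODB_URI', 'mongodb://localhost:27017/todo_development')
end

puts mongodb_uri
```

The same code then runs unchanged in development, test, and production; only the environment differs.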
Don’t Rely On IP Addresses
Docker generates an IP address for each container. However, the IP address changes every time a container is re-created, so you can't really rely on those addresses. Instead, you have to use some form of service discovery and DNS.
Our example application is a simple Sinatra-based application with a MongoDB database. You can find all the source code and Docker files at https://github.com/kontena/todo-example.
We will use the Alpine Linux-based Ruby base image. First, we add the Gemfile and Gemfile.lock files to the Docker image. Then we install Bundler and run bundle install. To reduce the image size, we remove build-time dependencies from the Docker image after the gems are installed. Finally, we add our application to the Docker image, set some permissions, and expose the port the application will listen on. Based on that, Docker can route traffic correctly to the container's port.
FROM ruby:2.3.1-alpine

ADD Gemfile /app/
ADD Gemfile.lock /app/
RUN apk --update add --virtual build-dependencies ruby-dev build-base && \
    gem install bundler --no-ri --no-rdoc && \
    cd /app ; bundle install --without development test && \
    apk del build-dependencies

ADD . /app
RUN chown -R nobody:nogroup /app
USER nobody

ENV RACK_ENV production
EXPOSE 9292
WORKDIR /app
We can build the Docker image by executing docker build -t todoapp:latest . This generates a Docker image from the Dockerfile found in the current directory and tags it as todoapp:latest.
We can run our application container from the Docker image manually with the docker run command. However, a better way is to run all application services with Docker Compose. Docker Compose is a tool for defining and running multi-container Docker applications. Application services and their configurations are defined in a docker-compose.yml file:
version: '2'
services:
  web:
    image: todoapp:latest
    command: bundle exec puma -p 9292 -e production
    environment:
      - MONGODB_URI=mongodb://mongodb:27017/todo_production
    ports:
      - 9292:9292
    links:
      - mongodb:mongodb
  mongodb:
    image: mongo:3.2
    command: mongod --smallfiles
Here we define one web service that uses our todoapp Docker image. We also have a MongoDB service based on the mongo:3.2 image, and it's linked to our web application under the hostname mongodb.
We can deploy the whole application with the docker-compose up command.
So it's relatively easy to Dockerize a Ruby application and run it locally. When rolling out to production, things are not that simple anymore. There are multiple things to consider:
- How big will this app be? How many users will it serve?
- Do you want your application to be infrastructure-agnostic, or to lean heavily on one cloud provider?
- How will you run databases and store other persistent data?
- How will you scale the application and handle load balancing?
- How will you pass sensitive data to your application, and where will you store that data?
- How can the application be deployed and updated with zero downtime?
You can solve all those things by yourself, but it would be a long and rocky road. Instead, you should choose a container platform that suits your needs best.
Kontena is a new open source Docker platform including orchestration, service discovery, overlay networking, and all the tools required to run your containerized workloads. Kontena is built to maximize developer happiness. It works on any cloud, it's easy to set up, and it's super simple to use. Give it a try! If you like it, please star it on GitHub and follow us on Twitter!
Published at DZone with permission of Lauri Nevala , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.