
How To Virtualize Your Development Process Using Docker and Vagrant

In this guide, we will show you how to Dockerize your application for the very first time so that you can easily share and deploy it on any machine that supports Docker. We will also discuss how to run Docker almost anywhere using Vagrant.



Hello guys, my name is Andrii Dvoiak and I'm a full-stack web developer at a Ukrainian startup called Preply.com.

Preply is a marketplace for tutoring: a platform where you can easily find a personal, professional tutor based on your location and needs. We have more than 15,000 tutors across 40 different subjects.

We have been growing pretty fast over the last year, both in customers and in team size, so we decided to push our development process to the next level. First of all, we decided to organize and standardize our development environment so we can easily onboard new dev team members.

We spent a lot of time researching best practices and talking to other teams to understand the most efficient way to organize sharable development environments, especially when your product is decoupled into many microservices.

DISCLAIMER: Yes, we are big fans of microservices and Docker technology.

So, we ended up with two main technologies: Vagrant and Docker.

You can read about all the aspects and differences, straight from the creators of these tools, on StackOverflow.

In this guide, we will show you how to Dockerize your application for the very first time so that you can easily share and deploy it on any machine that supports Docker. We will also discuss how to run Docker almost anywhere using Vagrant.

Note: We think that Vagrant is overhead in this stack; you only need it on Windows or other platforms that don't support Docker natively. But Vagrant does have a few benefits, which we will discuss in the second part of this article.

By the end, you will have a Docker container that you can deploy to the production server and to the development VM in the same way.

Before We Start

We are going to deal with a simple Django app with a PostgreSQL Database and Redis as a broker for Celery Tasks.

Also, we are using Supervisor to run our Gunicorn server. 

Docker Compose technology will help us to orchestrate our multi-container application.
Note that Compose 1.5.1 requires Docker 1.8.0 or later.
This will help us run the Django app, PostgreSQL, Redis server, and Celery worker in separate containers and link them to each other.

To make it all happen, we only need to create a few files in the root directory of our Django project (next to the manage.py file):

  1. Dockerfile - (to build an image and push it to DockerHub)
  2. redeploy.sh - (to make redeploy in both DEV and PROD environments)
  3. docker-compose.yml - (to orchestrate all containers)
  4. Vagrantfile - (to provision a virtual machine for development environment)

We are using the environment variable RUN_ENV to specify the current environment. Type export RUN_ENV=PROD on the production server and export RUN_ENV=DEV on your local machine or Vagrant VM.
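For convenience, you can persist this variable in your shell profile so it survives new sessions. A minimal sketch, assuming a Bash shell (the profile file may differ on your setup):

echo 'export RUN_ENV=DEV' >> ~/.bashrc   # use RUN_ENV=PROD on the production server
source ~/.bashrc
echo $RUN_ENV                            # should print DEV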



As you may know, there are two main things in Docker: images and containers.

We are going to create an image based on Ubuntu with Python, PIP, and the other tools needed to run the Django app. This base image will also contain all your requirements pre-installed. We will push this image to a public repository on DockerHub. Note that this image doesn't contain any of your project files.

We all agreed to keep our image on DockerHub always up-to-date. From that image, we are going to run a container exposing specific ports and mounting your local directory with the project to some folder inside the container. This means that all your project files will be accessible within the container. No need to copy files! 

Note: this is very useful for the development process because you can change your files and they will instantly change in running Docker container, but it is unacceptable in the production environment.
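To make the mounting idea concrete, here is roughly what Docker Compose will do for our Django service under the hood, expressed as a plain docker run; a hedged sketch where the container name django_dev is purely illustrative:

# Run a container from our image, publish container port 8001 on host port 80,
# and mount the current project directory into /project inside the container:
docker run -d -p 80:8001 -v "$(pwd)":/project \
    --name django_dev username/image:latest \
    python manage.py supervisor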

Once we run a container, we will start supervisor with our Gunicorn server. 

When we want to redeploy, we pull new code from GitHub, stop and remove the existing container, and run a brand-new container. Magic!

Let's start!

Docker and Docker-Compose Installation

Before we start, we need to install Docker and Docker-Compose on our local computer or server.
Here is the code to do it on Ubuntu 14.04: 

sudo -i
echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
curl -sSL https://get.docker.com/ | sh
curl -L https://github.com/docker/compose/releases/download/1.5.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
usermod -aG docker ubuntu
sudo reboot

where ubuntu is your current user.

If your current development environment is not Ubuntu 14.04, you'd do better to use Vagrant to create this environment; start from the Vagrant section of this tutorial.
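After the reboot, it's worth verifying that everything is in place and that your user can talk to the Docker daemon without sudo:

docker --version             # e.g. Docker version 1.9.x
docker-compose --version     # e.g. docker-compose version 1.5.1
docker run --rm hello-world  # pulls and runs a tiny test container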

Docker Image

First off, to prepare the project for deployment with Docker, we need to build an image
with only Python, PIP, and some pip requirements needed to run Django.

Let's create a new file called Dockerfile in the root directory of the project.
It will look like this:

FROM ubuntu:14.04
MAINTAINER Andrii Dvoiak

RUN echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
RUN apt-get update
RUN apt-get install -y python-pip python-dev python-lxml libxml2-dev libxslt1-dev libxslt-dev libpq-dev zlib1g-dev && apt-get build-dep -y python-lxml && apt-get clean
# Specify your own RUN commands here (e.g. RUN apt-get install -y nano)

ADD requirements.txt requirements.txt
RUN pip install -r requirements.txt

WORKDIR /project


You can specify your own RUN commands, for example, to install other needed tools

Then you need to build an image from this, using the command:

docker build -t username/image .

(where username is your DockerHub username and image is the name of your new image for this project)

When you have successfully built the image, push it to your DockerHub repository:

docker push username/image

And don't worry: this image doesn't contain any information about your project (besides the requirements.txt file).

Note: if you use a private DockerHub repository, be sure to execute docker login before pushing and pulling images.
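The login flow is as simple as this (credentials are prompted interactively):

docker login                        # enter your DockerHub credentials
docker push username/image          # now pushes are authorized
docker pull username/image:latest   # and so are pulls of private images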

Orchestrating Containers

In our current stack, we have to start and run at least four containers, with the Redis server, PostgreSQL, Celery, and Django, in a specific order, and the best way to do this is to use an orchestration tool.

To make this process simple, we are going to use Docker Compose, which allows us to describe, in a simple YML file, which containers to run and how to link them to each other.

Let's create this magic file and give it the default name docker-compose.yml:

django:
  image: username/image:latest
  command: python manage.py supervisor
  ports:
    - "80:8001"
  volumes:
    - .:/project
  links:
    - redis
    - postgres
  restart: unless-stopped

celery_worker:
  image: username/image:latest
  command: python manage.py celery worker -l info
  links:
    - postgres
    - redis
  restart: unless-stopped

postgres:
  image: postgres:9.1
  volumes:
    - local_postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"

redis:
  image: redis:latest
  command: redis-server --appendonly yes

As you can see, we are going to run four services, called django, celery_worker, postgres, and redis. These names are important for us.

First, it will pull the Redis image from DockerHub and run a container from it.
Second, it will pull the Postgres image and run a container with data mounted from the local_postgres volume (we will discuss how to create persistent volumes in the next section of this article).
Then it will start the container with our Django app, forward port 8001 from inside the container to port 80 outside, mount your current directory to the /project folder inside the container, link it with the Redis and Postgres containers, and run supervisor. And last but not least is our container with the Celery worker, which is also linked with Postgres and Redis.
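Once the file is in place, you can bring up the whole stack and run a quick sanity check that all four services came up; something like:

docker-compose up -d       # start django, celery_worker, postgres, and redis
docker-compose ps          # all four containers should be in the "Up" state
curl -I http://localhost/  # the Django app should answer on port 80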

You can forward any number of ports if needed; just add new lines to the ports section.
Also, you can link any number of containers via the links section, for example, a container with your database.

Use the :latest tag so Docker automatically checks DockerHub for an updated image.

If you named the Redis service redis in the YML file, you need to use redis instead of localhost in your settings.py so your Django app can connect to Redis:

REDIS_HOST = "redis"
BROKER_URL = "redis://redis:6379/0"

The same goes for the Postgres database: use postgres instead of localhost.

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'database_name',
        'USER': os.getenv('DATABASE_USER', ''),
        'PASSWORD': os.getenv('DATABASE_PASSWORD', ''),
        'HOST': 'postgres',
        'PORT': '5432',
    }
}
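If you want to confirm that these hostnames really resolve, you can check from inside the running Django container; a small sketch, where project_django_1 is the name Compose typically generates (yours may differ):

# Linked services are added to the container's /etc/hosts by Docker:
docker exec -it project_django_1 getent hosts postgres
docker exec -it project_django_1 getent hosts redis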

Redeploy Script

Let's create a redeploy script to do redeployment in one click (let's call it redeploy.sh):

#!/bin/bash

if [ -z "$RUN_ENV" ]; then
    echo 'Please set up RUN_ENV variable'
    exit 1
fi

if [ "$RUN_ENV" = "PROD" ]; then
    git pull
fi

docker-compose stop
docker-compose rm -f
docker-compose up -d

Let's check what it does:

  1. It checks whether the variable RUN_ENV is set and exits if it isn't
  2. If RUN_ENV is set to PROD, it runs git pull to get the new version of your project
  3. It will stop all projects, specified in docker-compose.yml file
  4. It will remove all existing containers
  5. It will start new containers

So it's pretty simple: to redeploy, you just run ./redeploy.sh!

Do not forget to grant this script execution rights (chmod +x redeploy.sh).

To make a quick redeploy use this command:

docker-compose up --no-deps -d django

But we haven't yet seen how the app actually starts inside the container. Let's take a look in the next section!

Deploying Inside the Container

We just need to run supervisor to actually start our service inside the container:

python manage.py supervisor

If you don't use supervisor, you can just start your server directly instead.

If you do use supervisor, let's take a look at the supervisord.conf file:



[program:gunicorn]
command=gunicorn -w 4 -b 0.0.0.0:8001 YourApp.wsgi:application
directory={{ PROJECT_DIR }}
stdout_logfile={{ PROJECT_DIR }}/gunicorn.log

So, supervisor will start your Gunicorn server with 4 workers, bound to port 8001.

Development Process

  1. To access your local server you just go to http://localhost/
  2. To redeploy local changes do ./redeploy.sh
  3. To see all logs of your project run:
    docker-compose logs
  4. To quickly redeploy changes use:
    docker-compose restart django
  5. To connect to the Django shell, just run (if you have a running container called CONTAINER):
    docker exec -it CONTAINER python manage.py shell
    and run this command to create an initial superuser:
    from django.contrib.auth.models import User; User.objects.create_superuser('admin', 'admin@example.com', 'admin')
  6. To make migrations:
    docker exec -it CONTAINER python manage.py schemamigration blabla --auto  
  7. Or you can connect to bash inside the container:
    docker exec -it CONTAINER /bin/bash
  8. You can dump your local database to .json file (you can specify a table to dump):
    docker exec -it CONTAINER python manage.py dumpdata > testdb.json
  9. Or you can load data to your database from file:
    docker exec -it CONTAINER python manage.py loaddata testdb.json  
  10. Use this command to monitor status of your running containers: 
    docker stats $(docker ps -q)
  11. Use this command to delete all stopped containers:
    docker rm -v `docker ps -a -q -f status=exited`

where CONTAINER will typically be project_django_1.

You can play with your containers as you like. Here is a useful Docker cheat sheet.



Vagrant

The only reason we use Vagrant is to create an isolated development environment, based on a virtual machine, with all the services necessary to reproduce a real production environment. In other words, just to be able to run Docker.

This environment can be easily and quickly installed on any of your co-workers' machines. To do this, you only need to install Vagrant and VirtualBox (or another provider) on the local machine.
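For example, on a Mac you could install both with Homebrew; this is just one possible way, not a requirement of the setup:

brew cask install virtualbox   # the VM provider
brew cask install vagrant      # the VM manager
vagrant --version              # verify the install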

As we mentioned before, Vagrant could be overhead but we still wanted to cover this part of the process.


To set up a virtual machine, we need to create a file called Vagrantfile in the root dir of our project.

This is a good example of Vagrantfile:

# -*- mode: ruby -*-
# vi: set ft=ruby :

Vagrant.configure(2) do |config|

  config.vm.box = "ubuntu/trusty64"

  # We are going to run the docker container on port 80 inside vagrant and expose it on port 8000 outside vagrant
  config.vm.network :forwarded_port, guest: 80, host: 8000

  # You can forward any number of ports:
  # config.vm.network :forwarded_port, guest: 5555, host: 5555

  # All files from the current directory will be available in the /project directory inside vagrant
  config.vm.synced_folder ".", "/project"

  # We are going to give the VM 1/4 of the system memory & access to all cpu cores on the host
  config.vm.provider "virtualbox" do |vb|
    host = RbConfig::CONFIG['host_os']
    if host =~ /darwin/
      cpus = `sysctl -n hw.ncpu`.to_i
      mem = `sysctl -n hw.memsize`.to_i / 1024 / 1024 / 4
    elsif host =~ /linux/
      cpus = `nproc`.to_i
      mem = `grep 'MemTotal' /proc/meminfo | sed -e 's/MemTotal://' -e 's/ kB//'`.to_i / 1024 / 4
    else
      cpus = 2
      mem = 1024
    end
    vb.customize ["modifyvm", :id, "--memory", mem]
    vb.customize ["modifyvm", :id, "--cpus", cpus]
  end

  config.vm.provision "shell", inline: <<-SHELL
    # Install docker and docker-compose
    sudo -i
    echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
    curl -sSL https://get.docker.com/ | sh
    usermod -aG docker vagrant
    curl -L https://github.com/docker/compose/releases/download/1.5.1/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose
    chmod +x /usr/local/bin/docker-compose
  SHELL

end


We are going to use this file to start our virtual machine by typing this command:

vagrant up --provision

If you see a warning about a new version of your initial box, just run:

vagrant box update

Let's look closer at what it does:

  1. It will pull an image of the ubuntu/trusty64 operating system from the Vagrant repository
  2. It will expose port 80 from inside the machine as port 8000 outside
  3. It will mount your current directory to the /project directory inside the machine
  4. It will give the VM 1/4 of the system memory and access to all CPU cores
  5. It will install Docker and Docker-Compose inside the virtual machine

This is it. Now you just need to go inside this VM by typing:

vagrant ssh

Let's check if everything is properly installed:

docker --version
docker-compose --version

If not, try to install them manually (see the Docker and Docker-Compose installation section above).

Let's go to our project directory inside the VM and set the environment variable RUN_ENV to DEV:

cd /project
export RUN_ENV=DEV

Now you can do a local redeploy as simply as:

./redeploy.sh

Enjoy your local server at http://localhost:8000/

Closing Down Virtual Machine

You can use three different commands to stop working with your VM:

vagrant suspend     # - to freeze your VM with saved RAM
vagrant halt        # - to stop your VM but save all files
vagrant destroy     # - to fully delete your virtual machine!
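To come back to a suspended or halted machine later, the usual commands apply:

vagrant status   # shows whether the VM is running, saved, or powered off
vagrant up       # boots (or resumes) the VM
vagrant ssh      # reconnects you to it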

Additional Information

Creating Local Database in Container


We assume that you use Docker 1.9 and Docker-Compose 1.5.1 or later. 

Volumes were a mess in Docker before version 1.9, but now Docker has introduced named volumes!
You can create a volume independently of containers and images. This gives you a persistent volume that can be mounted by any number of containers and keeps your data safe even when the containers are gone.

It's very simple. To list all your local volumes, run:

docker volume ls

To inspect the volume:

docker volume inspect <volume_name>

To create a new volume:

docker volume create --name=<volume_name>

And, to delete a volume just do:

docker volume rm <volume_name>

TIP: Now you can remove old volumes created by some containers in the past.
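For example, you can find volumes that are no longer referenced by any container and clean them up one by one (double-check the names before deleting anything):

docker volume ls -f dangling=true   # volumes not attached to any container
docker volume rm <volume_name>      # remove an old one you no longer need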

Creating Local Database

We are going to run a Docker container with PostgreSQL and mount it to the already created Docker Volume.

Let's create a local volume for our database:

docker volume create --name=local_postgres

We are going to use a temporary file, docker-compose.database.yml:

postgres:
  image: postgres:9.1
  container_name: some-postgres
  volumes:
    - local_postgres:/var/lib/postgresql/data
  ports:
    - "5432:5432"

And, run a container with PostgreSQL:

docker-compose -f docker-compose.database.yml up -d

Now you have an empty database running in the container and accessible on localhost:5432 (or another host if you run Docker on Mac).

You can connect to this container:

docker exec -it some-postgres /bin/bash

Switch to the right user:

su postgres

And enter the PSQL console:

psql

And, for example, create a new database:

CREATE DATABASE test;

To exit PSQL, type:

\q
You can easily remove this container by:

docker-compose -f docker-compose.database.yml stop
docker-compose -f docker-compose.database.yml rm -f

All created data will be stored in the local_postgres volume. Next time you need a database, you can run a new container and you will already have your test database created.
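You can convince yourself that the data really survives; a quick check, reusing the same file and container names from above:

docker-compose -f docker-compose.database.yml up -d    # start a fresh container
docker exec -it some-postgres su postgres -c "psql -l" # 'test' is still listed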

Dumping Local Database to File

Let's say you want to dump data from your local database. You need to run a simple container and mount your local directory to some directory in the container.
Then you go inside the container and dump the data to a file in this mounted directory.

Let's do it:

postgres:
  image: postgres:9.1
  container_name: some-postgres
  volumes:
    - local_postgres:/var/lib/postgresql/data
    - .:/data
  ports:
    - "5432:5432"

In this example, we are going to mount our current directory to the /data directory inside the container.

Go inside:

docker exec -it some-postgres /bin/bash
su postgres

And dump:

pg_dump --host 'localhost' -U postgres test -f /data/test.out

This command will dump the test database to the file test.out in your current directory.
You can delete this container now.
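As a variation, you can run the same dump without entering the container at all, redirecting the output on the host side; a sketch with the same names:

# pg_dump writes to stdout; the redirect happens on the host:
docker exec some-postgres su postgres -c "pg_dump test" > test.out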

Loading Data from File

Use the same technique with mounting directories to load data from file. 

Use this command to load the data if you are already in the Postgres container and have /data/test.out mounted.

psql --host 'localhost' --username postgres test < /data/test.out

Don't forget to create the database test before this.
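If it doesn't exist yet, you can create it from the host in one line (same container name as above):

docker exec -it some-postgres su postgres -c "createdb test"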


Gitignore

When the server is running, it will create additional files such as .log, .pid, etc.
You don't need them to be committed!
Don't forget to create a .gitignore file:

*.log
*.pid
Static Files

You might have some problems serving static files in production with only Gunicorn, so check this out.
Don't forget to create an empty folder /static/ in your project root directory, with an __init__.py file, to serve static files from it.
With this schema, your settings.py file should include:

STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'static')
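With STATIC_ROOT set, remember to collect your static files into it as part of a deploy; for example, from the host (using the same illustrative container name as before):

docker exec -it project_django_1 python manage.py collectstatic --noinput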

And, to serve static with Gunicorn on production add this to the end of urls.py:

    urlpatterns += patterns('', (r'^static/(?P<path>.*)$', 'django.views.static.serve', {'document_root': settings.STATIC_ROOT}), )

Otherwise, you can use an additional service to serve static files, for example, an NGINX server.

The End

Please don't be shy about sharing your experience of organizing the development process in your team and your best practices for working with Docker and Vagrant.
Drop me some feedback at andrii@preply.com or facebook.com/dvoyak.

Thank you!



Published at DZone with permission of Andrii Dvoiak.
