
8 Lessons Learned Using docker-compose

docker-compose provides a way to automate the setup and configuration of services. Here are a few tips for using it easily and securely.


Nowadays, nobody enjoys setting up and configuring services manually.

Here comes the question: how do you automate the process, and make it fast and reliable? Wrap up some SSH scripts? Leverage configuration management (CM) tools like Chef, Ansible, or Puppet? Use docker run? It’s great that we now have many options. But, as you may guess, not all of them are equally good.

You want a headache-free solution, right? You also want it real quick, don't you? Then you can't miss docker-compose! Here are some useful tips and lessons learned from using it.

I’m not here to argue that docker-compose is the best for all scenarios. There are definitely services we cannot or should not Dockerize, but I still insist that it’s capable enough for most cases.

CM is good, but Docker/docker-compose is just better for environment setup. Not convinced, my friends? Give it a try and challenge me now!

With the plain Docker facility, we can set up environments like the one below:

docker run --rm -p 8080:8080 \
  -h mytest --name my-test \
  -v /var/run/docker.sock:/var/run/docker.sock \
  --group-add=$(stat -c %g /var/run/docker.sock) \
  jenkinsci/docker-workflow-demo

But… the command is just too long, isn’t it? What if you need to start multiple different containers? You could easily mess it up, right?

It's showtime for docker-compose, an advanced version of docker run.

With just one command, docker-compose up -d, we can run the same deployment process against a new environment within a few seconds. Beautiful, isn’t it?
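For reference, the long docker run command above maps onto a docker-compose.yml roughly like this. One assumption in this sketch: since a compose file can't evaluate $(stat ...), I export the Docker socket's group ID first, e.g. export DOCKER_GID=$(stat -c %g /var/run/docker.sock), and let compose substitute it:

```yaml
version: '2'
services:
  demo:
    container_name: my-test
    hostname: mytest
    image: jenkinsci/docker-workflow-demo
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    # Grant the container the docker socket's group; DOCKER_GID is
    # substituted from the shell environment (or a .env file).
    group_add:
      - "${DOCKER_GID}"
```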


Here are my lessons learned using docker-compose.

1. Infrastructure-as-Code: Host All Setup and Configuration Logic in Git Repo

Host everything necessary in the Git repo: docker-compose.yml, .env, and Docker volume files.

  • Everything you need can and should be found in the Git repo.
  • People can easily review and comment on your changes via PRs.
  • You can audit the change history to understand issues.
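As a sketch of how the .env file fits in: docker-compose automatically substitutes variables from a .env file in the same directory into docker-compose.yml, so environment-specific values stay out of the main file (the variable name here is hypothetical):

```yaml
# docker-compose.yml: the value is substituted from .env
services:
  db:
    image: mysql
    environment:
      MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASSWORD}"
```

The matching .env would contain a line like DB_ROOT_PASSWORD=rootpassword. Whether you commit the real .env or only a template depends on how sensitive its contents are.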

2. Port Mapping: Default Docker Port Mapping Is Dangerous for the Public Cloud

You may be surprised at how insecure Docker's default port mapping is! Let me show you.

Imagine you have a MySQL instance running with the below docker-compose.yml.

version: '2'
services:
  db:
    container_name: db
    image: mysql
    ports:
      - "3306:3306"
    volumes:
      - db_home:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: rootpassword
      MYSQL_USER: user1
      MYSQL_PASSWORD: password1
      MYSQL_DATABASE: db1

volumes:
  db_home:

With the default setting, anyone from the internet can access your db, port 3306.

root@denny:# docker-compose up -d
root@denny:# iptables -L -n | grep 3306
ACCEPT  tcp  --  0.0.0.0/0  172.17.0.2  tcp dpt:3306
root@denny:# telnet $YOUR_IP 3306

Why? Docker adds wide-open iptables rules for you. Sweet, isn’t it? Anyone, from anywhere, can access them.

So let’s limit the source IPs allowed to connect. Good idea! Unfortunately, it’s impossible: that rule is added by Docker, and to make it worse, Docker provides no hook points for us to change this behavior.

With some more thinking, you may wonder: how about deleting the rules created by Docker and adding my own? I’m an iptables expert; it’s easy. Well, it’s not. The tricky part is that your customized iptables rules won’t be recognized or managed by Docker, so they can easily get messed up when the service or machine reboots. Worse, the container's IP may change on every restart.

So instead of the default port mapping, I bind the port mapping to a specific IP address, like below.

    ports:
      - "127.0.0.1:3306:3306"
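In the MySQL example above, the service would then look like this; the database stays reachable from the host itself (or via an SSH tunnel), but Docker no longer opens it to the internet:

```yaml
services:
  db:
    image: mysql
    ports:
      # Bind to loopback only: no wide-open iptables rule is created
      - "127.0.0.1:3306:3306"
```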

3. Data Volume: Separate Data From the Application Using Docker Volume

Make sure the containers don't hold any unrecoverable application data. That way we can safely recreate Docker environments, or even migrate to another environment, completely and easily.

docker-compose can mount local folders and named volumes:

  db:
    container_name: db
    image: mysql
    networks:
      - network_application
    volumes:
      # Named volumes
      - db_home:/var/lib/mysql
      # Local folders
      - ./scripts:/var/lib/mysql/scripts

volumes:
  db_home:

docker-compose can also mount a file over an existing one:

services:
  shadowsock:
    image: denny/ss:v1
    ports:
      - "6187:6187"
    volumes:
      - ./shadowsock.json:/etc/shadowsocks.json:rw
    entrypoint: ["/usr/bin/supervisord", "-n", "-c", "/etc/supervisor/supervisord.conf"]


4. Migration Rehearsal: Run Rehearsal for the docker-compose Migration

It’s always a good idea to run migration rehearsal.

Ideally, there should be no more than three steps:

  • scp the data volumes from the old environment to the new one.
  • Install docker-compose and run docker-compose up -d.
  • Perform the few remaining manual steps, mostly around credentials.

5. Backup: Enforce Weekly Backup for Data Volumes

All critical data should live in data volumes only. To back up the system, we just need to back up the data volume folders.

When the data volumes are not very big, we can:

  • Enforce periodic folder backups of the data volumes, e.g. weekly.
  • Reduce out-of-disk issues by rotating out very old backup sets.
  • Guard against local VM failures by backing up to a remote VM or AWS S3.
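A minimal, cron-able sketch of such a backup; the volume path and the 60-day retention are assumptions, adjust them to your layout:

```shell
#!/bin/sh
# backup_volume <volume_dir> <backup_dir>:
# tarball a data volume's folder and rotate old backup sets.
backup_volume() {
  volume_dir="$1"; backup_dir="$2"
  [ -d "$volume_dir" ] || { echo "volume dir not found: $volume_dir"; return 1; }
  mkdir -p "$backup_dir"
  # Timestamped tarball of the data volume folder.
  tar -czf "$backup_dir/backup-$(date +%Y%m%d).tar.gz" -C "$volume_dir" .
  # Rotate: drop backup sets older than 60 days to avoid out-of-disk issues.
  find "$backup_dir" -name 'backup-*.tar.gz' -mtime +60 -delete
}

# e.g. weekly from cron, for the named volume "db_home" (assumed path):
# backup_volume /var/lib/docker/volumes/db_home/_data /backup/db_home
```

From there, a second step could rsync or aws s3 cp the backup folder off the machine.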

6. Docker Image: Build Your Own Docker Image

During deployment, any request to external services is a failure point.

To make the deployment smoother, I always build my own Docker images and pre-download packages/files into them, if necessary.
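For example, a hypothetical custom image that bakes its packages and files in at build time, so docker-compose up never has to reach out to external mirrors:

```dockerfile
# Pre-install packages at image build time, so deployment itself
# has no external download as a failure point.
FROM ubuntu:16.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        supervisor curl \
    && rm -rf /var/lib/apt/lists/*
# Pre-download any required files into the image as well.
COPY ./scripts /opt/scripts
```

Reference it from docker-compose.yml with build: ., or push it to a registry you control and use image:.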

7. Monitoring: Use Docker Healthcheck and External SaaS Monitoring

Monitoring and alerting are good.

  • Define a Docker healthcheck; troubleshooting then becomes as simple as:

docker-compose ps
docker ps | grep unhealthy


  • External monitoring is easy and useful.

Try uptimerobot.com. It runs a URL or port check every five minutes. If a check fails, we can easily get Slack or email notifications.

Did I mention that uptimerobot.com is totally free? I’ve been a happy, loyal customer for more than five years.
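The Docker healthcheck mentioned above can be declared directly in docker-compose.yml (this needs compose file format 2.1 or later); a sketch for the MySQL service, using a mysqladmin ping as one common probe choice:

```yaml
version: '2.1'
services:
  db:
    image: mysql
    healthcheck:
      # Container is marked unhealthy if mysqld stops answering pings
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 1m
      timeout: 10s
      retries: 3
```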

8. Debugging: Change the Entrypoint in docker-compose.yml for Temporary Troubleshooting

In a complicated docker-compose environment, some services may fail to start. When debugging such issues, it’s better to have the containers up and running.

I find it useful to replace the container’s start logic with a dummy placeholder:

service1:
  container_name: service1
  image: someimage1
  entrypoint: ["sleep", "10000"]
  environment:
  ...
  ...



Topics: integration, docker, docker-compose

Published at DZone with permission of

Opinions expressed by DZone contributors are their own.
