Running Multiple Services Inside a Single Container Using Supervisord


An explanation of how to use containers as an effective way to run multiple programs.


In this blog, I am going to explain how to run multiple services inside a single container, and how to effectively use Docker Compose and persistent volumes in a local development environment, using Supervisord. A container is a lightweight platform for running an application along with its dependencies in an isolated environment, and the usual best practice is to run a single service per container.

Though we can access different services hosted in different containers over a container network, we can get the same benefits by running multiple services in a single container. There are situations where we need to run more than one service inside a container and make them accessible from the container host or network. For example, an Apache/Nginx HTTP server alongside an FTP server, or several microservices running as separate processes inside the same container.

We can use different methods to achieve this, such as adding multiple commands to the ENTRYPOINT at the end of the Dockerfile, or writing a wrapper script that spawns each service as a separate process. These methods have limitations: complexity grows as the number of services to be managed in the container increases, and there is no clean way to handle the startup dependencies between services.
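
As an illustration, a naive version of the ENTRYPOINT approach might chain the commands directly (vsftpd and Nginx here are placeholder services):

```dockerfile
# Start the FTP server in the background, then run Nginx in the
# foreground as PID 1. If either process dies, nothing restarts it,
# and adding more services makes this line increasingly fragile.
ENTRYPOINT ["sh", "-c", "vsftpd & exec nginx -g 'daemon off;'"]
```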

Here, we can use a simple and user-friendly process manager like Supervisord to start, stop, and restart multiple services inside a container. Supervisord is a lightweight tool that can be easily deployed in a container, and its configuration remains easy to maintain as the number of services it handles grows. Supervisord can be accessed through a web interface, an XML-RPC interface, and a command-line interface.

Supervisord Configuration File
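
A minimal sketch of such a configuration, assuming a Gunicorn WSGI service and a hypothetical metrics collector binary (the program names and commands are illustrative):

```ini
[supervisord]
; keep supervisord in the foreground so the container stays alive
nodaemon=true
logfile=/dev/null
logfile_maxbytes=0

; serves the Supervisord web interface and XML-RPC endpoint
[inet_http_server]
port=0.0.0.0:9001

[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface

[program:wsgi]
command=gunicorn app:app --bind 0.0.0.0:8000
autorestart=true
; send program output to the container's stdout
stdout_logfile=/dev/fd/1
stdout_logfile_maxbytes=0

[program:metrics-collector]
command=/usr/local/bin/collector
autorestart=true
```

Each service gets its own `[program:x]` section, so adding or removing a service is a matter of editing one block rather than rewriting a startup script.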

I am going to deploy a simple multi-service application using docker-compose. Make sure the Docker daemon is installed and running on your machine, and that docker-compose is installed as well.

Docker-compose is a CLI tool that helps you define and deploy a multi-service containerized application environment. It is ideal for developing applications in your local environment without connecting to a remote host or a virtual machine.

It can be used to deploy and scale an application across environments, from development through to production. There are multiple advantages to using docker-compose to manage a multi-service application.

We can run multiple isolated environments on a single host, enable CI/CD for the environment, and debug and identify software bugs more easily.

We can build a specific service of an application individually, and we also get the benefits of Docker networking and persistent volumes for accessing and storing the data layer of an application.

Assuming we have a Dockerfile for each service to be created by docker-compose, create a docker-compose.yml file and define the specification of the application. Once the file is created, execute docker-compose up in the same directory where the docker-compose file exists. It reads the definition and interacts with the Docker daemon to create the resources. It is recommended to run it as a daemon process by passing the parameter -d.

Docker Compose File
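
A rough sketch of a compose file matching the three services described below; the service names, images, ports, and paths are assumptions for illustration:

```yaml
version: "3"
services:
  app:
    build: ./app            # image runs Supervisord as its ENTRYPOINT
    ports:
      - "8000:8000"         # WSGI service
      - "9001:9001"         # Supervisord web interface
    volumes:
      - ./app:/opt/app      # mount the repo so code changes apply without a rebuild
    depends_on:
      - influxdb
  influxdb:
    image: influxdb:1.8
    volumes:
      - influxdb-data:/var/lib/influxdb
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    volumes:
      - grafana-data:/var/lib/grafana
    depends_on:
      - influxdb            # Grafana uses InfluxDB as its data source
volumes:
  influxdb-data:
  grafana-data:
```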

In the above example, I have defined three services. The first service is the app tier, the second is the data tier, and the third monitors the application's performance using Grafana, which uses the InfluxDB service as its data source.

Here, the app tier hosts several services: a metrics monitor, a metrics collector, and a WSGI service. They are controlled and managed by Supervisord.

When the app container starts, Supervisord is launched by the ENTRYPOINT script, and Supervisord then starts the remaining processes.
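
A sketch of how the app image might wire this up; the base image, packages, and paths are assumptions:

```dockerfile
FROM python:3.9-slim

# install Supervisord and the WSGI server alongside the app's dependencies
RUN pip install supervisor gunicorn

COPY supervisord.conf /etc/supervisor/supervisord.conf
COPY . /opt/app
WORKDIR /opt/app

# Supervisord runs in the foreground (nodaemon=true) as PID 1
# and spawns the programs defined in its configuration
ENTRYPOINT ["supervisord", "-c", "/etc/supervisor/supervisord.conf"]
```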

Docker-compose takes care of provisioning the services in order and maintaining their dependencies, and it exposes the service ports as per the definition. A persistent volume is mounted on the container for storing database files, Grafana dashboards, and application code. Another benefit of using a persistent volume in a development environment is that we do not need to rebuild the service for every code change.

Say we are developing our application using Flask in debug mode: it detects changes in the source code and reloads the app without a restart. This avoids unnecessary builds and makes development much easier. We can directly use the repo path as the persistent volume mount path in a local environment.
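
For example, a compose fragment for this workflow might look as follows (the paths and service name are assumptions):

```yaml
services:
  app:
    build: ./app
    environment:
      - FLASK_DEBUG=1       # Flask's reloader picks up source changes
    volumes:
      - ./app:/opt/app      # local repo mounted over the image's code
```

With the repo mounted over the image's code path, an edit on the host is immediately visible inside the container, and Flask's reloader restarts the app process without rebuilding the image.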

Enjoy Learning!
