Running Test Environments in Docker With Go
How to set up a test environment for a Go application by using Docker in a continuous delivery environment.
In the last five years a lot has changed thanks to technologies for building isolated test and development environments. Still, setting up a stable test environment is not a simple task, and if you also need to test the interaction of networked components and find the maximum load they can handle, the task becomes even harder. Add the requirements of fast environment deployment and flexible configuration of individual components, and you get a small but interesting project.
In this article we describe the Docker-based test environment for our client-server application. Along the way, the article should serve as a good illustration of Docker containers and the ecosystem around them.
Problem Statement
Our application collects, analyzes, and stores all kinds of log files. The main goal of the environment is to perform initial load testing.
So, the situation is as follows:
- Our service is written in Go and has a client-server architecture.
- The service can write data to multiple storages, so multiple worker instances can read the data in parallel. This is very important for building the test environment.
- Developers need a way to troubleshoot the test environment quickly and safely.
- We have to test the interaction of networked components in a distributed environment spanning several nodes. For that, we need to analyze the traffic flow between clients and servers.
- We need to control resource consumption and make sure the daemon remains stable under high load.
- And, of course, we want to see all relevant metrics both in real time and in the test results.
As a result, we decided to build the test environment on Docker and its related technologies. This lets us meet all the requirements and use hardware resources efficiently, without buying a separate server for each component. Here, "hardware resources" can mean a dedicated server, a set of servers, or even a developer's laptop.
Test Environment Architecture
To start, let's consider the architecture's main components:
- An arbitrary number of server instances of our application.
- An arbitrary number of agents.
- Separate environments with data storages such as ElasticSearch, MySQL, or PostgreSQL.
- A load generator (we implemented a simple stress generator, but any other would do, for example Yandex.Tank or Apache Benchmark).
The test environment should be easy to scale and maintain.
We built the distributed network environment with the help of Docker containers, which isolate internal and external services, and docker-machine, which lets us create an isolated test environment. As a result, the test environment architecture looks like this:
For environment visualization we use Weave Scope, a very convenient and well-organized service for monitoring Docker containers.
With this approach it is convenient to test the interaction of SOA components, for example small client-server applications like ours.
Basic Environment Establishment
Now let's walk in depth through every step of building the test environment on Docker containers, using docker-compose and docker-machine.
Let's start with docker-machine, which creates an isolated virtual test environment that is easy to work with directly from the host system.
Now, let's create the test machine:
$ docker-machine create -d virtualbox testenv
Creating VirtualBox VM...
Creating SSH key...
Starting VirtualBox VM...
Starting VM...
To see how to connect Docker to this machine, run: docker-machine env testenv
This command creates a VirtualBox VM with boot2docker and Docker installed. (If you work on Windows or macOS, it is recommended to install Docker Toolbox, which already includes all of this. If you work on Linux, you have to install docker, docker-machine, docker-compose, and VirtualBox manually.) We recommend learning more about docker-machine's other capabilities, because it is a powerful tool for environment management.
As we can see from the output, docker-machine creates all the components required to work with the virtual machine. Once created, the virtual machine is started and ready for work. Let's check it:
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
testenv virtualbox Running tcp://192.168.99.101:2376
Once the virtual machine is up, we need to enable access to it in the current shell session. Let's go back to the previous step and look carefully at the last line:
To see how to connect Docker to this machine, run: docker-machine env testenv
This is the setup hint for our session. Running the command gives the following:
$ docker-machine env testenv
export DOCKER_TLS_VERIFY="1"
export DOCKER_HOST="tcp://192.168.99.101:2376"
export DOCKER_CERT_PATH="/Users/logpacker/.docker/machine/machines/testenv"
export DOCKER_MACHINE_NAME="testenv"
# Run this command to configure your shell:
# eval "$(docker-machine env testenv)"
This is just a set of environment variables that tell your local docker client how to find the server. The last line contains a hint; let's run that command and look at the ls output again:
$ eval "$(docker-machine env testenv)"
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
testenv * virtualbox Running tcp://192.168.99.101:2376
In the ACTIVE column, the active machine is marked with an asterisk. Note that a machine is active only within the current session: we can open another terminal window and activate a different machine there. This can be convenient for testing orchestration with Swarm, but that's a topic for a separate article :)
Now, let’s check our docker-server:
$ docker version
Client:
Version: 1.8.0
API version: 1.20
Go version: go1.4.2
Git commit: 0d03096
Built: Tue Aug 11 17:17:40 UTC 2015
OS/Arch: darwin/amd64
Server:
Version: 1.9.1
API version: 1.21
Go version: go1.4.3
Git commit: a34a1d5
Built: Fri Nov 20 17:56:04 UTC 2015
OS/Arch: linux/amd64
Note the OS/Arch fields: the server side always shows linux/amd64, because the docker server runs inside the VM. Don't forget about that.
Let's take a step back and look inside the VM:
$ docker-machine ssh testenv
## .
## ## ## ==
## ## ## ## ## ===
/"""""""""""""""""\___/ ===
~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ / ===- ~~~
\______ o __/
\ \ __/
\____\_______/
_ _ ____ _ _
| |__ ___ ___ | |_|___ \ __| | ___ ___| | _____ _ __
| '_ \ / _ \ / _ \| __| __) / _` |/ _ \ / __| |/ / _ \ '__|
| |_) | (_) | (_) | |_ / __/ (_| | (_) | (__| < __/ |
|_.__/ \___/ \___/ \__|_____\__,_|\___/ \___|_|\_\___|_|
Boot2Docker version 1.9.1, build master : cef800b - Fri Nov 20 19:33:59 UTC 2015
Docker version 1.9.1, build a34a1d5
docker@testenv:~$
This is boot2docker, but our subject of interest is elsewhere. Let's look at the mounted partitions:
docker@testenv:~$ mount
tmpfs on / type tmpfs (rw,relatime,size=918088k)
proc on /proc type proc (rw,relatime)
sysfs on /sys type sysfs (rw,relatime)
devpts on /dev/pts type devpts (rw,relatime,mode=600,ptmxmode=000)
tmpfs on /dev/shm type tmpfs (rw,relatime)
fusectl on /sys/fs/fuse/connections type fusectl (rw,relatime)
/dev/sda1 on /mnt/sda1 type ext4 (rw,relatime,data=ordered)
[... cgroup skipped ...]
none on /Users type vboxsf (rw,nodev,relatime)
/dev/sda1 on /mnt/sda1/var/lib/docker/aufs type ext4 (rw,relatime,data=ordered)
docker@testenv:~$ ls /Users/
Shared/ logpacker/
docker@testenv:~$
In this case the host is macOS, so the /Users directory (the analog of /home on Linux) is mounted inside the machine. This lets us work with host-system files transparently from Docker: we can attach and detach volumes easily without worrying about the VM layer. In theory we can forget about the VM entirely; we need it only so that Docker runs in its "native" Linux environment, and the docker client works completely transparently. The basic environment is ready; now we have to run the Docker containers.
Setting Up and Running Containers
Our application can work as a cluster, which keeps the system fault-tolerant when the number of nodes changes. Thanks to the internal service API, adding and removing cluster nodes does not require restarting the other nodes. We need to take this feature into account while building the environment.
All in all, it fits the Docker ideology of "one process, one container", so we decided to follow this approach. First we need to run the following configuration:
- Three containers with the server part of the application.
- Three containers with the client part of the application.
- A load generator for each agent. As an example, we will use Yandex.Tank and Apache Benchmark against Nginx, which will generate the logs.
- Our service can also work in "dual mode", with client and server on the same host in a single application instance. We will run it in a container under supervisord control, and in the same container we will run our own load generator as the main process.
Thanks to Go, our application executable is a single file, which lets us create a universal container for running the service in the test environment. There are some nuances in the last step, when running the service in "dual mode"; more details on that a bit later.
Now we prepare docker-compose.yml, the file with directives for docker-compose that will let us bring up the whole test environment with a single command:
# external services
elastic:
  image: elasticsearch
ngx_1:
  image: nginx
  volumes:
    - /var/log/nginx
ngx_2:
  image: nginx
  volumes:
    - /var/log/nginx
ngx_3:
  image: nginx
  volumes:
    - /var/log/nginx
# lp servers
lp_server_1:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -s -v -devmode -p=0.0.0.0:9995"
  links:
    - elastic
  expose:
    - "9995"
    - "9998"
    - "9999"
lp_server_2:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -s -v -devmode -p=0.0.0.0:9995"
  links:
    - elastic
    - lp_server_1
  expose:
    - "9995"
    - "9998"
    - "9999"
lp_server_3:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -s -v -devmode -p=0.0.0.0:9995"
  links:
    - elastic
    - lp_server_1
    - lp_server_2
  expose:
    - "9995"
    - "9998"
    - "9999"
# lp agents
lp_agent_1:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -a -v -devmode -p=0.0.0.0:9995"
  volumes_from:
    - ngx_1
  links:
    - lp_server_1
lp_agent_2:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -a -v -devmode -p=0.0.0.0:9995"
  volumes_from:
    - ngx_2
  links:
    - lp_server_1
lp_agent_3:
  image: logpacker_service
  command: bash -c "cd /opt/logpacker && ./logpacker -a -v -devmode -p=0.0.0.0:9995"
  volumes_from:
    - ngx_3
  links:
    - lp_server_1
This file is fairly standard. First we run elasticsearch as the main storage, then three nginx instances, which will act as the sources of load (their log files). After that we run the server applications. Note that each subsequent container is linked to the previous ones; within the docker network this lets containers address each other by name. We will come back to this and look at it more closely when we review running our service in "dual mode". The first server-application container is linked to the agents, which means all three agents will send their logs to that particular server.
Our application is designed so that, to join the cluster, a new agent or server only needs to be told about one existing cluster node; from that node it receives full information about the system. In the configuration files for each server instance we point to the first node, so the agents automatically learn the current state of the whole system. Once all nodes are running, we could even stop that first instance: the cluster remains safe, because the system information is distributed among all participants. Also pay attention to how volumes are mounted: on the nginx containers we declare the volume explicitly, which makes it accessible within the docker network, while on the agent containers we simply attach it with volumes_from by container name. As a result, the load producers and consumers share a volume.
Let’s run our environment:
$ docker-compose up -d
Let’s check that everything is working properly:
$ docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------
assets_lp_agent_1_1 bash -c cd /opt/logpacker ... Up
assets_lp_agent_2_1 bash -c cd /opt/logpacker ... Up
assets_lp_agent_3_1 bash -c cd /opt/logpacker ... Up
assets_lp_server_1_1 bash -c cd /opt/logpacker ... Up 9995/tcp, 9998/tcp, 9999/tcp
assets_lp_server_2_1 bash -c cd /opt/logpacker ... Up 9995/tcp, 9998/tcp, 9999/tcp
assets_lp_server_3_1 bash -c cd /opt/logpacker ... Up 9995/tcp, 9998/tcp, 9999/tcp
assets_ngx_1_1 nginx -g daemon off; Up 443/tcp, 80/tcp
assets_ngx_2_1 nginx -g daemon off; Up 443/tcp, 80/tcp
assets_ngx_3_1 nginx -g daemon off; Up 443/tcp, 80/tcp
elastic /docker-entrypoint.sh elas ... Up 9200/tcp, 9300/tcp
Great: the environment is up, everything works, and all ports are forwarded. In theory we could start testing, but a few pieces are still unfinished.
Building the Dual-Mode Container
Let's turn back to the container in which we want to run our application in "dual mode". The main process in this container will be the load generator (a simple shell script): it generates text lines and collects them into text "log" files, which will serve as the load for our application. First we need to build a container with our application running under supervisord. We need a recent supervisord, because we rely on passing environment variables into the configuration file; version 3.2.0 is good enough, but the one shipped with our base image, Ubuntu 14.04 LTS, is old (3.0b2), so we install a recent version through pip.
Final Dockerfile looks like:
FROM ubuntu:14.04
# Setup locale environment variables
RUN locale-gen en_US.UTF-8
ENV LANG en_US.UTF-8
ENV LANGUAGE en_US:en
ENV LC_ALL en_US.UTF-8
# Ignore interactive
ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update && \
apt-get install -y wget unzip curl python-pip
# Install supervisor via pip for latest version
RUN pip install supervisor
RUN mkdir -p /opt/logpacker
ADD final/logpacker /opt/logpacker/logpacker
ADD supervisord-logpacker-server.ini /etc/supervisor/conf.d/logpacker.conf
ADD supervisor.conf /etc/supervisor/supervisor.conf
# Load generator
ADD random.sh /opt/random.sh
# Start script
ADD lp_service_start.sh /opt/lp_service_start.sh
The load generator is quite simple:
#!/bin/bash
# generate random lines
OUTPUT_FILE="test.log"
while true
do
    _RND_LENGTH=$(awk -v min=1 -v max=100 'BEGIN{srand(); print int(min+rand()*(max-min+1))}')
    _RND=$(( ( RANDOM % 100 ) + 1 ))
    _A="[$RANDOM-$_RND] $(dd if=/dev/urandom bs=$_RND_LENGTH count=1 2>/dev/null | base64 | tr = d)"
    echo "$_A"
    echo "$_A" >> /tmp/logpacker/lptest.$_RND.$OUTPUT_FILE
done
The startup script is simple, too:
#!/bin/bash
# run daemon
supervisord -c /etc/supervisor/supervisor.conf
# launch randomizer
/opt/random.sh
The trick is in the supervisord configuration file and in how the docker container is started.
Let’s examine configuration file:
[program:logpacker_daemon]
command=/opt/logpacker/logpacker %(ENV_LOGPACKER_OPTS)s
directory=/opt/logpacker/
autostart=true
autorestart=true
startretries=10
stderr_logfile=/var/log/logpacker.stderr.log
stdout_logfile=/var/log/logpacker.stdout.log
Pay attention to %(ENV_LOGPACKER_OPTS)s. supervisord can substitute environment variables into its configuration file: a variable written as %(ENV_VAR_NAME)s is replaced with its value when the daemon starts.
$ docker run -it -d --name=dualmode --link=elastic -e 'LOGPACKER_OPTS=-s -a -v -devmode' logpacker_dualmode /opt/random.sh
The -e flag sets an environment variable globally inside the container. This is exactly the variable we substitute into the supervisord configuration file, which lets us control the daemon's start flags and run it in the required mode.
We have got a universal container, even if it doesn't fully follow the one-process-per-container ideology. Let's look inside:
$ docker exec -it dualmode bash
$ env
HOSTNAME=6b2a2ae3ed83
ELASTIC_NAME=/suspicious_dubinsky/elastic
TERM=xterm
ELASTIC_ENV_CA_CERTIFICATES_JAVA_VERSION=20140324
LOGPACKER_OPTS=-s -a -v -devmode
ELASTIC_ENV_JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
ELASTIC_ENV_JAVA_VERSION=8u66
ELASTIC_ENV_ELASTICSEARCH_REPO_BASE=http://packages.elasticsearch.org/elasticsearch/1.7/debian
ELASTIC_PORT_9200_TCP=tcp://172.17.0.2:9200
ELASTIC_ENV_ELASTICSEARCH_VERSION=1.7.4
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
ELASTIC_PORT_9300_TCP_ADDR=172.17.0.2
ELASTIC_ENV_ELASTICSEARCH_MAJOR=1.7
ELASTIC_PORT_9300_TCP=tcp://172.17.0.2:9300
PWD=/
ELASTIC_PORT_9200_TCP_ADDR=172.17.0.2
ELASTIC_PORT_9200_TCP_PROTO=tcp
ELASTIC_PORT_9300_TCP_PORT=9300
SHLVL=1
HOME=/root
ELASTIC_ENV_JAVA_DEBIAN_VERSION=8u66-b17-1~bpo8+1
ELASTIC_PORT_9300_TCP_PROTO=tcp
ELASTIC_PORT=tcp://172.17.0.2:9200
LESSOPEN=| /usr/bin/lesspipe %s
ELASTIC_ENV_LANG=C.UTF-8
LESSCLOSE=/usr/bin/lesspipe %s %s
ELASTIC_PORT_9200_TCP_PORT=9200
_=/usr/bin/env
Besides the variable we set at container start, we also see all the variables related to the linked container: its IP address, all open ports, and all variables set with the ENV directive when the elasticsearch image was built. Each variable is prefixed with the uppercased name of the linked container, plus a suffix indicating its meaning. For example, ELASTIC_PORT_9300_TCP_ADDR holds the IP address of the container linked under the name elastic, whose port 9300 is open. While this doesn't scale to a full discovery service, it is a great way to get the IP addresses and metadata of linked containers, and you can use these variables in your own applications running in Docker containers.
Containers Managing and Monitoring System
We have built a test environment that meets all our initial requirements; only a couple of details remain. First, let's install Weave Scope (screenshots can be found at the beginning of the article). Weave Scope visualizes the environment we work in: besides showing the connections and container information, we can attach to any container and run a full terminal with sh right in the browser. These functions are extremely useful for debugging and testing. Within our active session, run the following from the host machine:
$ wget -O scope https://github.com/weaveworks/scope/releases/download/latest_release/scope
$ chmod +x scope
$ ./scope launch
After running these commands, go to http://VM_IP:4040 . This is the container-management interface:
Almost everything is ready; the only thing left is the monitoring system. Let's use cAdvisor by Google:
$ docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8080:8080 --detach=true --name=cadvisor google/cadvisor:latest
At http://VM_IP:8080 we now have a real-time resource monitoring system, where we can watch and analyze the main metrics of our environment, such as:
- System resource usage.
- Network load.
- The task list.
- Other useful information.
The cAdvisor interface is shown in the picture below:
Conclusion
Using Docker containers, we have built a fully featured test environment with automatic deployment and interacting network nodes, and, more importantly, with flexible configuration of each component and of the system as a whole.
All the main requirements are met:
- Full network emulation for testing network interaction.
- Nodes are added and removed by editing docker-compose.yml and applying the change with a single command.
- All nodes can get full information about the network environment.
- Storages are added and removed with a single command.
- The system can be monitored and managed from the browser, using tools deployed in containers alongside our application, which keeps them isolated from the host system.