
Journey to Containers - Part II


In this one, we continue with Docker images and containers.


This is the second part and a continuation of the first article, “Journey to Containers - Part I.” Please make sure you read Part I so you can follow along with what we do in Part II.

In this section, we are going to package the Python application from Part I inside a Docker image and then run the application as a container in a standalone Docker environment.

As you know, this application has two components: a web application (web page) and a database (Redis) that keeps track of page visits. We’ll run both components as containers and then connect them over a Docker-provided network so they can talk to each other.

In order to package an application as a Docker image, we have to create a Dockerfile. A Dockerfile is the standard way to define a Docker image, and the resulting images are portable across Linux platforms.

At a high level, a Dockerfile is similar to an application manifest file: it specifies everything your application needs, including the application code to package, configurations, environment variables, and the entrypoint or command to run when the container starts.

In many cases, to package your application you’ll start with a “parent image.” The parent image acts as the starting point (remember the lightweight VM?) on top of which you add your application. Every instruction in a Dockerfile adds a layer on top of this parent image, finally producing your “application image.” For most applications, the parent image is some kind of OS image, such as Ubuntu, Debian, or RHEL; in other cases, the parent image is additional software layered on top of an OS image, like an OpenJDK or Python image. OS images normally don't have any parent image because they are typically created from scratch. Such images are called “base images.”

Refer to the Docker link for more information on “parent” and “base” image terminologies.

In the container world, it is critical to follow best practices while building your images and to keep the image size as small as possible. The main reasons are to limit the surface exposed to external attacks, minimize the time it takes to spin up a container, and save space.
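As an illustration of these practices (a sketch, not part of this article's app), choosing a slim parent image and chaining related commands into a single RUN instruction both help keep the final image small:

```dockerfile
# A slim parent image keeps the starting point small
FROM python:2.7-slim

# Chaining commands into one RUN creates a single layer instead of several,
# and cleaning up in the same step means the removed files never persist in
# any layer. --no-cache-dir stops pip from keeping downloaded packages.
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/* \
    && pip install --no-cache-dir flask redis
```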

Let’s work on the application now and create an application image for our Python web app from Part I. Make sure you understand how the application is installed locally, because that will help us write the Dockerfile instructions.

Package the Application as a Docker Image and Run It in the Browser

  1. Make sure you have the two files created in Part I, app.py and requirements.txt, in the current application directory.
  2. Edit app.py and change the Redis host name from “localhost” to “redis”. This change is needed because we’ll be using a Redis container as opposed to running Redis on localhost. See the change below:
# Connect to Redis
redis=Redis(host="redis", db=0, socket_connect_timeout=2, socket_timeout=2)


3. Create a file named “Dockerfile” in the same directory alongside the source code. While writing the Dockerfile, follow the provided guidelines and best practices to keep the image size as small as possible.

Open the editor and add the instructions below to the Dockerfile. I have provided comments wherever needed to make the instructions self-explanatory. Now that we know all the requirements to install and run the application, writing the Dockerfile is straightforward.

# Use the official Python image from Docker Hub. This acts as the parent
# image for our application
FROM python:2.7-slim

# Add a label identifying the developer or team that owns this image
LABEL MAINTAINER developer_name

# Set the working directory for the application. This will be the default
# directory when the container starts
WORKDIR /app

# Copy the source code and required files to the working directory.
# Depending on the application you may need multiple COPY instructions.
# For this application we only need two files
COPY app.py requirements.txt /app/

# Install packages required by the application. Note that this is the same 
# command we executed in Part I
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Document that the application listens on port 9000 so it can be
# published outside the container
EXPOSE 9000

# The NAME environment variable from the code is overridden here so the
# value can be externalized. If this variable is not set in the Dockerfile,
# the default value from the application code is used
ENV NAME Hello from Dockerfile

# The BGCOLOR environment variable from the code is overridden here so the
# value can be externalized. If this variable is not set in the Dockerfile,
# the default value from the application code is used
ENV BGCOLOR blue


# Run this command when the container launches
CMD ["python","app.py"]
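The two ENV instructions above only have an effect because the application reads those variables at runtime. A minimal sketch of how app.py from Part I presumably picks them up (the default values shown here are illustrative assumptions, not the actual values in app.py):

```python
import os

# ENV values set in the Dockerfile override these in-code defaults.
# The variable names NAME and BGCOLOR match the Dockerfile above;
# the default values below are assumptions for illustration only.
name = os.environ.get("NAME", "Hello from code")
bgcolor = os.environ.get("BGCOLOR", "white")

print(name)
print(bgcolor)
```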


4. Build an image with a proper name so that you can identify what the image is for. I am using the name python-webapp and the tag 1.0.0; the tag indicates the version of the image.

Note: Make sure you are in the application directory when building the image.

Execute this Docker command on the prompt:

$docker build -t python-webapp:1.0.0 .


Output:

Sending build context to Docker daemon   5.12kB
Step 1/9 : FROM python:2.7-slim
 ---> 804b0a01ea83
Step 2/9 : LABEL MAINTAINER developer_name
 ---> Running in 02741a734812
Removing intermediate container 02741a734812
 ---> edf1e1d00500
Step 3/9 : WORKDIR /app
Removing intermediate container 6f78b7e2bf6a
 ---> 42b1441bcf88
Step 4/9 : COPY app.py requirements.txt /app/
 ---> 4ecc74631d5c
Step 5/9 : RUN pip install --trusted-host pypi.python.org -r requirements.txt
 ---> Running in 7bb50e7d36e7
Collecting Flask (from -r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting Redis (from -r requirements.txt (line 2))
  Downloading https://files.pythonhosted.org/packages/f5/00/5253aff5e747faf10d8ceb35fb5569b848cde2fdc13685d42fcf63118bbc/redis-3.0.1-py2.py3-none-any.whl (61kB)
Collecting itsdangerous>=0.24 (from Flask->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/76/ae/44b03b253d6fade317f32c24d100b3b35c2239807046a4c953c7b89fa49e/itsdangerous-1.1.0-py2.py3-none-any.whl
Collecting Jinja2>=2.10 (from Flask->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting Werkzeug>=0.14 (from Flask->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting click>=5.1 (from Flask->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/fa/37/45185cb5abbc30d7257104c434fe0b07e5a195a6847506c074527aa599ec/Click-7.0-py2.py3-none-any.whl (81kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->Flask->-r requirements.txt (line 1))
  Downloading https://files.pythonhosted.org/packages/bc/3a/6bfd7b4b202fa33bdda8e4e3d3acc719f381fd730f9a0e7c5f34e845bd4d/MarkupSafe-1.1.0-cp27-cp27mu-manylinux1_x86_64.whl
Installing collected packages: itsdangerous, MarkupSafe, Jinja2, Werkzeug, click, Flask, Redis
Successfully installed Flask-1.0.2 Jinja2-2.10 MarkupSafe-1.1.0 Redis-3.0.1 Werkzeug-0.14.1 click-7.0 itsdangerous-1.1.0
Removing intermediate container 7bb50e7d36e7
 ---> 110116035c9c
Step 6/9 : EXPOSE 9000
 ---> Running in f79dacc2fbc9
Removing intermediate container f79dacc2fbc9
 ---> 5bf7d7ef0a6b
Step 7/9 : ENV NAME Hello from Dockerfile
 ---> Running in 79fd3b5ce635
Removing intermediate container 79fd3b5ce635
 ---> 63ee5ff7dcec
Step 8/9 : ENV BGCOLOR blue
 ---> Running in 274794dc3cbc
Removing intermediate container 274794dc3cbc
 ---> 801b187f12b2
Step 9/9 : CMD ["python","app.py"]
 ---> Running in 68e015aab39b
Removing intermediate container 68e015aab39b
 ---> f16eb149653d
Successfully built f16eb149653d
Successfully tagged python-webapp:1.0.0

Remember, the build context (the first line of the output) consists of all the files in your current directory; they are sent to the Docker daemon and used in the image creation process.
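A related tip (not covered in this article): a .dockerignore file in the same directory keeps unneeded files out of the build context, which speeds up the “Sending build context” step. A minimal example:

```
# .dockerignore - exclude files that should not be sent to the daemon
.git
*.pyc
__pycache__/
```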

5. Execute docker image ls to make sure the python-webapp:1.0.0 image is available.

Output:

REPOSITORY      TAG   IMAGE ID        CREATED         SIZE
python-webapp 1.0.0 f16eb149653d    3 minutes ago     131MB


6. Pull the official redis:5.0 image from Docker Hub. This will be our DB image.

Execute the command below to pull down the image:
$docker pull redis:5.0

Output:
5.0: Pulling from library/redis
Digest: sha256:19f4621c085cb7df955f30616e7bf573e508924cff515027c1dd041f152bb1b6
Status: Downloaded newer image for redis:5.0


7. Execute docker image ls to ensure both images are now available:

Output:
REPOSITORY      TAG      IMAGE ID      CREATED             SIZE
python-webapp  1.0.0   f16eb149653d   10 minutes ago      131MB
redis          5.0     c188f257942c   2 days ago          94.9MB


8. Create a Docker network:

$docker network create my-network

Both containers will run on this network so they can talk to each other.

The output of this command is the network ID.

Output:
30d3d7c7f23ebd7f5b1e6a3a59cfb2fd85ac248f868f569eaa941762329e3829

Validate that the network was created using the network ls command:

$docker network ls

Output:
NETWORK ID          NAME         DRIVER          SCOPE
34ddece81968        bridge        bridge         local
840c61ba74c7        host          host           local
30d3d7c7f23e        my-network    bridge         local
16cb3498fab2        none          null           local


9. Start the Redis container.

While starting the Redis container, make sure to use the network “my-network” created above:

$docker run --name redis --net my-network -d redis:5.0

Output:
719b41b4bebae578e2af3b1ca033f309d1d1ac5b170c43f22431842e98442eb1

Validate that the Redis container is up and running:

$docker container ls

Output:
CONTAINER ID  IMAGE      COMMAND               CREATED          STATUS     PORTS    NAMES
719b41b4beba  redis:5.0 "docker-entrypoint.s…" 2 minutes ago  Up 2 minutes 6379/tcp redis


Check logs to make sure there are no errors:

$docker logs redis

Output:
1:C 18 Nov 2018 18:45:14.125 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1:C 18 Nov 2018 18:45:14.125 # Redis version=5.0.1, bits=64, commit=00000000, modified=0, pid=1, just started
1:C 18 Nov 2018 18:45:14.125 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf
1:M 18 Nov 2018 18:45:14.127 * Running mode=standalone, port=6379.
1:M 18 Nov 2018 18:45:14.127 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
1:M 18 Nov 2018 18:45:14.127 # Server initialized
1:M 18 Nov 2018 18:45:14.128 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
1:M 18 Nov 2018 18:45:14.128 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
1:M 18 Nov 2018 18:45:14.128 * Ready to accept connections


10. Now start the application container.

Again, make sure to use “my-network” when starting the container:

$docker run -d --name python-webapp --net my-network -p9000:9000 python-webapp:1.0.0

Output:
f29f79f30a936cf81cc0aa6df9443475c4d191ba39b7d03abb0446ba9cfdffe1

The above command attaches the container named python-webapp to the network “my-network” and publishes (-p) container port 9000, mapping it to host port 9000. The host port can be different from 9000 as well.

For example, you could run the command below instead of the one above and change the host port:

$docker run -d --name python-webapp --net my-network -p80:9000 python-webapp:1.0.0
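As a mental model for the -p flag (a simplified sketch, not Docker's actual parser), the HOST:CONTAINER spec splits into the host port clients connect to and the container port the application listens on:

```python
def parse_publish_spec(spec):
    # "80:9000" means: traffic to host port 80 is forwarded
    # to port 9000 inside the container
    host_port, container_port = spec.split(":")
    return int(host_port), int(container_port)

# With -p80:9000 the app is reached at http://localhost:80,
# while inside the container it still listens on 9000
print(parse_publish_spec("80:9000"))
```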

Validate that the application container is up and running:

$docker container ls

Output:
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS              PORTS                    NAMES
f29f79f30a93        python-webapp:1.0.0   "python app.py"          3 minutes ago       Up 3 minutes        0.0.0.0:9000->9000/tcp   python-webapp
719b41b4beba        redis:5.0             "docker-entrypoint.s…"   14 minutes ago      Up 14 minutes       6379/tcp                 redis

Check the logs to make sure there are no errors and the web server is ready to process requests on port 9000:

$docker logs python-webapp

Output:

* Serving Flask app "app" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: off
 * Running on http://0.0.0.0:9000/ (Press CTRL+C to quit)


11. Go to the browser and access http://localhost:9000.


[Screenshot: the web page with a blue background, the greeting text, and the visits counter]

As you can see, the background color is blue, which is set in the Dockerfile, and the same goes for the text on the web page. The application successfully connects to the Redis database, so the visits counter shows the number of visits; as you refresh the page, the counter increments.

Another interesting piece of information is that the hostname is the same as the application container ID, “f29f79f30a93,” which indicates the application is running inside the container.
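The page can show that ID because, inside a container, the default hostname is the short container ID. The Flask app presumably looks it up with something like:

```python
import socket

# Inside a container the kernel hostname defaults to the short
# container ID (e.g. "f29f79f30a93"); on a regular machine this
# returns the machine's own hostname instead.
hostname = socket.gethostname()
print(hostname)
```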

These two images can be used on any Linux distro, and the application will come up seamlessly. That's the power of Docker: it gives the application a self-contained environment with no dependency on host components, so the application runs exactly the same on any Linux distro. Images cannot be edited once built; every image has its own SHA-256 digest, and any change to an image changes this digest.

In addition, you can spin up multiple instances of this application as long as the published host ports and the container names are unique. As an exercise, try creating multiple instances of this application yourself.

Something to keep in mind here is that we are running the application in a standalone Docker instance. We have just two containers, so they are easy to manage.

As the number of application containers grows and communication within the application becomes complex, a standalone Docker instance becomes cumbersome for running your applications. What you really need is a container orchestration platform that lets you easily manage your containers: their lifecycle, self-healing, managed networking, and so on. That is why Docker provides Docker Swarm, a group of machines connected together to form a cluster. Various other container orchestration platforms are available, among which Kubernetes has emerged as the de facto standard.

In the next section, we’ll use Kubernetes as the orchestration platform for running our application.



Opinions expressed by DZone contributors are their own.
