Journey to Containers - Part I
If you're new to Docker and containers, welcome. We'll help get you started.
About six months back, I received an opportunity to work with containers. Since then I've gone through a lot of documentation on containers, their history, and eventually Docker and Kubernetes. There is a lot of community work going on to promote the container ecosystem, and I am fascinated to see so many new tools, and enhancements to existing tools, coming to market to support it, mainly from the CI/CD, security, monitoring, and orchestration perspectives.
I have been working in the DevOps space for a while and have used various tools and technologies, with most of my work in designing and setting up CI/CD pipelines: tool integrations, automation, defining processes, user training, and so on.
Setting up a pipeline for deploying containerized applications was a big challenge to begin with. As I started reading various documentation to gain insight into containers, I was overwhelmed by all the information available online. To get a better hold of this new adventure, I wanted to do a POC that deploys an application through a CI/CD pipeline to Kubernetes, with the goal of understanding the intricacies, integrations, and handoffs between tools and processes that a developer or DevOps engineer needs to go through.
As I continue to work with Docker and Kubernetes, I decided to contribute my learnings to the community, as they could help newbies like me get their hands dirty with Docker and Kubernetes and begin their journey.
I am planning to divide these sessions into multiple sections, with a focus on deploying applications through a CI/CD pipeline, various configurations, a few points on security and audit controls, and some information on Docker/Kubernetes objects and architecture components.
Let's begin by exploring containers at a high level and then move on to Docker and Kubernetes.
By analogy with virtual machines, a container can be thought of as a very lightweight running instance of a VM with no hypervisor involved. Every container is an independent unit with its own behavior. To spin up a VM, you either provide instructions step by step in the user interface or use a template (image). In the same way, an image is needed to create a container. The image includes setup instructions such as the operating system to use, software along with its configuration, and the command to run when the container starts; this command is called the entrypoint. See the difference between VMs and containers here.
Containers are not new; however, packaging applications within containers and delivering the same container without any underlying dependencies is new and very well accepted. The credit goes to technologies like Docker, which brought containers into mainstream development. Docker provides a standard, easy way to create portable images with a file called a "Dockerfile." Docker also provides a complete ecosystem, which includes the Docker engine, API, CLI, and pluggable interfaces for networking, storage, monitoring, and logging. It offers an improved and simplified way of building, testing, packaging, and deploying software on both Linux and Windows environments.
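To make the "Dockerfile" idea concrete, here is a minimal sketch for a Python web application like the one we'll build below. The base image, file names, and port here are assumptions for illustration only; the actual image build for our application is the subject of Part II.

```dockerfile
# Sketch only: assumes a python:2.7-slim base image and that app.py
# and requirements.txt sit in the build context
FROM python:2.7-slim
WORKDIR /app
COPY requirements.txt app.py ./
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 9000
# The entrypoint: the command run when the container starts
CMD ["python", "app.py"]
```

Each instruction maps to one of the "setup instructions" described above: the OS and runtime (FROM), the software and configuration (COPY, RUN), and the command to run on start (CMD).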
For ease of understanding, I'll be using a simple application that Docker provides in its documentation here, with minor modifications to add more configuration. The goal is to take this simple application all the way to Kubernetes through a pipeline workflow and see how it transitions through multiple stages and various configurations.
1. Create an Application Locally and Run It in the Browser
I am using Ubuntu with Docker version 18.03.1-ce, so most of the instructions below were executed on Ubuntu. Instructions may vary for other Linux distributions.
Requirements to complete the first session:
- Any Linux flavor of OS (with internet access to download files)
- Docker host (server and client)
- A visual code editor (optional) or your preferred editor for ease of making edits
To migrate any application through a pipeline, the first thing we need to know, at a high level, is everything required to build it: dependent libraries, build tools, configuration, and any other mandatory prerequisites such as a specific OS version (although most applications are platform independent). Second, we look at how to run and test the application after installation; this need not be full-fledged testing.
For the current application, here are the details:
- This is a web application written in Python
- It requires Python 2.7 or higher, a Redis database, and the Flask framework
- The application is platform independent; however, we'll be running it on Ubuntu
- It requires pip (the Python package installer) to install the Flask and Redis packages
- As written below, the application listens on port 9000; some of its display settings are configurable via environment variables
- The Redis DB itself runs on port 6379 by default
- The web page can run on multiple hosts, all connecting to a single Redis database instance
The application displays the hostname of the machine it runs on and keeps a count of page visits in the Redis database.
As you can see, this application is simple, but it has two tiers and communicates with a database.
Let's bring up the application locally first (no Docker usage). Follow these steps:
1. Create a file “app.py” and add the code below. Comments have been added for ease of understanding specific sections.
from flask import Flask
from redis import Redis, RedisError
import os
import socket

# Connect to Redis
redis = Redis(host="localhost", db=0, socket_connect_timeout=2, socket_timeout=2)

app = Flask(__name__)

@app.route("/")
def hello():
    try:
        # Increment counter on web page access / refresh
        visits = redis.incr("counter")
    except RedisError:
        # If the Redis database is not reachable, this message appears on the web page
        visits = "<i>Cannot connect to Redis, counter disabled</i>"

    html = "<body bgcolor={bg_color}>" \
           "<h3> {name} ! </h3>" \
           "<b>Hostname:</b> {hostname}<br/>" \
           "<b>Visits:</b> {visits}" \
           "</body>"

    # Environment properties:
    #   bg_color -> background color of the web page
    #   name     -> user-defined text shown on the web page
    #   hostname -> extracted from the system where the application runs
    #   visits   -> number of times the page has been accessed
    return html.format(bg_color=os.getenv("BGCOLOR", "Green"),
                       name=os.getenv("NAME", "Hello Docker world"),
                       hostname=socket.gethostname(),
                       visits=visits)

# The application is accessed on port 9000; make sure this port is free on your machine
if __name__ == "__main__":
    app.run(host='0.0.0.0', port=9000)
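The environment-variable handling in hello() can be tried on its own, outside Flask, with a small stand-alone snippet (standard library only; the HTML template here is shortened for illustration):

```python
import os

# Shortened version of the template in app.py (illustration only)
html = "<body bgcolor={bg_color}><h3> {name} ! </h3></body>"

def render():
    return html.format(bg_color=os.getenv("BGCOLOR", "Green"),
                       name=os.getenv("NAME", "Hello Docker world"))

# Make sure the variables are unset so the defaults apply
os.environ.pop("BGCOLOR", None)
os.environ.pop("NAME", None)
print(render())  # <body bgcolor=Green><h3> Hello Docker world ! </h3></body>

# Override via environment variables, as we'll later do in container configuration
os.environ["BGCOLOR"] = "Blue"
os.environ["NAME"] = "Hi there"
print(render())  # <body bgcolor=Blue><h3> Hi there ! </h3></body>
```

This os.getenv pattern, with a sensible default and an optional override, is exactly what makes the page's color and text configurable without touching the code.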
2. Create “requirements.txt” and add the packages the application needs:
Flask
Redis
3. Install Flask and Redis using pip:
$sudo pip install --trusted-host pypi.python.org -r requirements.txt
Note - If you don’t have pip installed, follow the instructions here.
4. Install and configure the Redis database if you don’t already have it installed.
For Ubuntu, follow these steps; detailed instructions are here.
$sudo apt update
$sudo apt install redis-server
$sudo nano /etc/redis/redis.conf # ( change “supervised no” to “supervised systemd” )
$sudo systemctl restart redis.service
5. Run the application:
$python app.py
Output:
* Serving Flask app "app" (lazy loading)
* Environment: production
WARNING: Do not use the development server in a production environment.
Use a production WSGI server instead.
* Debug mode: off
* Running on http://0.0.0.0:9000/ (Press CTRL+C to quit)
As you can see, the server started listening on port 9000 for requests.
6. Test that the application is working as expected
Open a browser and access http://localhost:9000.
You should see a page with a green background, the hostname of the machine the application is running on, and the visits counter.

If there is an error accessing Redis, you’ll see this message in place of the visits counter:
“Cannot connect to Redis, counter disabled.”
In this case, validate that the Redis server is running and that the application code sets host to “localhost” in the “Connect to Redis” section.
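If you're not sure whether Redis is up, a quick TCP probe from Python tells you whether anything is listening on its port. is_port_open is a hypothetical helper using only the standard library; it verifies only that the port accepts connections, not that Redis itself is healthy.

```python
import socket

def is_port_open(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection opens and returns a socket, or raises OSError on failure
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Redis listens on port 6379 by default
print("Redis reachable:", is_port_open("localhost", 6379))
```

If this prints False, restart Redis (sudo systemctl restart redis.service) and probe again before retrying the application.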
Now we know how to run the application locally and what its requirements are. This knowledge will help us Dockerize the application.
In Part II of this article, we'll package this application as a Docker image and spin up a container to bring up the application and reveal the power of Docker.