What is Docker?
You've heard the term and been surrounded by the hype, but if you're just looking for a simple explanation, you've come to the right article.
Docker is not a new term to most of us; it's everywhere. But what exactly is Docker?
Quite simply, Docker is a software containerization platform: you build your application, package it along with its dependencies into a container, and that container can then be easily shipped to and run on other machines.
Okay, but what is containerization?
Containerization, also called container-based virtualization and application containerization, is an OS-level virtualization method for deploying and running distributed applications without launching an entire VM for each application. Instead, multiple isolated systems, called containers, are run on a single control host and access a single kernel.
A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings.
So the main aim is to package the software into standardized units for development, shipment, and deployment.
For example, suppose there's a Linux application written in Scala and R. To avoid version conflicts among the Linux libraries, Scala, and R, Docker wraps the application in a container together with the exact versions of everything it depends on. That container can then be deployed on any machine running Docker without any version hassle.
Now, all we need to do is run the container, without worrying about the software and libraries it depends on.
So, the process is really simple. Each application runs in a separate container with its own set of libraries and dependencies. This also ensures process-level isolation: each application is independent of the others, giving developers assurance that their applications will not interfere with one another.
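To make this concrete, here is a minimal sketch of how such a container could be described. The file name `app.R`, the package choice, and the image tag are all hypothetical; the official `r-base` image is assumed as the base:

```dockerfile
# Hypothetical Dockerfile for the R side of the example application.
# Pinning the base image tag means the container always ships the
# same R runtime, regardless of what is installed on the host.
FROM r-base:4.3.1

# Copy the (hypothetical) application script into the image.
WORKDIR /app
COPY app.R .

# Install dependencies at build time, so the container is
# self-contained and no libraries are needed on the host.
RUN Rscript -e "install.packages('jsonlite')"

# Every machine that runs this container starts the app the same way.
CMD ["Rscript", "app.R"]
```

Building with `docker build -t myapp .` and running with `docker run --rm myapp` would then behave identically on a developer laptop or a production server, because the R version and packages travel inside the image.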
Containers vs. Virtual Machines
Containers are an abstraction at the application layer that packages code and dependencies together. Multiple containers can run on the same machine and share the OS kernel with other containers, each running as isolated processes in userspace. Containers take up less space than VMs (container images are typically tens of MBs in size) and start almost instantly.
With containerization, containers run directly on top of the host OS, each carrying the dependencies and libraries of its own application, which keeps startup and execution fast. There is no guest OS: unlike a virtual machine, a container uses the host's operating system, sharing its kernel, libraries, and resources as and when needed.
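You can see the kernel sharing for yourself on a Linux host with Docker installed. This is an illustrative sketch, not something specific to any one application:

```shell
# On the host, note the kernel version:
uname -r

# Run the same command inside a minimal Alpine container.
# On a Linux host it reports the same kernel version: the container
# did not boot an OS of its own, it shares the host's kernel.
docker run --rm alpine uname -r

# The container also starts in well under a second, because nothing
# boots -- it is just an isolated process on the host.
```

(On macOS or Windows, Docker runs containers inside a lightweight Linux VM, so the second command reports that VM's kernel instead of the host's.)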
Virtual machines (VMs) are an abstraction of physical hardware turning one server into many servers. The hypervisor allows multiple VMs to run on a single machine. Each VM includes a full copy of an operating system, one or more apps, necessary binaries and libraries—taking up tens of GBs. VMs can also be slow to boot.
With virtualization, guest operating systems (the virtual machines) run on top of the host operating system via the hypervisor. Running multiple virtual machines on the same host degrades performance, as each VM carries its own kernel and its own set of libraries and dependencies. This consumes a large share of system resources: hard disk, processor, and especially RAM.
So, that was a quick overview of Docker, containerization, and virtualization.
This article was first published on the Knoldus blog
Published at DZone with permission of Ramandeep Kaur, DZone MVB.