The Evolution of Linux Containers and Their Future
A history of containerization technology starting in 1979, and what the future holds for Docker and similar technologies.
Linux containers are an operating-system-level virtualization technology that provides multiple isolated Linux environments on a single Linux host. Unlike virtual machines (VMs), containers do not run dedicated guest operating systems; instead, they share the host operating system kernel and use the operating system's libraries to provide the required OS capabilities. Because there is no dedicated guest operating system, containers start much faster than VMs.
Image credit: Docker Inc.
Containers make use of Linux kernel features such as namespaces, AppArmor and SELinux profiles, chroot, and cgroups to provide an isolated environment similar to VMs. Linux security modules ensure that access to the host machine and the kernel from containers is properly managed to prevent intrusion. In addition, a container can run a Linux distribution different from the host's, as long as both operating systems can run on the same CPU architecture.
In general, container platforms provide a means of creating container images based on various Linux distributions, an API for managing the container lifecycle, client tools for interacting with the API, and features for taking snapshots, migrating container instances from one host to another, and so on.
Below is a short summary of container history, extracted from Wikipedia and other sources:
1979 — chroot
The concept of containers started way back in 1979 with Unix chroot, a Unix system call that changes the root directory of a process and its children to a new location in the filesystem, visible only to that process. The idea behind this feature was to provide isolated disk space for each process. In 1982, chroot was added to BSD.
2000 — FreeBSD Jails
FreeBSD jails is one of the early container technologies, introduced by Derrick T. Woolworth at R&D Associates for FreeBSD in 2000. It is an operating-system system call similar to chroot, but with additional process sandboxing features for isolating the filesystem, users, networking, and so on. As a result, it could provide a means of assigning an IP address to each jail, custom software installations and configurations, etc.
2001 — Linux VServer
Linux VServer is another jail mechanism that can securely partition resources on a computer system (file system, CPU time, network addresses, and memory). Each partition is called a security context, and the virtualized system within it is called a virtual private server.
2004 — Solaris Containers
Solaris containers were introduced for x86 and SPARC systems, first released publicly in February 2004 in build 51 beta of Solaris 10, and subsequently in the first full release of Solaris 10 in 2005. A Solaris container is a combination of system resource controls and the boundary separation provided by zones. Zones act as completely isolated virtual servers within a single operating system instance.
2005 — OpenVZ
OpenVZ is similar to Solaris containers and makes use of a patched Linux kernel to provide virtualization, isolation, resource management, and checkpointing. Each OpenVZ container has an isolated file system, users and user groups, a process tree, network, devices, and IPC objects.
2006 — Process Containers
Process containers was implemented at Google in 2006 for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. It was later renamed control groups, to avoid confusion with the multiple meanings of the term "container" in the Linux kernel context, and was merged into Linux kernel 2.6.24. This shows how early Google was involved in container technology and how it has contributed back.
2007 — Control Groups
As explained above, control groups (cgroups) were implemented by Google and added to the Linux kernel in 2007.
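The resource limiting that cgroups provide can be illustrated with the CPU controller's CFS bandwidth knobs. The helper below is a hypothetical sketch, assuming the conventional meaning of the `cpu.cfs_quota_us` and `cpu.cfs_period_us` files (a quota of -1 means unlimited; quota divided by period gives the CPU share):

```python
def cpu_limit(cfs_quota_us, cfs_period_us):
    """Derive the CPU cap implied by a cgroup's CFS bandwidth settings.

    The cpu controller exposes cpu.cfs_quota_us and cpu.cfs_period_us
    (both in microseconds). quota=-1 means no limit; quota=50000 with
    period=100000 caps the group's processes at half a CPU combined.
    """
    if cfs_quota_us < 0:
        return None  # -1 means unlimited
    return cfs_quota_us / cfs_period_us

# A group limited to 50ms of CPU time per 100ms period gets 0.5 CPUs.
half_cpu = cpu_limit(50000, 100000)
```

Container runtimes translate user-facing flags (such as a CPU share option) into exactly these kinds of cgroup file writes.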
2008 — LXC
LXC stands for Linux Containers, and it was the first reasonably complete implementation of a Linux container manager. It was implemented using cgroups and Linux namespaces. LXC is delivered in the liblxc library and provides language bindings for its API in Python 3, Python 2, Lua, Go, Ruby, and Haskell. In contrast to other container technologies, LXC works on the vanilla Linux kernel without requiring any patches. Today the LXC project is sponsored by Canonical Ltd. and hosted here.
2011 — Warden
Warden was implemented by CloudFoundry in 2011, initially using LXC and later replacing it with their own implementation. Unlike LXC, Warden is not tightly coupled to Linux; rather, it can work on any operating system that provides a way to isolate environments. It runs as a daemon and provides an API for managing containers. Refer to the Warden documentation and this blog post for more detailed information on Warden.
2013 — LMCTFY
LMCTFY stands for "Let Me Contain That For You." It is the open source version of Google's container stack, which provides Linux application containers. Google started this project with the intention of providing guaranteed performance, high resource utilization, shared resources, over-commitment, and near-zero overhead with containers (ref: LMCTFY presentation). The cAdvisor tool used by Kubernetes today was started as a result of the LMCTFY project. The initial release of LMCTFY was made in October 2013, and in 2015 Google decided to contribute the core LMCTFY concepts and abstractions to libcontainer. As a result, no active development is done in LMCTFY today.
2013 — Docker
Docker is the most popular and widely used container management system as of January 2016. It was developed as an internal project at a platform-as-a-service company called dotCloud and later renamed Docker. Similar to Warden, Docker used LXC in its initial stages and later replaced it with its own library called libcontainer. Unlike any other container platform, Docker introduced an entire ecosystem for managing containers, including a highly efficient layered container image model, global and local container registries, a clean REST API, a CLI, and more. At a later stage, Docker also took the initiative to implement a container cluster management solution called Docker Swarm.
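The layered image model can be illustrated with a toy union of dictionaries. This is a sketch of the idea, not Docker's actual implementation; real images stack layers with a union filesystem (such as AUFS) and use whiteout files to mark deletions, which the `None` values below stand in for:

```python
def flatten_layers(layers):
    """Compute the effective filesystem of a layered image.

    `layers` is ordered base-first; each layer is a dict mapping a
    path to its content. Upper layers shadow lower ones, and a value
    of None plays the role of a whiteout: it deletes the path
    inherited from a lower layer.
    """
    merged = {}
    for layer in layers:
        for path, content in layer.items():
            if content is None:
                merged.pop(path, None)  # whiteout: remove inherited file
            else:
                merged[path] = content  # add or shadow
    return merged
```

Because each layer only records its own changes, many images can share the same base layers on disk, which is a large part of what makes the model efficient.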
2014 — Rocket
Rocket is an initiative much like Docker, started by CoreOS to fix some of the drawbacks they found in Docker. CoreOS has stated that its aim is to meet more rigorous security and production requirements than Docker. More importantly, Rocket is implemented against the App Container specifications, making it a more open standard. In addition to Rocket, CoreOS also develops several other container-related products used by Docker and Kubernetes: the CoreOS operating system, etcd, and flannel.
2016 — Windows Containers
Microsoft also took the initiative, in 2015, to add container support for Windows-based applications to the Microsoft Windows Server operating system, under the name Windows Containers. This is to be released with Microsoft Windows Server 2016. With this implementation, Docker will be able to run containers on Windows natively, without a virtual machine (previously, Docker ran on Windows inside a Linux VM).
The Future of Containers
As of today (January 2016), there is a significant trend in the industry to move from VMs to containers for deploying software applications. The main reasons are the flexibility and low cost containers provide compared to VMs. Google has used container technology for many years with its Borg and Omega container cluster management platforms for running Google applications at scale. More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google may have gained enormously in performance, resource utilization, and overall efficiency from containers over the past years. Very recently, Microsoft, which did not have operating-system-level virtualization on the Windows platform, took immediate action to implement native container support on Windows Server.
Docker, Rocket, and other container platforms cannot run on a single host in a production environment, because that host is a single point of failure: if it fails, every container running on it fails with it. To avoid this, a cluster of container hosts has to be used. One of the first open source container cluster management platforms to solve this problem was Apache Mesos. It was initially developed at the University of California, Berkeley as a research project and moved to Apache around 2012. Google took a similar step in 2014, implementing a cutting-edge open source container cluster management system called Kubernetes, drawing on its experience with Borg. Docker started its own solution, Docker Swarm, in 2015. Today these solutions are at very early stages, and it may take several months, perhaps another year, for them to complete their feature sets, become stable, and see wide production use in the industry.
Microservices are another groundbreaking technology, or rather a software architecture, that uses containers for deployment. A microservice is nothing new; it is a lightweight implementation of a web service that can start extremely fast compared to a standard web service. This is achieved by packaging a unit of functionality (perhaps a single service or API method) in one service and embedding it in a lightweight web server binary.
Considering the above facts, we can predict that in the next few years containers may take over from virtual machines, and in some cases might replace them completely. Last year I worked with a handful of enterprises on implementing container-based solutions at the PoC level. A few were willing to take up the challenge and put them into production. This may change very quickly as container cluster management systems mature.
Published at DZone with permission of Imesh Gunaratne. See the original article here.
Opinions expressed by DZone contributors are their own.