Why Container-Based Deployment Is Preferred for Microservices

We need a container management platform because when we need to scale a specific service, we can't manually monitor the environment and start a new instance. We need automation.

By Prabath Ariyarathna · Jan. 24, 17 · Opinion

When it comes to microservices architecture, the deployment of microservices plays a critical role and has the following key requirements.

Ability to Deploy and Undeploy Independently of Other Microservices

Developers never need to coordinate the deployment of changes that are local to their service. These kinds of changes can be deployed as soon as they have been tested. The UI team can, for example, perform A/B testing and rapidly iterate on UI changes. The microservices architecture pattern makes continuous deployment possible.

Ability to Scale at Each Microservice Level

This is important to keep in mind because a given service may get more traffic than other services.

Monolithic applications make it difficult to scale individual portions of the application. If one service is memory-intensive and another is CPU-intensive, the server must be provisioned with enough memory and CPU to handle the baseline load for both. This gets expensive when each server needs a large amount of CPU and RAM, and it is exacerbated when load balancing is used to scale the application horizontally, since every server must then host the full application. Finally, and more subtly, the engineering team structure often starts to mirror the application architecture over time.

We can overcome this by using microservices. Any service can be individually scaled based on its resource requirements. Rather than having to run large servers with lots of CPU and RAM, microservices can be deployed on smaller hosts containing only those resources required by that service. For example, you can deploy a CPU-intensive image processing service on EC2 compute-optimized instances and deploy an in-memory database service on EC2 memory-optimized instances.
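As a rough illustration of this idea (a sketch of my own, not from the original article), the snippet below launches each service on an EC2 instance type matched to its resource profile using boto3. The AMI IDs, instance types, and service names are hypothetical placeholders.

```python
# Hedged sketch: launch each microservice on an instance type that matches its
# resource profile, using boto3. AMI IDs and service names are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# CPU-intensive image processing service on a compute-optimized instance.
ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxx",     # hypothetical AMI baked with the service
    InstanceType="c5.xlarge",        # compute-optimized
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "service", "Value": "image-processor"}],
    }],
)

# In-memory database service on a memory-optimized instance.
ec2.run_instances(
    ImageId="ami-yyyyyyyyyyyyy",     # hypothetical AMI
    InstanceType="r5.xlarge",        # memory-optimized
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "service", "Value": "inmemory-db"}],
    }],
)
```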

Ability to Build and Deploy Microservices Quickly

One of the key drawbacks of a monolithic application is that it is difficult to scale: as explained in the section above, the whole application has to be replicated in order to scale any part of it. With a microservices architecture, we can scale specific services because each service is deployed in its own isolated environment. Nowadays, dynamic scaling is a standard capability of every IaaS platform (e.g., elastic load balancing), and to take advantage of it we need to be able to launch an application quickly in its isolated environment.

Following are the basic deployment patterns that we commonly see in the industry.

Multiple service instances per host: Deploy multiple service instances on a host.

Service instance per host: Deploy a single service instance on each host.

Service instance per VM: A specialization of the service instance per host pattern where the host is a VM.

Service instance per container: A specialization of the service instance per host pattern where the host is a container (a minimal sketch of this pattern follows below).
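To make the service-instance-per-container pattern concrete, here is a minimal sketch of my own (not from the original article) using the Docker SDK for Python. The image names, resource limits, and service names are hypothetical placeholders.

```python
# Hedged sketch: run each microservice as its own container, giving each one
# only the resources it needs. Requires the Docker SDK for Python
# (pip install docker); image names and limits are hypothetical.
import docker

client = docker.from_env()

services = {
    # name: (image, CPU share in nano-CPUs, memory limit)
    "orders":   ("registry.example.com/orders:1.4.2",   1_000_000_000, "256m"),
    "payments": ("registry.example.com/payments:2.0.1",   500_000_000, "128m"),
    "catalog":  ("registry.example.com/catalog:3.1.0",  2_000_000_000, "512m"),
}

for name, (image, cpus, memory) in services.items():
    client.containers.run(
        image,
        name=name,
        detach=True,                      # run in the background
        nano_cpus=cpus,                   # 1_000_000_000 == 1 CPU
        mem_limit=memory,
        restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
    )
```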

Container or VM?

As of today, there is a significant trend in the industry to move from VMs towards containers for deploying software applications. The main reasons for this are the flexibility and low cost that containers provide compared to VMs. Google has used container technology for many years, with the Borg and Omega container cluster management platforms running Google applications at scale.

More importantly, Google has contributed to the container space by implementing cgroups and participating in the libcontainer project. Google has likely gained huge improvements in performance, resource utilization, and overall efficiency by using containers over the past years. Very recently, Microsoft, which did not have operating system-level virtualization on the Windows platform, moved quickly to implement native support for containers on Windows Server.

I found a nice comparison on the internet between VMs and containers that uses houses and apartments. Houses (the VMs) are fully self-contained and offer protection from unwanted guests. They also each possess their own infrastructure: plumbing, heating, electrical, and so on. Furthermore, in the vast majority of cases, houses have at a minimum a bedroom, living area, bathroom, and kitchen. I have yet to find a "studio house"; even if I buy the smallest house, I may end up buying more than I need, because that's just how houses are built.

Apartments (the containers) also offer protection from unwanted guests, but they are built around shared infrastructure. The apartment building (Docker host) shares plumbing, heating, electrical, etc. Additionally, apartments are offered in all kinds of different sizes, from studios to multi-bedroom penthouses. You’re only renting exactly what you need. Finally, just like houses, apartments have a front entrance.

There are design-level differences between these two concepts. Containers share the underlying resources while still providing an isolated environment, and they provide only the resources the application actually needs to run. VMs are different: they boot a full OS first and then start your application, bringing with them a default set of unwanted services that consume resources.

Before moving into the actual comparison, let's see how we can deploy a microservice instance in any environment. The environment can be a single service or multiple services on a single VM, multiple containers on a single VM, a single container on a single VM, or a dedicated environment. It is not just a matter of starting the application on a VM or deploying it in a web container; we should have an automated way to manage it. AWS provides good VM management capabilities for any deployment. If we use VMs for deployment, we normally build a VM image with the required application components, and using this image we can spawn any number of instances.

Similar to AWS VM management, we need a container management platform for containers as well, because when a specific service needs to scale, we cannot manually monitor the environment and start new instances; it should be automated. We can use Kubernetes, which extends Docker's capabilities by letting us manage a cluster of Linux containers as a single system, managing and running Docker containers across multiple hosts and offering co-location of containers, service discovery, and replication control.
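As a hedged sketch of that automation (my own illustration, not from the article), the official Kubernetes Python client can scale a Deployment programmatically; in practice a HorizontalPodAutoscaler usually does this for you. The deployment name and namespace below are hypothetical.

```python
# Hedged sketch: scale a Deployment with the official Kubernetes Python client
# (pip install kubernetes). A HorizontalPodAutoscaler would normally automate
# this; the deployment name and namespace are hypothetical.
from kubernetes import client, config

config.load_kube_config()          # or config.load_incluster_config() inside a pod
apps_v1 = client.AppsV1Api()

# Bump the hypothetical "orders" service to 5 replicas.
apps_v1.patch_namespaced_deployment_scale(
    name="orders",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```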

Both VMs and containers are designed to provide an isolated environment. Additionally, in both cases, that environment is represented as a binary artifact that can be moved between hosts. There may be other similarities, but those are the major ones that I see.

In a VM-centered world, the unit of abstraction is a monolithic VM that stores not only application code but also stateful data. A VM takes everything that used to sit on a physical server and packs it into a single binary so it can be moved around, but it is still the same thing. With containers, the abstraction is the application, or more accurately, a service that helps make up the application.

This difference is very useful when we scale up instances: with VMs, we need to spawn another VM instance, which takes some time to start (OS boot time plus application boot time), but with Docker-like container deployment, we can start a new container instance within a few milliseconds.
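If you want to measure the startup difference yourself, here is a rough sketch of my own (not from the article) that times how long the Docker daemon takes to create and start one more container. The image name is a hypothetical placeholder, and the numbers will vary by image and host.

```python
# Rough, hedged sketch: time how long the Docker daemon takes to create and
# start one more container instance (this does not include the application's
# own readiness time). The image name is a hypothetical placeholder.
import time
import docker

client = docker.from_env()

start = time.perf_counter()
container = client.containers.run(
    "registry.example.com/orders:1.4.2",   # hypothetical image
    detach=True,
)
elapsed = time.perf_counter() - start
print(f"Container {container.short_id} started in {elapsed:.3f} s")
```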

Another important factor is patching existing services, since we cannot write code that never has issues; we need to patch it. Patching code in a microservices environment is a little tricky because we may have more than 100 instances to patch. With VM-based deployment, we need to build a new VM image that includes the patches and use it for the deployment. That is not an easy task, because there can be more than 100 microservices and we would need to maintain many different VM images. With Docker-like container-based deployment, this is not an issue: we can configure Docker images to pull those patches from a configured location. We can achieve something similar with scripting in the VM environment, but Docker has that capability out of the box. Therefore, the total configuration and software update propagation time is much faster with the container approach.
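As one common way to roll a patch out across many container instances (a hedged sketch of my own, not the author's exact method), updating the image tag on a Kubernetes Deployment triggers a rolling update of every instance. The deployment, container, and image names below are hypothetical.

```python
# Hedged sketch: roll out a patched image to every instance of a service by
# updating the Deployment's image tag; Kubernetes then performs a rolling
# update automatically. Deployment, container, and image names are hypothetical.
from kubernetes import client, config

config.load_kube_config()
apps_v1 = client.AppsV1Api()

apps_v1.patch_namespaced_deployment(
    name="orders",
    namespace="default",
    body={
        "spec": {
            "template": {
                "spec": {
                    "containers": [
                        # Matched by container name; only the image tag changes.
                        {"name": "orders", "image": "registry.example.com/orders:1.4.3"},
                    ]
                }
            }
        }
    },
)
```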

A heavier car may need more fuel to reach higher speeds than a car of the same spec with less weight. Sports car manufacturers always adhere to this concept and use lightweight materials such as aluminum and carbon fiber to improve fuel efficiency. The same theory may apply to software systems: the heavier the software components, the more computation power they need. Traditional virtual machines use a dedicated operating system instance to provide an isolated environment for software applications, and that operating system instance needs additional memory, disk, and processing power on top of the computation power needed by the applications themselves. Linux containers solve this problem by reducing the weight of the isolated unit of execution, sharing the host operating system kernel across hundreds of containers. The following diagram illustrates a sample scenario of how many resources containers would save compared to virtual machines.

[Diagram: resource usage of containers vs. virtual machines]

We cannot say that container-based deployment is the best choice for microservices in every deployment. It depends on different constraints, and we need to carefully select one, or a hybrid of both, based on our requirements.


Published at DZone with permission of Prabath Ariyarathna, DZone MVB. See the original article here.

Opinions expressed by DZone contributors are their own.
