
Migration From VMs to Containers


In this article, we will analyze the specific challenges of migrating Java legacy applications that are running inside VMs to container-based platforms.


Along with the growing demand for PaaS and DevOps solutions, we can see a set of adoption barriers for the owners of legacy applications hosted inside VMs or directly on bare metal servers. The complexity of the decomposition and migration process is often very high, and application owners usually have to redesign their application architecture in order to benefit from modern PaaS and CaaS solutions.

In this article, we will analyze the specific challenges of migrating Java legacy applications that are running inside VMs to container-based platforms. Using Oracle WebLogic Server as an example, we'll show the exact steps of the decomposition process and the outcome of the migration.

Motivation for Migration to Containers

Hardware virtualization was a great step forward for hosting Java EE applications compared to the bare metal era. It gave us the ability to isolate applications from each other and utilize hardware more efficiently. However, with hypervisors, each VM requires its own full OS, TCP, and file system stacks, which consumes significant processing power and memory on the host machine.

Each VM has a fixed amount of RAM, and only some hypervisors can resize a running VM with the help of memory ballooning, which is not a trivial task. As a result, we usually reserve resources in each VM for further scaling of the application. These resources are not fully utilized and, at the same time, cannot be shared with other applications due to the lack of proper instance isolation inside a VM.

Containers take performance and resource utilization a step further by sharing the OS kernel, TCP stack, file system, and other system resources of the host machine, resulting in lower memory and CPU overhead.

There are two types of containers: application containers and system containers. An application container usually runs as little as a single process, while a system container behaves like a full OS and can run full-featured init systems such as systemd, SysVinit, and OpenRC, which can spawn other processes like openssh, crond, and syslogd together inside a single container. Both types are useful in different cases; neither wastes RAM on redundant management processes, and both generally consume less RAM than a VM. However, only with system containers is the migration of legacy Java EE applications possible without a massive application redesign.

Unlike VMs, the resource limits of containers can be easily changed on running instances without a restart, and the resources that are not consumed within the limit boundaries are automatically shared with other containers running on the same hardware node.
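As a quick illustration, here is a minimal sketch using the Docker SDK for Python that resizes a running container in place; the container name and the new limits are hypothetical examples, not values taken from a real WebLogic setup:

```python
# Minimal sketch: change the resource limits of a running container
# without restarting it, using the Docker SDK for Python (docker-py).
# The container name and limit values are hypothetical.
import docker

client = docker.from_env()
container = client.containers.get("wls-managed-1")

# Raise the memory limit to 2 GiB and allow up to two CPU cores;
# the container keeps running throughout.
container.update(
    mem_limit="2g",
    memswap_limit="2g",   # keep the swap limit in line with the memory limit
    cpu_period=100000,
    cpu_quota=200000,     # 200% of one core, i.e. two cores
)
```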

The resources that are not utilized on the hardware can easily be used by the existing containers during scaling or for new application workloads. Thanks to advanced container isolation, different types of applications can run on the same hardware node without influencing each other. On average, this increases resource utilization of the existing infrastructure by 3x-10x.

In addition, containers are very useful for developers who want to create, package, and test applications in an agile way, accelerating the development process and improving application scalability.

What Is Decomposition?

Decomposition is an essential part of the migration process. It helps to split a large monolithic application topology into small logical pieces that can then be worked with independently.

A simple representation of the decomposition process for the migration from VMs to containers is shown in the picture below.

Running Java Legacy Applications in a VM

There’s an old saying in software application development: “Legacy software is fine. It’s just old software that still works.” So let’s take a closer look at how it works, using Oracle WebLogic Server as an example.

Structure of Oracle WebLogic Server in a VM

When running in VMs, WebLogic Server consists of three main kinds of instances:

  • Administration Server.
  • Node Manager.
  • Managed Server.

The Administration Server is the central point from which we configure and manage all resources in the cluster. It is connected to Node Managers, which are responsible for adding and removing Managed Server instances. Managed Servers host web applications, EJBs, web services, and other resources.

Usually, each VM hosts one Node Manager and several Managed Servers, while a single Administration Server manages all instances across the VMs. A more detailed description of each component can be found in the official documentation.
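To make the relationship between these components more tangible, here is a small WLST sketch (WLST scripts use Jython, i.e. Python syntax) that connects to the Administration Server and inspects the domain; the URL, credentials, and server name are placeholders, not values from a real installation:

```python
# WLST sketch (Jython syntax): connect to the Administration Server
# and inspect the Managed Servers and machines (Node Managers) of the domain.
# URL, credentials, and server names are placeholders.

connect('weblogic', 'welcome1', 't3://admin-host:7001')

# List all server instances configured in the domain.
cd('/Servers')
ls()

# List the machines that Node Managers are associated with.
cd('/Machines')
ls()

# Check the runtime state of a hypothetical Managed Server.
state('ManagedServer_1', 'Server')

disconnect()
```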

Scaling WebLogic Across VMs

Now let’s imagine we face a traffic spike and have to scale the cluster. To handle the increased load, new Managed Servers are added to the VM until we reach its resource limits (e.g., RAM).

But if the incoming traffic keeps growing and the current number of Managed Server instances is not enough to handle the load, we need to add a new VM in order to scale the application further.

The classical flow of WebLogic Server scaling across several VMs contains three steps (steps 2 and 3 are sketched in the WLST snippet below):

  1. Provision a new VM with a preconfigured WebLogic Server template.
  2. Start a Node Manager inside the newly added VM and connect it to the Administration Server.
  3. Add new Managed Servers to handle a part of the increased load.
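Here is roughly how steps 2 and 3 look in WLST once the new VM and its Node Manager are up; host names, ports, credentials, and server names are placeholders chosen for illustration:

```python
# WLST sketch (Jython syntax): register the new VM's Node Manager as a
# Machine and create a Managed Server pinned to it.
# Host names, ports, credentials, and names are placeholders.

connect('weblogic', 'welcome1', 't3://admin-host:7001')
edit()
startEdit()

# Step 2: describe the Node Manager running inside the new VM.
create('vm2-machine', 'UnixMachine')
cd('/Machines/vm2-machine/NodeManager/vm2-machine')
cmo.setListenAddress('vm2-host')
cmo.setListenPort(5556)

# Step 3: create a Managed Server and assign it to that machine.
cd('/')
create('ManagedServer_3', 'Server')
cd('/Servers/ManagedServer_3')
cmo.setListenAddress('vm2-host')
cmo.setListenPort(7003)
cmo.setMachine(getMBean('/Machines/vm2-machine'))

save()
activate()

# Start the new Managed Server through its Node Manager.
start('ManagedServer_3', 'Server')
disconnect()
```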

Afterward, the scaling process repeats, and we launch more Managed Servers inside the recently added VM until it reaches its resource limits.

Disadvantages of Running WebLogic in VMs

Running Oracle WebLogic Server in VMs is a resource-inefficient approach; there are several points where resources are wasted or left unused:

  • Each VM requires its own full OS, TCP, and file system stacks, which uses significant processing power and memory from the host machine.
  • The resource allocation is not highly granular, so even if we need just one additional Managed Server, in some cases we’ll have to provision a full VM.
  • If we run out of resources in one virtual machine, we have to restart the whole virtual machine to add extra CPU cores or just more RAM.
  • Node Manager, which is required in each VM to add or remove Managed Servers, consumes additional resources and creates extra complexity in the configuration.
  • Running instances in the same VM can influence each other due to the lack of isolation and harm the performance of the whole application. For the same reason, we cannot mix and match different applications within one VM.
  • VM portability is mostly limited to one vendor, so migrating to another cloud can bring a set of problems.
  • Template packaging and implementing CI/CD flow with VMs are slow and complex processes.

Migration From VM to Containers

These days, we can find several good application servers and frameworks that are designed to run as microservices in containers, such as Spring Boot, WildFly Swarm, and Payara Micro. However, some servers, like Oracle WebLogic Server, were specifically designed to run in VMs, and migrating such instances to containers is a more complex task. That is why we’d like to pay more attention to this case in our article.

Decomposition of WebLogic Server

First of all, we need to prepare a container image with WebLogic Server. This is quite an easy task these days with the help of Docker (e.g., check the official Oracle repo).

When the Docker template is ready, we provision each instance inside an isolated container: one Administration Server and the needed number of Managed Servers.
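As a rough sketch of this provisioning step, the snippet below uses the Docker SDK for Python to start one Administration Server container and two Managed Server containers from the same image; the image name, network, ports, environment variables, and startup command are illustrative assumptions rather than a definitive recipe:

```python
# Rough sketch with the Docker SDK for Python (docker-py): provision one
# Administration Server container and two Managed Server containers.
# Image name, network, ports, environment, and command are assumptions.
import docker

client = docker.from_env()
client.networks.create("weblogic-net", driver="bridge")

# Administration Server container.
client.containers.run(
    "oracle/weblogic:12.2.1",            # hypothetical image built from the official repo
    name="wls-admin",
    network="weblogic-net",
    ports={"7001/tcp": 7001},
    detach=True,
)

# Managed Server containers, each isolated in its own container.
for i in (1, 2):
    client.containers.run(
        "oracle/weblogic:12.2.1",
        name="wls-managed-%d" % i,
        network="weblogic-net",
        environment={"ADMIN_HOST": "wls-admin", "ADMIN_PORT": "7001"},
        command="startManagedServer.sh",  # hypothetical startup script
        detach=True,
    )
```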

At this point, we can get rid of the Node Manager role, as it was designed as a VM agent to add and remove Managed Server instances.

After migrating to containers, Managed Server instances can be added or removed automatically and attached directly to the Administration Server using a container orchestration platform and a set of WLST scripts, as sketched below.
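A minimal WLST sketch of that direct attachment might look as follows; the names, hosts, and ports are placeholders, and an orchestration platform is assumed to launch the new container and run this script:

```python
# WLST sketch (Jython syntax): register a Managed Server running in a new
# container directly with the Administration Server - no Node Manager needed.
# Names, hosts, and ports are placeholders.

connect('weblogic', 'welcome1', 't3://wls-admin:7001')
edit()
startEdit()

create('ManagedServer_4', 'Server')
cd('/Servers/ManagedServer_4')
cmo.setListenAddress('wls-managed-4')   # the new container's hostname
cmo.setListenPort(7001)

save()
activate()
disconnect()

# The container's own startup script (e.g. startManagedWebLogic.sh) then
# starts the Managed Server process, taking over the Node Manager's role.
```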

As a result, we get a much simpler topology for our WebLogic Server cluster.

Now the horizontal scaling process becomes very granular and smooth, as a container can easily be provisioned from scratch or cloned. Moreover, each container can be scaled up and down on the fly with no downtime. Containers are much more lightweight than virtual machines, so these operations take far less time than scaling VMs.

Advantages of Running WebLogic in Containers

Migration to containers can be a challenge, but if you know how to manage it, you can gain a set of benefits:

  • Reduce the usage of system resources (processing power and memory) by eliminating the need for a full OS, TCP, and file system stack in each container.
  • Simplify horizontal scaling by removing Node Manager instances from the cluster topology.
  • Enable automatic vertical scaling using container abilities to share its unused resources and be easily resized without a restart.
  • Increase infrastructure utilization by hosting different applications within one physical server as their instances are isolated inside separate containers.
  • Migrate across cloud vendors without lock-in using container portability.
  • Speed up continuous integration and delivery processes by using a wide range of DevOps tools specifically designed for containers.

A similar approach can help decompose other layers of the application and can be applied to other Java EE application servers. In upcoming articles, we'll describe how to deal with the data after decomposition and show the whole process with a specific example.

Need more details or assistance? Get in touch and share your use case and experience with decomposing legacy Java applications and migrating them to containers.



Published at DZone with permission of Ruslan Synytsky. See the original article here.
