Migration From VMs to Containers
In this article, we will analyze the specific challenges of migrating Java legacy applications that are running inside VMs to container-based platforms.
Together with the growing demand for PaaS and DevOps solutions, we can observe a set of adoption barriers for the owners of legacy applications hosted inside VMs or directly on bare metal servers. The complexity of the decomposition and migration processes is often very high. Usually, application owners have to redesign their application architecture in order to benefit from modern PaaS and CaaS solutions.
In this article, we will analyze the specific challenges of migrating Java legacy applications that are running inside VMs to container-based platforms. Using the example of Oracle WebLogic Server, we'll show the exact steps of the decomposition process and the outcome of this migration.
Motivation for Migration to Containers
Hardware virtualization was a great step forward in the hosting of Java EE applications compared to the era of bare metal. It gave us the ability to isolate multiple applications from each other and utilize hardware more efficiently. However, with hypervisors, each VM requires its own full OS, TCP, and file system stacks, which consume significant processing power and memory on the host machine.
Each VM has a fixed amount of RAM, and only some hypervisors can resize running VMs with the help of memory ballooning, which is not a trivial task. As a result, we usually reserve resources in each VM for further scaling of the application. These resources are not fully utilized and, at the same time, cannot be shared with other applications due to the lack of proper instance isolation inside a VM.
Containers take performance and resource utilization a step further by sharing the OS kernel, TCP stack, file system, and other system resources of the host machine, incurring less memory and CPU overhead.
There are two types of containers: application containers and system containers. An application container usually runs as little as a single process, while a system container behaves like a full OS and can run full-featured init systems like systemd, SysVinit, and OpenRC, which can spawn other processes such as OpenSSH, crond, and syslogd inside a single container. Both types of containers are useful in different cases, and neither wastes RAM on redundant management processes, generally consuming less RAM than a VM. However, only with system containers is the migration of legacy Java EE applications possible without a massive application redesign.
Unlike VMs, the resource limits of containers can easily be changed on running instances without a restart. And the resources that are not consumed within the limit boundaries are automatically shared with other containers running on the same hardware node.
The resources that are not utilized on the hardware can easily be used by the existing containers while scaling, or for new application workloads. Thanks to advanced container isolation, different types of applications can run on the same hardware node without influencing each other. This allows increasing the resource utilization of the existing infrastructure by 3-10x on average.
In addition, containers are very useful for developers who want to create, package, and test applications in an agile way to accelerate application development processes and improve the scalability of applications.
What Is Decomposition?
Decomposition is an essential part of the migration process. It helps to split a large monolithic application topology into small, logical pieces that can later be worked with independently.
A simple representation of the decomposition process for the migration from VMs to containers is shown in the picture below.
Running Java Legacy Applications in a VM
There's an old saying in software application development: "Legacy software is fine. It's just old software that still works." So let's see more precisely how it works, based on the example of Oracle WebLogic Server.
Structure of Oracle WebLogic Server in a VM
WebLogic Server consists of three main kinds of instances required for running in a VM:
- Administration Server.
- Node Manager.
- Managed Server.
The Administration Server is the central point from which we configure and manage all resources in the cluster. It is connected to the Node Managers, which are responsible for adding and removing Managed Server instances. Managed Servers host web applications, EJBs, web services, and other resources.
Usually, each VM hosts one Node Manager and several Managed Servers, while a single Administration Server manages all instances across many VMs. A more detailed description of each component can be found in the official documentation.
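The topology above can be sketched as a small Python model. All class and method names here are illustrative assumptions for the sake of the sketch; they are not real WebLogic APIs:

```python
# Minimal sketch of the classic WebLogic-in-VM topology.
# Names are illustrative, not real WebLogic APIs.

class ManagedServer:
    """Hosts web applications, EJBs, and web services."""
    def __init__(self, name):
        self.name = name

class NodeManager:
    """Per-VM agent that starts and stops Managed Servers."""
    def __init__(self, vm_name):
        self.vm_name = vm_name
        self.managed_servers = []

    def add_managed_server(self, name):
        server = ManagedServer(name)
        self.managed_servers.append(server)
        return server

class AdministrationServer:
    """Central point that configures all resources in the cluster."""
    def __init__(self):
        self.node_managers = []

    def register_node_manager(self, node_manager):
        self.node_managers.append(node_manager)

# One Admin Server manages Node Managers across many VMs;
# each Node Manager controls the Managed Servers of its own VM.
admin = AdministrationServer()
nm = NodeManager("vm-1")
admin.register_node_manager(nm)
nm.add_managed_server("ms-1")
nm.add_managed_server("ms-2")
print(len(admin.node_managers[0].managed_servers))  # → 2
```

The key structural point the model captures is the indirection: the Administration Server never touches Managed Servers directly; it always goes through a per-VM Node Manager.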
Scaling WebLogic Across VMs
Now let's imagine we get a traffic spike and have to scale the cluster. To handle the increased load, new Managed Servers are added to the VM until we reach its resource limits (e.g., RAM).
But if the incoming traffic keeps growing and the current number of Managed Server instances is not enough to handle the load, we need to add a new VM to scale the application further.
The classic flow of WebLogic Server scaling across several VMs contains three steps:
- Provision a new VM with a preconfigured WebLogic Server template.
- Start a Node Manager inside the newly added VM and connect it to the Administration Server.
- Add new Managed Servers to handle a part of the increased load.
Afterward, the scaling process repeats, and we launch more Managed Servers inside the recently added VM until it reaches its resource limits.
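The coarse granularity of this flow can be made concrete with a short sketch. The RAM figures below are illustrative assumptions, not WebLogic sizing guidance:

```python
# Sketch of the VM-based scaling flow described above.
# RAM numbers are illustrative assumptions only.

VM_RAM_MB = 8192   # fixed RAM reserved for each VM
MS_RAM_MB = 2048   # RAM needed by one Managed Server
NM_RAM_MB = 512    # overhead of the per-VM Node Manager

def vms_needed(managed_servers):
    """How many VMs must be provisioned to host the given
    number of Managed Servers."""
    # Servers that fit in one VM after the Node Manager takes its share.
    per_vm = (VM_RAM_MB - NM_RAM_MB) // MS_RAM_MB
    # Ceiling division: a whole new VM is provisioned
    # as soon as the current one is full.
    return -(-managed_servers // per_vm)

print(vms_needed(3))  # → 1 (three servers fit in one VM)
print(vms_needed(4))  # → 2 (the 4th server forces a second full VM)
```

The jump from `vms_needed(3)` to `vms_needed(4)` is exactly the granularity problem discussed below: one extra Managed Server can cost an entire VM's worth of resources.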
Disadvantages of Running WebLogic in VMs
Running Oracle WebLogic in VMs is a resource-inefficient approach; there are several points where resources are wasted or left unused:
- Each VM requires its own full OS, TCP, and file system stacks, which consume significant processing power and memory of the host machine.
- Resource allocation is not very granular, so even if we need just one additional Managed Server, in some cases we'll have to provision a full VM.
- If we run out of resources in one virtual machine, we have to restart the whole machine to add extra CPU cores or just more RAM.
- The Node Manager, required in each VM to add or remove Managed Servers, consumes additional resources and adds extra configuration complexity.
- Instances running in the same VM can influence each other due to the lack of isolation and harm the performance of the whole application. For the same reason, we cannot mix and match different applications within one VM.
- VM portability is mostly limited to a single vendor, so migrating to another cloud can raise a set of problems.
- Template packaging and implementing CI/CD flows with VMs are slow and complex processes.
Migration From VMs to Containers
These days, we can find several good application servers and frameworks that are designed to run as microservices in containers, such as Spring Boot, WildFly Swarm, Payara Micro, and others. However, there is a set of servers specifically designed for running in VMs, like Oracle WebLogic Server, and the task of migrating such instances to containers is more complex. That is why we'd like to pay more attention to this case in our article.
Decomposition of WebLogic Server
First of all, we need to prepare a container image with WebLogic Server. It's quite an easy task these days with the help of Docker containers (e.g., check the official Oracle repo).
When the Docker template is ready, we provision each instance inside an isolated container: one Administration Server and the needed number of Managed Servers.
At this point, we can get rid of the Node Manager role, as it was designed as a VM agent for adding and removing Managed Server instances.
After migrating to containers, Managed Server instances can be added and removed automatically, as well as attached directly to the Administration Server, using a container orchestration platform and a set of WLST scripts.
As a result, we get a much simpler topology for our WebLogic Server cluster.
Now, the horizontal scaling process becomes very granular and smooth, as a container can easily be provisioned from scratch or cloned. Moreover, each container can be scaled up and down on the fly with no downtime. A container is much more lightweight than a virtual machine, so this operation takes far less time than scaling VMs.
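To contrast with the VM flow, here is a sketch of the container-based topology after decomposition: no Node Manager, and both horizontal and vertical scaling act on individual containers. All names and numbers are illustrative assumptions, not an orchestrator's real API:

```python
# Sketch of the decomposed, container-based topology.
# Names and limits are illustrative assumptions.

class Container:
    def __init__(self, name, ram_limit_mb):
        self.name = name
        self.ram_limit_mb = ram_limit_mb

    def resize(self, new_limit_mb):
        # Unlike a VM, a container's limit is changed
        # on the running instance, without a restart.
        self.ram_limit_mb = new_limit_mb

def scale_out(cluster, count):
    """Horizontal scaling: clone the Managed Server container
    'count' times; no Node Manager or VM provisioning step."""
    for _ in range(count):
        cluster.append(Container(f"ms-{len(cluster) + 1}", ram_limit_mb=2048))

cluster = [Container("ms-1", ram_limit_mb=2048)]
scale_out(cluster, 2)       # horizontal: add two more Managed Servers
cluster[0].resize(4096)     # vertical: raise one limit on the fly
print([c.name for c in cluster])   # → ['ms-1', 'ms-2', 'ms-3']
print(cluster[0].ram_limit_mb)     # → 4096
```

Compare this with the VM flow earlier: scaling out costs one container rather than a whole VM, and scaling up is a limit change rather than a restart.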
Advantages of Running WebLogic in Containers
Migration to containers can be a challenge, but if you know how to manage it, you can gain a set of benefits:
- Reduce the usage of system resources (processing power and memory) by eliminating the need for a full OS, TCP, and file system stack in each container.
- Simplify horizontal scaling by removing Node Manager instances from the cluster topology.
- Enable automatic vertical scaling using the containers' ability to share unused resources and be resized without a restart.
- Increase infrastructure utilization by hosting different applications within one physical server, as their instances are isolated inside separate containers.
- Migrate across cloud vendors without lock-in, thanks to container portability.
- Speed up continuous integration and delivery processes by using the wide range of DevOps tools specifically designed for containers.
A similar approach can help to decompose other layers of the application, and it can be applied to other Java EE application servers. In the next articles, we'll describe how to deal with the data after decomposition and show the whole process with a specific example.
Need more details or assistance? Get in touch and share your use case and experience in Java legacy application decomposition and migration to containers.
Published at DZone with permission of Ruslan Synytsky. See the original article here.