Right Strategies for Microservices Deployment
What's your strategy?
Microservices architecture has become very popular in the last few years because it provides a high degree of software scalability. Although many organizations have embraced this architectural pattern, many still struggle to create a strategy that overcomes its major challenges, such as decomposing a monolithic application into microservices.
At "Parkar Consulting & Labs", we help our clients deploy the microservices application to reduce operational costs and have high availability of the services. One such success story is of the largest telecom company in the US, where we successfully did a RESTful microservices-based deployment.
In this article, we will share some of the most popular microservices deployment strategies and look at how organizations can leverage them to attain higher agility, efficiency, flexibility, and scalability.
Microservices Deployment Challenges
Deploying a monolithic application means running several identical copies of a single, usually large, application. This is mostly done by provisioning N servers, physical or virtual, and running M instances of the application on each one. While this looks straightforward, more often than not it isn't. Even so, it is far easier than deploying a microservices application.
If you are planning to deploy a microservices application, you must be familiar with the variety of frameworks and languages the services are written in. This is also one of the biggest challenges, since each of these services has its own deployment, resource, scaling, and monitoring requirements. On top of that, deploying services has to be quick, reliable, and cost-effective!
The good news is that several microservices deployment patterns can be easily scaled to handle a huge volume of requests from various integrated components. Read this blog to find out which one suits your organization the best and makes the deployment process easier.
Microservices Deployment Strategies
1. Multiple Service Instances per Host (Physical or VM)
Perhaps the most traditional approach to deploying an application is the Multiple Service Instances per Host pattern. In this pattern, software developers provision one or more physical or virtual hosts and run several service instances on each one. The pattern has a few variants: each service instance can be its own process, or several service instances can run in the same process.
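To make the pattern concrete, below is a minimal sketch (not from the original article) of running several instances of one service as separate processes on a single host. The JAR path, the port numbers, and the `--server.port` flag (a Spring Boot convention) are illustrative assumptions.

```python
import subprocess

# Hypothetical path to a packaged Java service (placeholder).
SERVICE_JAR = "/opt/services/orders-service.jar"

# Run three instances of the same service on one host,
# each listening on its own port.
PORTS = [8081, 8082, 8083]

processes = []
for port in PORTS:
    # Each service instance is a separate OS process on the shared host.
    proc = subprocess.Popen(["java", "-jar", SERVICE_JAR, f"--server.port={port}"])
    processes.append(proc)
    print(f"Started instance on port {port} (pid {proc.pid})")

# In practice a process supervisor (systemd, upstart, etc.) would manage
# the lifecycle; here we simply wait on the child processes.
for proc in processes:
    proc.wait()
```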
Benefits
Relatively efficient resource usage, since multiple service instances share the same server and its operating system.
Deployment of a service instance is also relatively fast since you just have to copy the service to a host and run it.
For instance, if the service is written in Java, you just copy the JAR or WAR file; if it is written in Node.js or Ruby, you copy the source code.
Starting a service with this pattern is also quick, since there is no overhead. If the service has its own process, you can just start it; if it is one of many instances running in the same container process or process group, you can dynamically deploy it into the container or restart the container.
Challenges
- Little or no control over individual service instances unless each instance is a separate process. There is no way to limit the resources each instance utilizes, which can end up consuming a significant share of the host's memory.
- Lack of isolation if several service instances run in the same process. This often results in one misbehaving service interrupting other services in the same process.
- Higher risk of errors during deployment, since the operations team deploying a service needs to know its smallest details. Close information exchange between the development and operations teams is therefore a must for managing this complexity.
2. Service Instance Per Host (Physical or VM)
Service Instance per Host is another pattern for deploying microservices. It allows you to run each service instance separately on its own host, and it has two specializations: Service Instance per Virtual Machine and Service Instance per Container.
The Service Instance per Virtual Machine pattern lets you package each service as a virtual machine (VM) image, such as an Amazon EC2 AMI. Each service instance is a VM launched from that image. One well-known user of this pattern is Netflix, which packages its video-streaming services this way. To build your own VM images, you can configure a continuous integration server such as Jenkins or use a tool like Packer (packer.io).
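As a rough illustration of launching one VM per service instance from pre-baked images, here is a sketch using the boto3 library. The AMI IDs, region, instance type, and service names are placeholders, not details from any real deployment.

```python
import boto3

# Hypothetical mapping of service name -> pre-baked VM image (AMI) ID.
SERVICE_AMIS = {
    "orders-service": "ami-0123456789abcdef0",
    "billing-service": "ami-0fedcba9876543210",
}

ec2 = boto3.resource("ec2", region_name="us-east-1")

def launch_service_vm(service_name, ami_id):
    """Launch one EC2 instance dedicated to a single service instance."""
    instances = ec2.create_instances(
        ImageId=ami_id,
        InstanceType="t3.small",  # fixed-size VM, as noted in the challenges below
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "service", "Value": service_name}],
        }],
    )
    return instances[0]

if __name__ == "__main__":
    for name, ami in SERVICE_AMIS.items():
        vm = launch_service_vm(name, ami)
        print(f"Launched {name} on instance {vm.id}")
```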
Benefits
One of the biggest benefits of the Service Instance per Virtual Machine pattern is that each service runs in isolation with a fixed amount of memory, so it cannot steal resources from other services.
It allows you to leverage mature cloud infrastructure such as AWS to take advantage of load balancing and auto-scaling.
It encapsulates your service's implementation technology, since the service becomes a black box once it has been packaged as a VM. This makes deployment a lot simpler and more reliable.
Challenges
- Since VMs usually come in fixed sizes in a typical public IaaS, a VM may not be completely utilized. Less efficient resource utilization ultimately leads to a higher cost of deployment, since IaaS providers generally charge for VMs regardless of whether they are idle or busy.
- Deploying a new version of a service is generally slow, because VM images are slow to build and instantiate due to their size. This drawback can often be mitigated by using lightweight VMs.
- Unless you use tools to build and manage the VMs, the Service Instance per Virtual Machine pattern can be time-consuming for you and your team. This is usually a tedious process, but the good news is that it can be addressed with solutions such as Boxfuse.
3. Service Instance per Container
In this pattern, each service instance operates in its respective container, which is a virtualization mechanism at the operating system level. Some of the popular container technologies are Docker and Solaris Zones.
To use this pattern, you package your service as a filesystem image containing the application and the libraries needed to run it, popularly known as a container image. Once the service is packaged as a container image, you launch one or more containers from it, and you can run several containers on a single physical or virtual host. To manage multiple containers, many developers like to use cluster managers such as Kubernetes or Marathon.
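As a small illustration of the pattern, here is a sketch using the Docker SDK for Python (the `docker` package). The image name, ports, and memory limit are assumptions for illustration; in a real setup a cluster manager like Kubernetes would typically schedule the containers instead.

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Hypothetical container image for a single microservice (placeholder).
IMAGE = "registry.example.com/orders-service:1.0.0"

# Launch one container per service instance, with an explicit memory limit
# so a misbehaving instance cannot starve the rest of the host.
container = client.containers.run(
    IMAGE,
    detach=True,
    name="orders-service-1",
    ports={"8080/tcp": 8081},  # map host port 8081 to container port 8080
    mem_limit="256m",
    restart_policy={"Name": "on-failure", "MaximumRetryCount": 3},
)

print(f"Started container {container.short_id}")
```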
Benefits
Like Service Instance per Virtual Machine, this pattern isolates each service instance and lets you track how many resources each container is using. One of the biggest advantages over VMs is that containers are lightweight and very fast to build, and since there is no OS boot to wait for, they start quickly.
Challenges
Despite rapidly maturing infrastructure, the Service Instance per Container pattern still lags behind VM infrastructure and is not as secure as VMs, since containers share the kernel of the host OS.
As with VMs, you are responsible for the heavy lifting of administering the container images. You also have to administer the container infrastructure, and probably the VM infrastructure it runs on, unless you use a hosted container solution such as Amazon EC2 Container Service (ECS).
Also, since containers are often deployed on infrastructure that is priced per VM, this results in extra deployment cost and over-provisioning of VMs to cater to unexpected spikes in load.
4. Serverless Deployment
Serverless deployment is another strategy for microservices deployment. AWS Lambda, a popular serverless technology used by developers around the world, supports Java, Node.js, and Python services. In this pattern, you package the service as a ZIP file and upload it to create a Lambda function, which is a stateless service.
You also provide metadata that specifies the name of the function to be invoked when handling a request. The Lambda service automatically runs enough instances of your microservice to handle the incoming requests, and you are billed for each request based on the time taken and the memory consumed.
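For illustration, here is a minimal sketch of a Python Lambda handler for such a microservice. The handler name, event shape (an API Gateway proxy integration), and response format are assumptions made for the example, not details from the original article.

```python
import json

def handler(event, context):
    """Entry point referenced in the Lambda function's metadata.

    Assumes an API Gateway proxy integration, so the request body
    arrives as a JSON string in event["body"].
    """
    body = json.loads(event.get("body") or "{}")
    order_id = body.get("order_id", "unknown")

    # Stateless processing only: each request may be served by a
    # different instance, so no state is kept between invocations.
    result = {"order_id": order_id, "status": "processed"}

    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result),
    }
```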
Benefits
The biggest advantage of serverless deployment is the pricing, since you are charged only for the work your service actually performs.
It frees you from managing any aspect of the IT infrastructure, such as VMs or containers, giving you more time to focus on developing the application.
Challenges
The biggest challenge of serverless deployment is that it cannot be used for long-running services. All requests have to complete within 300 seconds.
Also, your services have to be stateless since the Lambda function might run a different instance for each request.
Services need to be written in one of the supported languages and must start quickly, or requests may time out and be terminated.
Closing Thoughts
Deploying a microservices application can be quite overwhelming without the right strategy. Since the services are written in a variety of frameworks and languages, each has its own deployment, scaling, and administration requirements. Knowing which pattern suits your organization best is therefore essential. At Parkar Consulting & Labs, we have worked with scores of trusted customers to migrate their legacy monolithic applications to serverless architectures using Platform as a Service.
The Parkar platform orchestrates the deployment and end-to-end management of the microservices.
Further Reading
Registering Multiple Local Microservices Instances With Netflix Eureka
Automating Serverless Framework Deployments Using Bitbucket Pipelines