
Migrating to Microservices — It's Easier Than You Think


A step-by-step roadmap for migrating to microservices, from containerizing your Java application to auto-deploying on a Kubernetes cluster.



Migrating to the microservices roadmap.
You may also like: Moving to Microservices

Migrating to Microservices — A Roadmap

Migrating to microservices often sounds like a huge and complex task. While there are complexities in the process, it's actually easier than you might think. This post lays out a basic roadmap for moving a standard J2EE application from a monolithic architecture to a microservice architecture. We will start by containerizing the Java application and end with auto-deploying it to a Kubernetes environment.

Step 1: Containerize Your Java Application and Runtime

Start your migration journey by containerizing your Java application (.jar, .war, or .ear). This involves building a container image that includes the Java runtime as well as your application. When you do this, remember the following (a minimal Dockerfile sketch follows the list):

  1. The container should have one purpose (entry point).
  2. The container should be self-reliant.
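
A minimal Dockerfile sketch for an executable JAR, assuming your build produces target/my-app.jar and the app listens on port 8080 (both names are illustrative):

```dockerfile
# Minimal sketch: the base image, JAR name, and port are assumptions; adjust for your app.
FROM eclipse-temurin:17-jre

# One purpose: this image exists only to run the application.
WORKDIR /app
COPY target/my-app.jar app.jar

# Self-reliant: the Java runtime and the application travel together in the image.
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

For a .war or .ear, you would instead start from an application-server base image (Tomcat, WildFly, and so on) and copy the archive into its deployment directory.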

Step 2: Learn NGINX

As you begin migrating to microservices, you will need a load balancer for your Kubernetes environment that handles external ingress traffic coming from your cloud load balancer. NGINX is a good place to start. Take the time to install and learn NGINX natively first, outside of a container.
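
To get a feel for what you will be configuring, here is a minimal reverse-proxy sketch of an nginx.conf; the upstream address and port are placeholders for wherever your containerized Java application is listening:

```nginx
# Minimal sketch: proxy incoming traffic to an application listening on port 8080.
events {}

http {
    upstream app {
        server 127.0.0.1:8080;   # placeholder backend
    }

    server {
        listen 80;

        location / {
            proxy_pass http://app;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```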

Step 3: Containerize NGINX

Now that you understand NGINX, containerize it. Take your NGINX setup and move it to a container.
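
A minimal sketch of that move, assuming the nginx.conf you built in Step 2 sits next to the Dockerfile:

```dockerfile
# Package the NGINX configuration from Step 2 into its own container.
FROM nginx:stable
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
```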

Step 4: Build a Kubernetes Cluster

Now the fun starts. Your efforts in migrating to microservices are nothing without Kubernetes to manage your containers. Learn Kubernetes by building your first Kubernetes cluster. Start with Google Kubernetes Engine and its free credits; the console wizard will help you build your cluster, and you will run a local install of the Google Cloud command-line client to work with it. Also, learn Helm.
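
Roughly, the first commands look like the sketch below; the cluster name, zone, and Helm chart are placeholders, so check the current gcloud and Helm documentation for exact flags:

```shell
# Create a small GKE cluster (name and zone are placeholders).
gcloud container clusters create my-cluster --zone us-central1-a --num-nodes 3

# Point kubectl at the new cluster.
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# Sanity check.
kubectl get nodes

# Install a chart with Helm (Helm 3 syntax; the chart is a placeholder).
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx
```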

Step 5: Understand Domains

With your Kubernetes cluster built and waiting for your application, it's time to think about how to break apart your application. DeployHub supports a Domain-Driven Design approach to organize, catalog, publish, and share microservices.

It's important to define your domain structure early in the process. Organizing your microservices and reusable components from the beginning will save you from a disorganized microservice implementation later.

Step 6: Add a Binary Repo to Your CD Workflow

Start managing your binaries in a binary repository that the container build step can pull from. Every build is stored with a version number, which is critical for tracking history.

Step 7: Add a Docker (Container) Repository to Your CI/CD

Next, push every container image you build to a Docker (container) registry, such as Docker Hub or a private registry. Each image is stored with a version tag, which is critical for tracking history.
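
A sketch of that step in a pipeline, using the version-plus-commit tag format discussed in the Q&A below; the registry and image names are placeholders:

```shell
# Tag each image with a semantic version plus the git commit that triggered the build.
VERSION=v1.5.1.0
COMMIT=$(git rev-parse --short HEAD)

docker build -t registry.example.com/my-app:${VERSION}-g${COMMIT} .

# Push it so every build is kept in the registry and traceable later.
docker push registry.example.com/my-app:${VERSION}-g${COMMIT}
```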

Step 8: Auto Deploy via CI/CD

Use DeployHub Team to auto-deploy. DeployHub will track your configurations and versions, giving you a single source of truth.


What we built.

That is about it. Don't overthink the process of migrating to microservices. Take your time and complete each step. You will need to make some minor changes to your CI/CD workflows to support new processes, but it is all very doable. From this simple example, you can begin building on your microservices strategy and launch your journey into modern architecture.

Questions and Answers on the Topic of Migrating to Microservices

Is There a Way to Capture the Application Dependencies When Migrating an App to a Container for the First Time?

The build is the best place to start. The compile and link process will show the dependencies needed, e.g., a JDBC driver or the C runtime. If you are an OpenMake Meister user, all of this information is displayed in the impact analysis reports. From there, you start with your Dockerfile, doing a "yum install" of the packages needed to satisfy your app.

A JDBC dependency indicates that you need to do a "yum install" of the proper database runtime. If your app uses SSL, then "yum install openssl" is needed. Again, the libraries consumed in the build will need to be resolved at execution time; therefore, the build drives your OS-level package installs.
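
For example, a hypothetical Dockerfile fragment; the package names are illustrative, so install whatever the build's dependency report actually shows:

```dockerfile
# Sketch: satisfy build-reported dependencies inside the image (package names are examples only).
FROM centos:7
RUN yum install -y mariadb openssl && yum clean all
```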

How Does DeployHub Integrate With a CD Pipeline in Azure DevOps?

First, DeployHub hooks into the CI part of Azure. When a build happens, DeployHub is notified of what was created and where it came from. The next level is the application view: the Azure pipelines build images for independent microservices, but they don't have a logical view of the application.

DeployHub brings that logical view of the application to Azure, along with configuration management and application-version-to-component-version dependency mapping. When a deployment is needed, Azure triggers DeployHub to deploy the logical application version. DeployHub looks across all of the pipeline artifacts and deploys only the incremental changes out to Kubernetes.

Can You Expand on What You Mean by DeployHub Being a Modern Configure-and-Deploy Solution? Does This Refer to the Microservices Configuration Only?

DeployHub is a 'modern' configure-and-deploy solution in two ways. First, for monolithic applications that are being pushed via an agile practice, DeployHub can perform incremental updates of your monolith, including database modifications.

DeployHub gets you thinking in more of a 'component' context, where individual pieces can be deployed incrementally, which fits an agile practice better than deploying everything every time.

For example, it can support version jumping, moving production from Version 2 to Version 10 and applying only what has changed, including your application binaries (easy), environment variables (a bit harder), and database schema changes (much harder). Because of its back-end version control engine, it knows the differences. Think of versioning code, applied to your software configuration.

Second, DeployHub includes a Domain Structure that allows you to catalog and publish your reusable components, such as microservices. And because it already has built-in configuration version control, it can easily manage microservices that are independently deployed. It also tracks the relationships between 'components' (microservices) and the applications that use them.

The DeployHub platform supports both monolithic and microservice architectures based on an independently deployable component design.

With NGINX, Is That a Cluster? How Do You Manage the Single-Point-of-Failure Issue in Production?

You can run multiple replicas of the NGINX pod to provide high availability and scalability for the Ingress.
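
In Kubernetes terms, that is a Deployment with more than one replica; a minimal sketch (names are placeholders):

```yaml
# Sketch: run three replicas of NGINX so the ingress layer has no single point of failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
```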

What About Running a Database (RDBMS) in a Container?

To run an RDBMS in a container, you need to persist the data externally to the container. This setup only allows a single container to have read/write access to that mounted volume; having multiple containers access the volume will corrupt the database due to uncontrolled writes to the underlying data files.

Cloud providers have managed database solutions that your containers can talk to directly, without the need to run the RDBMS yourself. For on-prem, connecting to Oracle, DB2, or MySQL via a network connection is the way to go.

How Do We Ensure Different Versions of a Deployed Application on a Container? Is It Through Different Versions of Container Images on Docker Hub?

Docker container images themselves are immutable, but the tag on an image can be changed. Use the LABEL instruction in the Dockerfile to embed information about the git commit, git repo, and CI build into the container. This information is immutable and can be queried after the docker build, no matter what the tag is set to. As for tagging, we recommend using a semantic version number such as v1.5.1.0 plus the git commit that triggered the build, for example, v1.5.1.0-gd4c633fc.

This format gives you a quick way to get back to the git changes while still having a "human-readable" version to reference. At deploy time, the image tag is used to determine whether a new image needs to be pulled down. You may rebuild based on a new commit without changing the semantic version; this format handles that scenario without forcing a semantic version number change.

Docker Hub itself can handle multiple versions of a container; you just need to manage your tags. Do not reuse tags; create new ones to move from version to version.
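
A sketch of the LABEL-plus-tag approach; the label keys and build arguments are placeholders that your CI would fill in with --build-arg:

```dockerfile
# Sketch: bake immutable build metadata into the image, independent of the mutable tag.
FROM eclipse-temurin:17-jre

# Placeholders that CI supplies via --build-arg.
ARG GIT_COMMIT=unknown
ARG GIT_REPO=unknown
ARG CI_BUILD=unknown

LABEL git.commit="${GIT_COMMIT}" \
      git.repo="${GIT_REPO}" \
      ci.build="${CI_BUILD}"

COPY target/my-app.jar app.jar
ENTRYPOINT ["java", "-jar", "app.jar"]
```

The labels can then be read back with docker inspect no matter how the tag changes.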

Would NGINX Interface With the Kubernetes API Server or Directly With the Kube Pods for the Reverse Proxy?

NGINX lets you intercept the incoming URI and route it to a pod of containers. This routing is done using the NGINX reverse proxy. If you need to do fancier interception of the transaction, a service mesh like Istio should be used, since it can interrogate anything in the HTTP header and route based on that info.
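
Inside a cluster running the NGINX Ingress controller, that URI-based routing is usually expressed as an Ingress resource; the host, paths, and service names below are placeholders:

```yaml
# Sketch: route /orders and /catalog to different backend services via the NGINX Ingress controller.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-routes
spec:
  ingressClassName: nginx
  rules:
    - host: example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders-service
                port:
                  number: 80
          - path: /catalog
            pathType: Prefix
            backend:
              service:
                name: catalog-service
                port:
                  number: 80
```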

How Do We Implement a Messaging Broker Like Kafka or RabbitMQ as Part of a Microservices-Based Application Architecture?

The message broker would be installed into the cluster as a container/pod; there may be many different containers needed to run the broker. Once the broker is running, you will be able to reach it by the well-known names defined for the pods in the cluster. A trick is to open a "bash" shell in one of the pods and run 'set' to see the well-known names being generated. Also, look at the /etc/hosts file for additional information.
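
That inspection trick looks roughly like this; the pod name is a placeholder:

```shell
# Open a shell inside one of the broker pods (pod name is a placeholder).
kubectl exec -it my-broker-pod-0 -- bash

# Then, inside the pod:
set | grep SERVICE    # Kubernetes-injected variables such as *_SERVICE_HOST / *_SERVICE_PORT
cat /etc/hosts        # extra name entries visible to the pod
```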

How Do You Debug Mistakes in Changing APIs? Unexpected Partial Breakage in the Death Star Network of Dependencies.

The first step is to understand the relationships among your microservices and how they make up your application version. DeployHub can help you do this by providing a map of the relationships. The map gives you a starting point and a pathway to the API having the error.

From there, you would start at the beginning of the path and look at the log output to see whether the data coming in and being passed along is what you expect. The API error may be the result of "bad" data upstream from where the error appears. You can trace the log messages down to a container in the pods.

Given the data and the version of the container, you should be able to resolve the issue. Be aware that the problem may be the client side sending bad data (no edit checks) to the back end, and the back end either doesn't handle it or throws an error.
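
Tracing those log messages down to a container usually comes down to a few kubectl commands; the labels and pod names below are placeholders:

```shell
# Find the pods behind the failing service and read their logs.
kubectl get pods -l app=orders-service
kubectl logs orders-service-7d4f9c-abcde            # single-container pod
kubectl logs orders-service-7d4f9c-abcde -c api     # pick a container in a multi-container pod
kubectl logs --previous orders-service-7d4f9c-abcde # logs from the last crashed instance
```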

You Are Describing Containerizing a Customer-Facing Application. Do You Recommend Applying the Pattern Internally to Containerize the Build Environment Itself?

Jenkins X, Google Cloud Build, Tekton, and CircleCI are containerized build systems. Use a containerized build system that already exists rather than building one yourself. Each has its strengths based on the type of application you are building and what your pipeline looks like.

What Is the Difference Between DeployHub Functions and garden.io?

DeployHub functions outside of the cluster, which enables it to adapt to the different types of Kubernetes providers used in the pipeline; for example, developers use minikube, the testing team uses AWS, and production uses on-prem OpenShift. garden.io lives in a single cluster and is more similar to the Tekton project.

Would You Change Something in Your Migration Flow If You Were Migrating to AWS Instead of Google Cloud?

No, the cloud provider does not matter. They all run Kubernetes; they just have different sets of commands you run to configure your client (kubectl) to talk to the cluster. Once the config is done, the kubectl commands will be the same for all providers.
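
For example, only the step that wires kubectl to the cluster differs by provider; cluster names, regions, and resource groups below are placeholders:

```shell
# Google GKE
gcloud container clusters get-credentials my-cluster --zone us-central1-a

# AWS EKS
aws eks update-kubeconfig --name my-cluster --region us-east-1

# Azure AKS
az aks get-credentials --resource-group my-group --name my-cluster

# From here on, the same kubectl commands work against any of them.
kubectl get pods --all-namespaces
```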

How Does DeployHub Compare to AWS CodeStar?

CodeStar is a CD platform that manages a pipeline; it's similar to Jenkins and CircleCI. DeployHub would be one of the tools that CodeStar interfaces with: DeployHub keeps track of the dependency relationships and manages the configuration of your application.

CodeStar would run the pipeline, calling out to DeployHub to grab information about the builds and to do deployments at the logical application level. CodeStar can do deployments at the individual container level, but it has no knowledge of how that container relates to the others or what impact the deployment is going to have.


Further Reading

Migrating Monolithic Applications to Microservices

Monolithic to Microservices

A Transition From Monolith to Microservices

Topics:
microservices, tutorial, microservices adoption, migrating to microservices, java application, roadmap, nginx, container, container adoption, codestar

