TechTalks With Tom Smith: Keys to Migrating Legacy Apps to Microservices
Plan, have the right architecture, follow a DevOps methodology with automated CI/CD, and align across IT on the migration plan.
To understand the current state of migrating legacy apps to microservices, we spoke to IT executives from 18 different companies. We asked, "What are the most important steps to migrate legacy apps?" Here’s what we learned:
- Across our 30,000 customers, we see high-performing applications migrating early and the less capable ones later on. Enterprise applications tend to move first. You don't want to write plumbing; you want to write business logic, and that means breaking up monoliths. It's faster to innovate when you break common apps down and have them be loosely coupled. Maybe you should be doing micro-integrations alongside microservices: re-integratable and remixable. If you don't have a plan, you'll be in trouble. Internal evangelism is important: listen to the people who struggle to maintain applications, hear their pain, and then help them with it.
- It is not necessary to migrate everything. Some things might not make sense to migrate at all, like databases. It may be fine to run those as pets: something heavyweight, with dedicated machines and staff to maintain it. You can migrate a large number of apps to microservices and still use an old-style database. For those applications that you've identified as good targets for migration, expect a long migration process. The process is long for a reason: legacy apps don't tend to port well in a single step, and microservices are fundamentally different in how you design and organize code and in the patterns you follow.
- To understand how legacy apps are structured and configured and what the various components are, you will have to decompose the existing app into multiple pieces; that makes it easier to migrate. It helps to understand the service tiers an application is composed of. A legacy application was likely cobbled together, so splitting it into service tiers makes it better suited for microservices. Understand the tiers, separate them, and containerize each tier as an independent microservice. Doing so lets you take advantage of platforms that scale and run microservices. By separating the service tiers you can treat them individually, scale them differently, and make the application more robust.
- 1) Understand why you are migrating. Is it for easier integration, maintenance, etc.?
2) Know what functionality is most important to expose first and what needs migration vs. just access.
3) Be able to build the project in stages with actual results as you go so the business doesn't grind to a halt during the process. We need to show the value as soon as possible. This is also useful because end-users get the benefits much more quickly.
4) Have an intermediary step that allows access to the legacy system, so other teams can build applications that work with it while the migration takes place.
- There are three key steps towards moving to a microservices architecture:
1) The first step is to understand the reasons for migration – potential reasons could be that updates to the apps are difficult due to cross-dependencies, scaling of individual parts of the application is difficult, data privacy/security is not managed well, etc. A well-articulated business case with these reasons is critical to sustaining any migration initiatives.
2) The next step is to select the platform and architecture. Microservices can be complex to develop and operate, so it is important to develop an Enterprise Microservices strategy. A “Vertical Slice” project will establish the viability of migration – it will also surface potential issues early enough to develop mitigation processes. Another factor to consider in this step is the competency of the team – it is an easier sell if the programming language/platform is the same as what the team is comfortable with!
3) The third step is to determine how the legacy app will be migrated. The current best practice is to decompose the legacy app into logical components and then redesign component by component using design patterns such as the Strangler pattern. This allows the application to be migrated gradually instead of in a big-bang approach.
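The component-by-component Strangler migration described above can be pictured as a routing facade: requests for paths that have already been migrated go to the new microservice, and everything else falls through to the legacy app. A minimal sketch, where the path prefixes and handler names are hypothetical:

```python
# Sketch of a Strangler facade: a routing layer that dispatches
# already-migrated paths to new microservices and everything else
# to the legacy monolith. Prefixes and handlers are illustrative.

def legacy_app(path: str) -> str:
    return f"legacy handled {path}"

def orders_service(path: str) -> str:
    return f"orders microservice handled {path}"

# Routes migrated so far; this table grows component by component.
MIGRATED = {
    "/orders": orders_service,
}

def route(path: str) -> str:
    """Dispatch to a migrated service if a prefix matches, else to the monolith."""
    for prefix, handler in MIGRATED.items():
        if path.startswith(prefix):
            return handler(path)
    return legacy_app(path)

print(route("/orders/42"))   # served by the new microservice
print(route("/billing/7"))   # still served by the monolith
```

In a real system the facade would be an API gateway or reverse proxy, but the mechanic is the same: the routing table migrates one component at a time until the monolith is fully strangled.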
- Most of our customers still consume software on-prem, so it doesn't make sense to architect it as microservices today. But to prepare for the future, when we will offer our product as a managed service, we've done our homework: we ran a beta of where we wanted to end up for a future managed service offering. We learned that this is not a trivial transition. On-prem customers were not as demanding, since there wasn't a high cost of continuity for a service interruption, but it was still challenging to figure out what to break up, how, and where it resides. Each team can have its own culture and own its own processes and tools, so you have to decide how to organize around the tool preferences of different teams. You end up creating little silos of microservices if you're not careful. Take the time upfront to declare what the standards are.
- The most important step is to properly design your upgraded architecture to take advantage of microservices. This often requires a shift in mindset from building legacy apps. Though many of the design goals, such as modularity and reuse, are common to both, there are differences that must be emphasized. For example, a microservices design aims for the most limited scope for each microservice, which helps simplify strategies around performance, scalability, fault tolerance, and maintainability.
- If you model your architecture the way you model a monolithic application, even with modularity and reusability in mind, you can still end up with bloated apps that are harder to maintain. Another step is to identify the right messaging layer between microservices. APIs and REST interfaces might work for some microservices architectures, but more dynamic systems will use streaming technologies and even in-memory platforms to handle inter-service communications. Technologies like Hazelcast are especially interesting since you get in-memory speeds for extreme performance in both processing and inter-service communication. A third step is to plan for next-generation capabilities. While microservices have historically been stateless, the need to capture state is becoming increasingly important for building large-scale systems. Again, in-memory and stream processing technologies work well here, especially in environments that require high throughput and low latency.
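The decoupling that a messaging layer buys can be illustrated with a toy in-process publish/subscribe broker. A real deployment would use a streaming platform or an in-memory data grid, but the shape of the interaction is the same; the topic name and payload here are made up:

```python
from collections import defaultdict
from typing import Callable

# Toy publish/subscribe broker: services register handlers for topics
# and never call each other directly, mirroring the decoupling that a
# streaming or in-memory messaging layer provides between microservices.
class Broker:
    def __init__(self) -> None:
        self._subs: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Fan the event out to every subscriber of the topic.
        for handler in self._subs[topic]:
            handler(event)

broker = Broker()
received = []

# The inventory service reacts to order events without knowing who publishes them.
broker.subscribe("order.created", lambda e: received.append(e["order_id"]))
broker.publish("order.created", {"order_id": 42})
print(received)  # [42]
```

The publisher never learns who consumed the event, which is what lets individual services be scaled, replaced, or added without touching their peers.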
- We are changing architectures as a result of more complex software demands: speed to market, geographic expansion, platform expansion. A change should either drive more business or improve team productivity; if the change you are going to make won't do one or the other, don't make it. Microservices are great tech for specific use cases, and the transition is a journey. If you have trouble maintaining a monolith, don't go to microservices; fix whatever problem you have today before starting the transition. What is the biggest pain point? Tuning and optimizing your architecture to achieve the business requirements. You need buy-in from leadership, and you need to reduce information asymmetry. Tools help maintain, manage, and scale microservices.
- Adopting microservices may require significant cultural and architectural shifts. A few important application management, agile and DevOps competencies for application delivery that should be adopted by development, deployment, and application teams looking to make the transition to microservices include:
1) Security embedded in DevOps processes.
2) Continuous integration and continuous delivery.
3) Automation of core infrastructure and releases.
4) Deploying a Kubernetes application management model.
- First – automate. Continuous integration pipelines for testing and building reproducible artifacts are absolutely crucial to any considerable refactoring effort. Standardize on protocols. Pick a standard way for services to communicate over well-defined contracts. We leaned on type safety and chose Protocol Buffers to represent message payloads.
- We use message passing to represent state changes, to limit the extent to which services even have to know of one another. We use gRPC for synchronous communication, sparingly, when it makes sense. Choose which area is strategic for you to focus on and start carving out well-defined bounded domains; these will most likely correlate with your microservices. Design the data flow for these microservices. Consider establishing a bridge component to encapsulate interaction between your new mode of communication over contracts and the legacy system.
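The bridge component described above can be sketched as a thin adapter that translates the legacy system's records into typed state-change events that new services consume. This sketch uses a plain dataclass where the quoted setup would use a Protocol Buffers message; the event shape and legacy record format are hypothetical:

```python
from dataclasses import dataclass

# A typed state-change message. In the setup described above this would be
# a Protocol Buffers message; a frozen dataclass shows the same contract idea.
@dataclass(frozen=True)
class CustomerUpdated:
    customer_id: int
    email: str

class LegacyBridge:
    """Encapsulates interaction with the legacy system: it observes
    legacy-side changes and re-emits them as well-defined events."""

    def __init__(self, publish) -> None:
        self._publish = publish

    def on_legacy_row_changed(self, row: dict) -> None:
        # Translate the legacy record format into the new event contract.
        self._publish(CustomerUpdated(customer_id=row["id"], email=row["email"]))

events = []
bridge = LegacyBridge(events.append)
bridge.on_legacy_row_changed({"id": 7, "email": "a@example.com"})
print(events[0])  # CustomerUpdated(customer_id=7, email='a@example.com')
```

Only the bridge knows the legacy schema; everything downstream sees the stable, versionable contract, which is what makes the legacy side replaceable later.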
- 1) Get all of IT plus the business leaders on the same page on the purpose of the migration. Is it for rapid release, modernizing your application, or scale?
2) Ensure all the relevant staff in development, infrastructure, operations, and business leadership have a clear understanding of the baseline metrics that show how each component of the application is performing, so you can easily see how the end-user experience is impacted. Improvements validate your move, while regressions let you quickly roll back, reducing the impact. Throughout, real-time pre- and post-production monitoring is even more critical.
3) While refactoring the application, developers need to prioritize which services they would like to migrate first; it's always best to start slowly with something that is decoupled from the main legacy application, so you can migrate with minimal impact.
4) Continuously enable all the application owners to migrate their application services to a microservice with an overarching architecture designed for the desired end state, while continuously monitoring the moves so that the application owners can compare before and after, enabling a quick validation and ability to roll back if needed.
5) Once complete, initiate the CIO's plans for a full continuous-release environment to automatically deliver improvements in features and performance.
6) Fully understand the costs to migrate and maintain the application. Everyone needs to understand not only what the migration itself will cost but also the future maintenance costs (consumption model).
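The baseline-metric comparison and rollback decision in step 2 can be reduced to a simple check: compare a post-migration measurement against the pre-migration baseline and roll back if it regresses beyond a tolerance. A sketch, with the threshold and latency numbers purely illustrative:

```python
# Toy rollback check: compare a post-migration latency sample against the
# pre-migration baseline captured before the move. Numbers are illustrative.
def should_roll_back(baseline_ms: float, current_ms: float,
                     tolerance: float = 0.10) -> bool:
    """Recommend rollback if latency regressed more than `tolerance` over baseline."""
    return current_ms > baseline_ms * (1 + tolerance)

print(should_roll_back(120.0, 128.0))  # False: within 10% of baseline, keep going
print(should_roll_back(120.0, 150.0))  # True: regression, roll the service back
```

Real monitoring tools evaluate many such signals (latency, error rate, throughput) continuously, but each one is ultimately this comparison against a recorded baseline.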
- Implement new functionality as microservices; it's difficult, but it lets you slowly and steadily take down the monolith. New services expose external APIs. When you have sufficient scale in microservices, you can think about changing the whole. Pursue the "strangler approach": extend the monolith using microservices, with blue/green and canary deployments. At industrial scale, as the number of services grows, a tipping point occurs. As enterprises gain experience with microservices, people learn how to organize and break things down.
- There are four options for migrating:
1) Some customers say it’s too hard to move to microservices so they stay with a monolithic application and will get left behind.
2) The next group says, "We're going to hire a new team to do greenfield applications." That doesn't solve the problem: you're still supporting the monolithic application, it's not going anywhere, and the old team doesn't want the new team to succeed.
3) "We are capable, and we are going to take every bit and byte from the monolithic application to a microservices application." Good luck: it's very hard to do, there's too much spaghetti code, it's almost impossible, it will take at least a year, and customers get no new features while you're doing it.
4) Write everything new as a microservice; this approach started with Lyft. Glue the environments together: every new feature is developed as a microservice, and a proxy abstracts your network, giving you a hybrid application with two architectures. When you have time, start migrating the monolith on your own schedule. A gradual way is the right way. We built Gloo for that use case: to grow everything together and help customers migrate to microservices.
- A microservices architecture can bring the benefits of faster development and deployment times, and it can provide better hardware utilization. It is important to:
1) Understand which applications are actively developed and can benefit from the new microservices model.
2) Review the application architecture to understand which sub-components could be implemented as microservices. In many cases, the migration process requires changes in the software architecture and reimplementation of its modules, so the application domain model must be clear before starting the rearchitecting.
3) Understand (potential) external dependencies your application has. If you have external dependencies touching several parts of the legacy application, your team might not be able to implement the microservice architecture, or if implemented it wouldn’t bring any benefits because the external dependencies will dictate the data models and speed of changes.
4) Define how your microservices application is operated. A monolithic legacy application might be just one service running on one server, while a microservices architecture can have tens or hundreds of microservices, so you must use proper tooling to operate it. This means you must either build your own operations tooling or use container technologies with container orchestration frameworks, such as K8s.
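The operational point in step 4, that tens or hundreds of services need automated tooling rather than manual checks, can be sketched as a tiny health aggregator. Real deployments lean on an orchestrator like Kubernetes for this; the service names and probes below are invented stubs:

```python
from typing import Callable

# Toy health aggregator: with many microservices you need automated probes
# instead of checking a single server by hand. Probes here are stubs that
# stand in for real HTTP health-check calls.
def check_all(probes: dict[str, Callable[[], bool]]) -> dict[str, str]:
    """Run every service's health probe and report UP/DOWN per service."""
    return {name: ("UP" if probe() else "DOWN") for name, probe in probes.items()}

probes = {
    "orders":  lambda: True,
    "billing": lambda: True,
    "search":  lambda: False,  # pretend this one is failing
}

status = check_all(probes)
print(status)  # {'orders': 'UP', 'billing': 'UP', 'search': 'DOWN'}
```

Orchestrators generalize exactly this loop: declare a probe per service, and the platform restarts or reroutes around anything reporting DOWN.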
- Keep things in a single service as long as you can and then scale out as necessary.
- Microservices are about speed. There are two kinds of users:
1) Those who are already working in the cloud.
2) Those who are trying to move to the cloud and require more education; you need to involve more people to sign off on microservices, so we help educate them on how containers help with agility. In the formative years of Facebook, around 2007, we had a traditional cache and database and transformed them into smaller groups, each independent of the others. Application design tools and frameworks were adapted so code could be written independently; Tupperware was similar to K8s today. Then came geographic distribution of data, moving it closer to consumers for speed and GDPR, with the scaling of the database hidden away.
Here’s who shared their insights:
- Gregg Ostrowski, Regional CTO, AppDynamics
- Jaime Ryan, Head of Strategy, Layer7 Security and Integration, Broadcom Enterprise Software
- Sanjay Challa, Director of Product Management, Datical
- Dale Kim, Senior Director of Product Marketing, Hazelcast
- Marco Palladino, CTO, Kong
- Karthik Krishnaswamy, Director of Product Marketing, F5
- Joe Leslie, Senior Product Manager, NuoDB
- Zeev Avidan, Chief Product Officer, OpenLegacy
- Matt Yonkovit, Chief Experience Officer, Percona
- Bich Le, Chief Architect, Platform9
- Sridhar Jayaraman, VP of Engineering, Qentelli
- Anurag Goel, CEO, Render
- Patrick Hubbard, Technical Product Marketing Director, SolarWinds
- Idit Levine, Founder and CEO, Solo.io
- Markku Rossi, CTO, SSH.com
- Nick Piette, Director Product Marketing API Services, Talend
- Ophir Radnitz, Software Architect, Tufin
- Karthik Ranganathan, Co-founder and CTO,
Opinions expressed by DZone contributors are their own.