Microservices and Docker at Scale

Microservices and Docker have become the peanut butter and jelly of modern app delivery. Together they allow organizations to work in a consistent, isolated runtime environment.

By Anders Wallgren · Feb. 06, 17 · Opinion

This article is featured in the new DZone Guide to DevOps: Continuous Delivery and Automation, Volume IV. Get your free copy for more insightful articles, industry statistics, and more!

Microservices and containers have recently garnered a lot of attention in the DevOps community. Docker has matured and is expanding from being used predominantly in the build/test stages to production deployments. Similarly, microservices are expanding from mostly greenfield web services into the enterprise, as organizations explore ways to decompose their monolithic applications to support faster release cycles.

As organizations strive to scale their application development and releases to achieve Continuous Delivery, they increasingly consider microservices and containers, challenging as these can be. While both offer benefits, they are not "one size fits all," and we see organizations still experimenting with these technologies and design patterns for their specific use cases and environments.

Why Microservices? Why Containers?

Microservices are an attractive DevOps pattern because they enable speed to market. With each microservice being developed, deployed, and run independently (often using different languages, technology stacks, and tools), microservices allow organizations to "divide and conquer" and scale teams and applications more efficiently. When the pipeline is not locked into a monolithic configuration — of toolset, component dependencies, release processes, or infrastructure — there is a unique ability to better scale development and operations. It also becomes easier to determine which services don't need scaling, in order to optimize resource utilization.

Containers offer a well-defined, isolated runtime environment. Instead of shipping an artifact and all of its configuration separately, containers package everything into a Docker image that is promoted through the pipeline as a single unit and runs in a consistent environment. In addition to isolation and environment consistency, running a container process carries very low overhead. This support for environment consistency from development to production, alongside extremely fast provisioning, spin-up, and scaling, accelerates and simplifies both development and operations.
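As a rough sketch of what that packaging looks like, a Dockerfile describes the runtime environment together with the artifact. The service name, base image, artifact path, and port below are hypothetical; substitute whatever your service actually needs:

```dockerfile
# Hypothetical Dockerfile for a single service.
# Base image, artifact name, and port are assumptions for illustration.
FROM eclipse-temurin:17-jre

WORKDIR /app

# The artifact produced by the build stage of the pipeline.
COPY target/orders-service.jar app.jar

# Run as an unprivileged user rather than root.
USER 1000

EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Building this once in CI and promoting the resulting image unchanged through test and production is what provides the environment consistency described above.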

Why Run Microservices in Containers?

Running microservices-based applications in a containerized environment makes a lot of sense. Docker and microservices are natural companions, forming the foundation for modern application delivery.

At a high level, microservices and Docker together are the PB&J of DevOps because:

  • They are both aimed at doing one thing well, and those things are complementary.
  • What you need to learn to be good at one translates well to the other.

More specifically:

Purpose             

A microservice is (generally) a single process focused on one aspect of the application, operating in isolation as much as possible.

A Docker container runs a single process in a well-defined environment.

Complexity

With microservices, you now need to deploy, coordinate, and run multiple services (dozens to hundreds), whereas before you might have had a more traditional three-tier/monolithic architecture. While microservices support agility — particularly on the development side — they come with many technical challenges, mainly on the operations side.

Containers help with this complexity because they make it fast and easy to deploy services, particularly for developers.

Scaling

Microservices make it easier to scale because each service can scale independently of other services.

Container-native cluster orchestration tools, such as Kubernetes, and cloud environments, such as Amazon ECS and Google Container Engine (GKE), provide mechanisms for easily scaling containers based on load and business rules.
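As a minimal sketch of that kind of rule-based scaling (the Deployment name and thresholds here are hypothetical), a Kubernetes HorizontalPodAutoscaler grows and shrinks a single service based on observed load:

```yaml
# Hypothetical autoscaling rule: keep average CPU around 70%,
# running between 2 and 10 replicas of the orders-service Deployment.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: orders-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```

Because each microservice gets its own rule, a chatty front-end service can scale on different criteria than a batch-oriented back-end one.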

System Comprehension

Both microservices and containers essentially force you into better system comprehension — you can’t be successful with these technologies if you don’t have a thorough understanding of your architecture, topology, functionality, operations, and performance.

Challenges

Managing microservices and large-scale Docker deployments poses unique challenges for enterprise IT. Because there is so much overlap in what an organization has to be proficient at in order to successfully deploy and modify microservices and containers, there is quite a bit of overlap in the challenges and best practices for operationalizing containers and microservices at scale.

  • Pipeline variations increase. Orchestrating the delivery pipeline becomes more complex, with more moving parts. When you split a monolith into several microservices, the number of pipelines might jump from one to 50 (or however many microservices you have), each looking something like the sketch after this list. You need to consider how many teams you will need and whether you have enough people to support each microservice/pipeline.
  • Testing becomes more complex. More kinds of testing need to be taken into account: integration testing, API contract testing, static analysis, and more.
  • Deployment complexity increases. While scaling the containerized app is fairly easy, there’s a lot of activity that needs to happen first. It must be deployed for development and testing many times throughout the pipeline, before being released to production. With so many different services developed independently, the number of deployments increases dramatically.
  • Monitoring, logging, and remediation become very important and increasingly difficult because there are more moving parts and different distributed services that comprise the entire user experience and application performance.
  • There are numerous different toolchains, architectures, and environments to manage.
  • You need to take existing legacy applications into consideration, and how they will be coordinated with the new services and functionality of container- or microservices-based applications.
  • Governance and auditing (particularly at the enterprise level) become more complicated with such a large distributed environment, and with organizations having to support both containers and microservices, alongside traditional releases and monolithic applications.
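To make the pipeline multiplication concrete, here is a sketch of what one per-service pipeline might look like, shown as GitLab CI YAML purely for illustration; the registry, service name, and scripts are assumptions, and every microservice would carry its own copy of something like this:

```yaml
# Hypothetical per-service pipeline: build an image, test it, deploy it,
# independently of every other service's pipeline.
stages:
  - build
  - test
  - deploy

build-image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    - docker build -t registry.example.com/orders-service:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/orders-service:$CI_COMMIT_SHORT_SHA

integration-tests:
  stage: test
  script:
    # Hypothetical test script: integration and API contract tests for this service only.
    - ./run-integration-tests.sh

deploy-staging:
  stage: deploy
  script:
    # Assumes kubectl is configured on the runner.
    - kubectl set image deployment/orders-service orders-service=registry.example.com/orders-service:$CI_COMMIT_SHORT_SHA
```

Multiply this by the number of services, environments, and toolchains in play, and the orchestration, governance, and staffing questions above become very real.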

In addition to these common challenges, microservices and containers each pose their own unique challenges. If you’re considering microservices, know that:

  • Distributed systems are difficult and mandate strong system comprehension.

  • Service composition is tricky and can be expensive to change. Start as a monolith and avoid premature decomposition until you understand your application's behavior thoroughly.

  • Inter-process failure modes need to be accounted for; abstractions that look good on paper are prone to bottlenecks.

  • Pay attention to transaction boundaries and foreign-key relationships, as they will make it harder to decompose.

  • Consider event-based techniques to decrease coupling further.

  • For APIs and service SLAs: "Be conservative in what you do, be liberal in what you accept from others."

  • State management is hard: transactions, caching, and other fun things...

  • Testing (particularly integration testing between services) and monitoring (because of the increased number of services) become way more complex.

  • Service virtualization, service discovery, and proper design of API integration points and backwards-compatibility are a must.

  • Troubleshooting failures: “every outage is a murder mystery.”

  • Even if a service is small, the deployment footprint must be taken into account.

  • You rely on the network for everything — you need to consider bandwidth, latency, reliability.

  • What do you do with legacy apps: Rewrite? Ignore? Hybrid?

Container Challenges

Security is a critical challenge, both because containers are still a relatively new technology and because of the security concerns around downloading an image file. Containers are black boxes to OpSec: less control, less visibility inside the container, and existing tools may not be container-savvy. Be sure to sign and scan images, validate libraries, and harden the container environment as well; drop privileges early, and use fine-grained access control so it's not all running as root. Be smart about credentials (container services can help).
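A minimal sketch of that hardening in a Kubernetes pod spec (the image name and user ID are hypothetical): the container is forced to run as a non-root user, cannot escalate privileges, and has its Linux capabilities dropped.

```yaml
# Hypothetical pod spec applying the hardening advice above:
# non-root user, no privilege escalation, read-only filesystem, no capabilities.
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  containers:
    - name: orders-service
      image: registry.example.com/orders-service:1.4.2
      securityContext:
        runAsNonRoot: true
        runAsUser: 10001
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]
```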

Monitoring is tricky, since container instances may be dropped or spun up continuously. Logging and monitoring need to be configured to decommission expired containers or to save the logs and data (business data, reference data, compliance data, logs, diagnostics) from these short-lived instances.

Know what’s running where, and why, and avoid image bloat and container sprawl.

Since the container hosting and cluster orchestration market is still emerging, we see users experimenting a lot with running containers across multiple environments, or using different cluster orchestration tools and APIs. These early adopters need to manage containers while minimizing the risk of lock-in to a specific cloud vendor or point tool, and without having to invest a lot of work (and a steep learning curve) in rewriting complex scripting in order to repurpose their deployment or release processes to fit a new container environment or tool.

Best Practices for Microservices and Containers

While, admittedly, there are a fair number of challenges when it comes to deploying microservices and containers, the end result will be reduced overhead costs and faster time to market. If microservices and containers make the most sense for your application use case, there is a great deal of planning that needs to happen before you decompose your application into a set of hundreds of different services, or migrate your data center to a container environment. Without careful planning and adherence to industry best practices, it is easy to lose the advantages of microservices and containers.

To successfully run microservices and containers at scale, there are certain skill sets that the organization must possess throughout the software delivery cycle:

  • Build domain knowledge. Before deploying microservices, it is critically important to understand the domain well enough to make difficult decisions about where to partition the problem into different services. Stay monolithic for a while; keep it modular and write good code.
  • Each service should have independent CI and deployment pipelines so you can build, verify, and deploy each service without having to take into account the state of delivery for any other service.
  • Pipeline automation: a ticketing system is not automation. With the increase in the number of pipelines and pipeline complexity, you must be able to orchestrate your end-to-end process, including all the point tools, environments, and configuration. You need to automate the entire process: CI, testing, configuration, infrastructure provisioning, deployments, application release processes, and production feedback loops.
  • Test automation: Without first setting up automated testing, microservices and containers will likely become a nightmare. An automated test framework will check that everything is ready to go at the end of the pipeline and boost confidence for production teams.
  • Use an enterprise registry for containers. Know where data is going to be stored and pay attention to security by adding modular security tools into the software pipeline.
  • Know what’s running where and why. Understand the platform limitations and avoid image bloat.
  • Your pipeline must be tool- and environment-agnostic so you can support each workflow and toolchain, no matter what they are, and so that you can easily port your processes between services and container environments.
  • Consistent logging and monitoring across all services provide the feedback loop to your pipeline. Make sure your pipeline automation plugs into your monitoring so that alerts can trigger automatic processes such as rolling back a service, switching between blue/green deployments, or scaling (a minimal health-probe sketch follows this list). Your monitoring and performance testing tools need to allow you to track a request through the system even as it bounces between different services.
  • Be rigorous in handling failures (consider using Hystrix, for example, to bake in better resiliency).
  • Be flexible in staffing and organizational design for microservices. Consider whether there are enough people for one team per service.
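As referenced in the monitoring point above, here is a sketch of how a container can expose its health to the platform so that basic remediation (restarts, traffic shifting) happens automatically; the endpoints, port, and timings are assumptions:

```yaml
# Hypothetical container spec fragment: the orchestrator restarts the container
# when the liveness probe fails and only routes traffic once readiness passes.
livenessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
readinessProbe:
  httpGet:
    path: /ready
    port: 8080
  periodSeconds: 5
```

Application-level monitoring and request tracing still sit on top of this, but even these basic signals let the cluster restart or stop routing to a misbehaving service without human intervention.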

There is increasing interest in microservices and containers, and for good reasons. However, businesses need to make sure they have the skills and knowledge for overcoming the challenges of managing these technologies reliably, at scale. It is critical to plan and model your software delivery strategy, and align its objectives with the right skill sets and tools, so you can achieve the faster releases and reduced overhead that microservices and containers can offer.

More DevOps Goodness

For more insights on implementing unambiguous code requirements, Continuous Delivery anti-patterns, best practices for microservices and containers, and more, get your free copy of the new DZone Guide to DevOps!

If you'd like to see other articles in this guide, be sure to check out:

  • Continuous Delivery Anti-Patterns

  • 9 Benefits of Continuous Integration

  • Better Code for Better Requirements

  • Autonomy or Authority: Why Microservices Should Be Event-Driven

  • Executive Insights on the State of DevOps

  • Branching Considered Harmful! (for Continuous Integration)

  • What the Military Taught Me About DevOps


Opinions expressed by DZone contributors are their own.
