To understand the current and future state of containers, we gathered insights from 33 IT executives who are actively using containers. We asked, "How has the orchestration and deployment of containers changed application development?"
Here's what they told us:
- Containers have improved the speed with which you can spin up environments, which results in better tests and greater access to tests and reporting. It's a lot faster, better, and easier, and you don't have to wait for IT to spin up a VM.
- Significantly improved velocity. Containerization is a new vehicle for companies migrating to cloud-native architectures and microservices. The combination of open source, an easy-to-use toolchain, and the flexibility of containers opens DevOps teams up to feature and product velocity, control, and security. Additionally, containers (Docker specifically) have a relatively gentle adoption curve that makes them easy to implement.
- Containers facilitate cultural and technical change. We focus on COGS as a SaaS provider, and containers allow us to have a smaller physical footprint and use tooling that declaratively describes applications and infrastructure. We're able to get apps up and running faster. In addition, containers enable resource isolation to reduce MTTR.
- Containers provide greater portability and developer control. In practice, the benefit depends on the user: teams migrating more traditional applications gain image-based deployment, which is faster, easier, and less hectic, much like an automation framework. Containers enable you to load at the factory rather than at the dock, and whether you are new school or old school, automating at the factory rather than at the dock is a real convenience. You can get 1,000 containers up in seconds versus 1,000 servers in three days. You don't have to play the resource game, and you can separate the developer use case from the production use case.
- Containers facilitate agility, speed, and autoscaling. Container density per host is up, resulting in greater cost efficiencies. Because containers are ephemeral, running a stateless application in them is simple; the need to maintain state, however, has slowed adoption. People are starting to figure that out, and more stateful applications are now running on containers.
- The DevOps movement has always been about delivering better software, faster. Along with CI/CD and microservices, container orchestration has offered one of the fastest mechanisms to achieve this goal. As organizations have moved to cloud-native development and platform services, worrying about the specifics of stateful infrastructure has become a thing of the past. When running a flavor of Kubernetes (K8s), there is a common Infrastructure-as-Code concept that helps developers spend less time worrying about infrastructure specifics. Write a K8s Deployment or StatefulSet, and it will run pretty much the same on any supported K8s environment.
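The Infrastructure-as-Code point can be made concrete with a minimal Deployment manifest. This is an illustrative sketch, not from any of the respondents; the app name, image, and port are placeholder assumptions:

```yaml
# Minimal Kubernetes Deployment: the same manifest runs largely
# unchanged on any conformant K8s cluster. The app name and image
# are placeholders for illustration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0.0
          ports:
            - containerPort: 8080
```

Applied with `kubectl apply`, the same file should behave largely the same on any conformant cluster, which is what lets developers stop thinking about infrastructure specifics.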
- Speed. Containers are wonderful for microservice-based architectures because they can be spun up rapidly and torn down just as quickly. This enables the development and deployment of new technologies in a matter of minutes rather than days, weeks, or months.
- There has been an evolution of the CI/CD pipeline to automate and accelerate code for development and production throughout the entire enterprise. Applications can scale much more easily. Updates and upgrades are easier. Containers have changed the go-to-market strategy by emphasizing infrastructure needs up front and delivering on the promise of being easy to scale. Customers can move tools to the cloud.
- The ability to easily fire up containers running different software stacks at different versions has enabled developers to deliver a quality application in less time. Containers also allow developers to scale applications artificially to test stability and robustness before releasing to customers.
- No environmental restrictions make scaling easier as parameters change dynamically. The location-agnostic, geographically distributed nature of containers gives developers freedom. Starting an application in one location, bringing it down, and restarting in another location is easier. Containers have become the framework of choice.
- When you’re orchestrating and deploying containers, your deployable units are smaller, which means you have more cats to herd and more things to manage. It’s not just about getting your monolith deployed; it’s also about visualizing all the dependencies and connections between your microservices and keeping those in check. Containers change the application deployment game by increasing the cardinality of everything you’re deploying: where you used to deploy one application, you’re now deploying ten, so your ability to administer everything has to keep up. Scaling considerations also change. With containers, scaling is no longer a concern of your deployment; that’s something your runtime takes care of. So, I’m no longer deploying to ten JBoss servers because I have a large application. I’m now deploying to one endpoint, and Amazon, OpenShift, or whoever else takes care of scaling it out automatically. But it still has to be managed. Back in the day, for example, if you wanted an eleventh WebSphere machine, that was a big process, but what needed to happen was pretty straightforward, left to right. With containers, you manage on the fly, which creates more runtime complexity in place of deployment complexity.
- The key component containers have helped resolve is getting the environment isolated. This helps streamline normal perils that go along with environmental differences. Organizations with good processes are building containers off the same base image.
- With servers, containers, and service-oriented architectures, many domain services can be separated out easily, each serving a specific purpose, and you're able to see the changes they actually make in the environment. Between container-based technology and microservices, though, you can break things down into so many pieces that integration becomes complex. Just because you have the capability to do something doesn’t mean you should do it.
- Containers have made the whole process easier to manage. There is no need to worry about what’s installed on a particular machine, and the process is self-documenting: as long as you have Docker on your machine, it’s going to work. Containers also make testing easier because it’s easy to spin an environment up and down.
- Containerization, done well, forces people to be a lot more diligent about defining the deployment environment. It’s driving the DevOps principle that the deployment is part of the deliverable. It also aligns with the Agile principle that systems are documented through code rather than separate MS Word documents: the deployment environment is defined by your container manifests, not some document that quickly becomes outdated.
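The "manifests as documentation" principle can be sketched with a minimal Dockerfile. Everything here (base image, file names, port, entry point) is a hypothetical example, not taken from the source:

```dockerfile
# A container manifest doubles as executable documentation of the
# deployment environment: runtime version, dependencies, and entry
# point are all stated in code. Details below are illustrative.
FROM node:18-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Anyone reading this file knows exactly what runtime, dependencies, and entry point the environment assumes; there is no separate document to fall out of date.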
- Containers provide an abstraction layer that allows application developers to write an application once and deploy it across various environments. Application developers no longer need to program custom methods to ensure applications will restart on failure; container orchestration now fully manages this capability.
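To see what orchestration takes off developers' plates, here is a rough sketch (in Python, purely for illustration; nothing here is from the source) of the kind of hand-rolled restart-on-failure loop applications used to carry themselves. With an orchestrator, the equivalent is a one-line `restartPolicy` in the pod spec.

```python
# A hand-rolled supervisor: run a workload and restart it on failure.
# Container orchestrators make this custom code unnecessary.
import time

def supervise(run, max_restarts=5, backoff=0.0):
    """Call run() and restart it on failure, up to max_restarts times.

    Returns the number of restarts that were needed before success.
    """
    restarts = 0
    while True:
        try:
            run()
            return restarts
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise  # give up: too many consecutive failures
            time.sleep(backoff)  # crude backoff before restarting

# Simulated flaky workload: fails twice, then succeeds.
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("transient failure")

print(supervise(flaky))  # prints 2: restarted twice before success
```

In Kubernetes, the same guarantee is declared rather than coded: the kubelet restarts failed containers according to the pod's restart policy, and a Deployment keeps the desired replica count alive.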
- Containers have provided a much better abstraction point for delivering applications into production. They allow you to build a deployable artifact on your laptop and have it ostensibly be the same one running in production. This enables developers to focus more on business logic/value than on the plumbing. That’s not to say they don’t need to think about it at all, but it’s now much better abstracted.
- Containers have enabled microservices-oriented computing in a manner that would have not been possible previously. Containers have also given rise to new styles of high-performance computing by bringing repeatability and portability to a traditionally bare-metal style of computing, which allows for much easier cloud migration.
- Enterprises are developing container-based applications confident in the knowledge that the container ecosystem is robust. K8s enjoys ubiquitous platform and cloud-provider support, and success stories are common. At the same time, confidence in deploying containers in production environments has also gotten the attention of attackers, who see a wealth of targets, which justifies developers’ increasing focus on testing container security.
- With the introduction of containers, application developers do need to consider how they plan to distribute their applications (i.e., publish to common orchestration product service catalogs), how to implement security, whether ephemeral or persistent storage is right for their specific application, and whether the application will be deployed across different geographies.
- Prior to containers, without a portable unit, you had to build the packaging and deployment harness for every application with a configuration-management tool. The result was too much friction and too many attack vectors. By standardizing tooling around the deployment and management of containers, you get one-time setup of development and deployment with agility, eliminating the configuration-management tax for every single service.
- The advent of virtualization and Infrastructure-as-a-Service (IaaS) created the promise of infrastructure being a flexible resource under the control of adaptive software, but that promise was generally prohibitively difficult to realize because the interfaces, APIs, and abstractions of those environments varied from vendor to vendor. Containers have helped deliver on the promise of “infrastructure-as-code,” forcing applications to be designed for environments where resources can quickly change, and moving applications toward a future where they cooperatively integrate with orchestration frameworks to intelligently and adaptively expand and contract as needed.
- 1) CI/CD gives developers the ability to release their software continuously and frequently. It's beneficial for applications to continually push code, since the pipeline is built for that, and the test-based approach to releasing software enables developers to react faster. 2) Microservices with orchestrated containers and K8s make it possible to split and decompose a monolithic application into independent services that communicate through APIs. This changes how you structure development and operations teams: they split into small independent groups that own specific portions of the application.
- 1) Application development has become more agile. Containers allow developers to be flexible in process delineation, best exemplified by the microservice pattern. Whereas previously an application may have been a monolith, easier to deploy since there were fewer parts, a microservice architecture creates many more artifacts. Orchestration engines like K8s facilitate the deployment and management of these containers. Before containers can be deployed, they need to be created and tested, which leads to strong adoption of CI/CD mechanisms. 2) Container-based application development drove cloud-native paradigms and application design, which usually consist of a microservices-based app layer that can be independently and horizontally scaled. The reliance on process containers usually leads to a strong CI/CD dependency for anything but the most trivial use cases, and thus an increasing reliance on external systems for testing and code check-in. We address this challenge by providing the necessary tools locally on the engineering workstation to support the developer flow of creating Docker-style containers out of code merges and submitting them to CI systems, either locally or remotely.
- Containers enable a new way of continuous integration and continuous deployment. Immutable container images, which are configured from the outside, are moving through the DevOps pipelines into production.
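The "immutable image, configured from the outside" pattern can be sketched in Kubernetes terms. All names, image tags, and values below are illustrative assumptions, not taken from the source:

```yaml
# The same immutable image is promoted through environments; only the
# injected configuration changes between them.
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-app-config
data:
  DATABASE_URL: "postgres://db.prod.example.com/app"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: registry.example.com/example-app:1.0.0  # immutable tag
          envFrom:
            - configMapRef:
                name: example-app-config
```

Promoting the app from staging to production means deploying the same image tag with a different ConfigMap; the image itself never changes on its way through the pipeline.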
Here’s who we spoke to:
- Tim Curless, Solutions Principal, AHEAD
- Gadi Naor, CTO and Co-founder, Alcide
- Carmine Rimi, Product Manager, Canonical
- Sanjay Challa, Director of Product Management, Datical
- OJ Ngo, CTO, DH2i
- Shiv Ramji, V.P. Product, DigitalOcean
- Antony Edwards, COO, Eggplant
- Anders Wallgren, CTO, Electric Cloud
- Armon Dadgar, Founder and CTO, HashiCorp
- Gaurav Yadav, Founding Engineer Product Manager, Hedvig
- Ben Bromhead, Chief Technology Officer, Instaclustr
- Jim Scott, Director, Enterprise Architecture, MapR
- Vesna Soraic, Senior Product Marketing Manager, ITOM, Micro Focus
- Fei Huang, CEO, NeuVector
- Ryan Duguid, Chief Evangelist, Nintex
- Ariff Kassam, VP of Products and Joe Leslie, Senior Product Manager, NuoDB
- Bich Le, Chief Architect, Platform9
- Anand Shah, Software Development Manager, Provenir
- Sheng Liang, Co-founder and CEO, and Shannon Williams, Co-founder, Rancher Labs
- Scott McCarty, Principal Product Manager - Containers, Red Hat
- Dave Blakey, CEO, Snapt
- Keith Kuchler, V.P. Engineering, SolarWinds
- Edmond Cullen, Practice Principal Architect, SPR
- Ali Golshan, CTO, StackRox
- Karthik Ramasamy, Co-Founder, Streamlio
- Loris Degioanni, CTO, Sysdig
- Todd Morneau, Director of Product Management, Threat Stack
- Rob Lalonde, VP and GM of Cloud, Univa
- Vincent Lussenburg, Director of DevOps Strategy; Andreas Prins, Vice President of Product Development; and Vincent Partington, Vice President Cloud Native Technology, XebiaLabs
Opinions expressed by DZone contributors are their own.