Container Use Cases
Containers are solving real-world problems across multiple verticals, with at least two dozen distinct applications.
To gather insights on the current and future state of containers, we talked to executives from 26 companies. We asked, "What are some real-world problems being solved by the use of containers?" Here's what they told us.
- 1) Enabling organizations to transform to microservices easily by using containers. Philips spins up containers to read MRIs for anywhere from 30 seconds to five minutes, based on what’s needed versus “always on.” 2) Security use cases enable companies to quickly understand the previously unknown, such as distinguishing an intrusion from a virus, and to close down something that hasn’t been seen before as quickly as possible. 3) In ad media, comparing DNS traffic in log files against history to find log lines in a strange order, in order to shut down a DNS attack. Companies don’t know the value of data initially but are able to understand the context later. 4) Organizations that can understand the context and take the right steps the fastest will win. For example, a better understanding of the drilling process optimizes production and saves millions per minute when a drill can be fixed before the bit breaks.
- 1) Enterprises need to deliver quickly. We start with our design process to get the client to focus on the problem to solve and the proper technology to solve it. American Airlines went through this process and was able to go from design to production in a month, accessing on-premises services via a containerized cloud-native app. 2) Modernize and reduce cost and footprint by moving workloads into containers and the cloud. Workloads can be containerized without rearchitecting, reducing footprint and increasing portability.
- European financial services firms are moving to open banking, whereby they expose consumer-level banking information to other consumers to create an ecosystem of applications. Companies can use APIs to check credit and customer history. The gateways are being built in containers so legacy systems are not exposed.
- Application mobility: containers can be picked up and landed in any location with confidence they will work. 1) A retailer with multiple outlets has slightly different infrastructure in each; packaging the app in a container makes life simpler and easier and ensures it will work everywhere. 2) Capacity utilization: spin up and spin down to take advantage of excess capacity. Financial institutions will have several active data centers running at less than 50% capacity in case a data center fails. When there’s no failure, there’s a lot of excess capacity that can be used for batch jobs running in containers. 3) Custom-built, scalable, resilient cloud-native applications on high-availability infrastructure without building your own container infrastructure, such as a distributed application with 10, 20, or 100 microservices. K8s is adept at handling application-layer availability: it knows the number of pods on a node and when other nodes need to be spun up, but it is uninterested in the infrastructure layer and will not bring a failed node back up. PKS covers that gap with infrastructure lifecycle management, automating the maintaining, patching, upgrading, and deploying for which high availability is so important.
- 1) API Gateways, which need scale-out stateless environments, with the ability to spawn multiple instances for testing/development. 2) DB servers where the DB software is licensed based on physical hardware; you can get greater efficiency, meaning more DB instances from the licensed hardware. 3) Legacy Windows VMs where the underlying OS is no longer supportable, but the app must remain long term.
- Verticals are not as interesting as new ways of deploying; the interesting cases are specific application delivery use cases. Ephemeral updates reduce the need for complex upgrades, abstraction brings more consistency and visibility into demand, the number of systems under test shrinks, and the platform support matrix becomes simpler.
- 1) “Lift and shift” or “move and improve” an application into a container, then adopt a DevOps methodology and tooling to build new capabilities in the team. 2) Customer-facing web apps and business applications gain autoscaling and agility; the front end moves to containers more easily than the backend, helping scalability and flexibility. 3) Early-stage AI/ML workloads run in the cloud, enabling choices about when and where to connect to new edge IoT computing. 4) Serverless on top of containers and K8s.
- 1) We provide software that runs on Linux systems. Fragmentation is really a BIG problem. We had basically given up trying to provide a “standard” distribution of our software; when someone wanted the Linux version we had to sell them some services days to basically create a package for them that would work on their flavor of Linux. Containers have gone a HUGE way to addressing this. 2) Windows has less of a fragmentation problem than Linux, but it’s still significant and a source of issues, so again this is helping there. 3) It’s removing the need to make several of our systems “multi-tenant”. Rather than making them multi-tenant we just use the lightweight container frameworks to simply run up multiple instances.
- We provide containers at the edge of IoT in a secure, manageable way, delivering SD-WAN-like infrastructure with hypervisors. We completely support containers to make it easy for customers to work with us, and we can express workloads in any way.
- 1) Containers at the edge run Docker everywhere to quickly move immutable containers close to the machines they are predicting failures for. This enables real-time analytics with Docker management tools and a small footprint. 2) Deploy machine learning in a production environment. Libraries in Python and R. We are able to get the exact environment needed to run data science and machine learning at scale and integrate into cloud operations.
- First of all, there is infrastructure cost. Customers are using containers to gain significant cost savings through improved resource utilization and workload portability. Scalability and development agility are all real benefits which enable enterprises to meet customer requirements better. We also see customers who are reducing vendor lock-in while increasing availability in different regions by scaling containers into different clouds.
- Containers accelerate the training of machine learning models by making it easy to scale training jobs. https://blog.openai.com/infrastructure-for-deep-learning/.
- In the past year, we have seen more clients wanting to run stateful workloads on K8s. We’ve also seen the rise of FaaS. 1) Integration and glue of capabilities across the K8s ecosystem: Fission customers are running microservices on K8s, using functions to let the customer provide custom business logic for custom IT and CRM systems. 2) FaaS as an application platform: 26 pages of application logic become easier to manage with a simpler application platform using FaaS on top of K8s.
- Automating continuous delivery of cloud-native applications.
- We helped a client with Splunk deployed across 100 nodes incorporate it into a platform that manages and executes analytics across multiple Splunk instances for unified analytics while handling security, authorization, and access. As IT matures, use cases include tracking unauthorized access and use across machine and human data.
- Hybrid traditional monolithic apps as they migrate or add new functions: we make it easy to build the pipeline around workflows, greenfield or brownfield, and treat them as one entity. Pure greenfield is likely building a microservices-based app. There are several delivery alternatives, such as K8s, Docker, and OpenShift. We provide access to deploy without learning all of the APIs; our plug-ins help hide the complexities. Build once and deploy anywhere while simplifying the learning curve.
- For us, the biggest problem being solved is that of optimizing resource usage. In a virtualized environment, hosts must set resources to the usage at the peak saturation since spinning up new hosts is not a quick operation. In a containerized environment, we have containers that can respond more dynamically and spin up and spin down as required. This means we use only the resources we actually need.
- We often encounter two popular use cases with Enterprise IT. 1) The first is the use of container platforms to reduce cloud costs, whereby instead of running low duty-cycle workloads in individual cloud compute instances (which results in low use of paid resources), enterprises instead use containers to achieve high workload density over the container host instance, thereby acting as a shared resource which allows IT to achieve higher use rates and lower compute costs. 2) The second use case is the use of a container orchestration platform such as Kubernetes to increase the management and operations efficiency when managing a large application inventory. Kubernetes offers the option to manage based on desired state configuration, which enables IT to focus on optimizing application specifications and desired behaviors, while letting the orchestration platform do the hard work of establishing and maintaining that desired state through the various lifecycle events of the underlying infrastructure or through application upgrades.
- The biggest use case is visibility into images in the registry: seeing vulnerabilities, pinned images, and repos that have not been updated.
- Containers allow for stateless services, which in turn is greatly helpful in auto-scaling environments based on demand. If all container services can be clustered, then environments can be expanded or contracted based on load in an automated fashion, with thresholds adjusted based on collected metrics. We can focus more on improving code and less on infrastructure.
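The threshold-driven expand/contract behavior several respondents describe can be sketched in a few lines. This is a minimal, hypothetical illustration, not any vendor's actual autoscaler; the function name, thresholds, and limits are all illustrative assumptions.

```python
def desired_instances(current: int, cpu_percent: float,
                      high: float = 80.0, low: float = 30.0,
                      minimum: int = 1, maximum: int = 10) -> int:
    """Return the instance count a stateless service should converge to,
    scaling out above the high-water mark and in below the low-water mark.
    Thresholds and limits are illustrative, not from any specific product."""
    if cpu_percent > high:
        current += 1   # scale out: load has crossed the high threshold
    elif cpu_percent < low:
        current -= 1   # scale in: capacity is sitting idle
    return max(minimum, min(maximum, current))


# Under heavy load, 3 instances become 4; when idle, 3 shrink to 2.
desired_instances(3, 92.0)
desired_instances(3, 10.0)
```

In practice the thresholds themselves would be tuned from collected metrics, as the response above notes, rather than hard-coded.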
We specialize in setting up a global security posture with compliance benchmarks and security best practices. We enable you to see your current security posture in real time and know what steps you can take to reduce your risk profile at the cloud, container, and OS level.
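The desired-state configuration model mentioned in the responses above, where IT declares a specification and the orchestrator does the work of establishing and maintaining it, comes down to a control loop. The sketch below is a toy illustration of that idea, not the Kubernetes implementation; the `ClusterState` type and `reconcile` function are hypothetical names introduced here.

```python
from dataclasses import dataclass, field


@dataclass
class ClusterState:
    """Toy stand-in for a declared spec plus observed state."""
    desired_replicas: int
    running: list = field(default_factory=list)  # IDs of running instances


def reconcile(state: ClusterState) -> ClusterState:
    """One pass of the control loop: start or stop instances until the
    observed count matches the declared count."""
    while len(state.running) < state.desired_replicas:
        state.running.append(f"pod-{len(state.running)}")  # "schedule" a pod
    while len(state.running) > state.desired_replicas:
        state.running.pop()  # "terminate" a surplus pod
    return state


state = ClusterState(desired_replicas=3)
reconcile(state)            # converges from 0 to 3 running instances
state.desired_replicas = 1  # the operator edits the spec...
reconcile(state)            # ...and the loop converges again
```

The point of the pattern is that lifecycle events (node failures, upgrades) are handled by re-running the same loop, so operators focus on the specification rather than on step-by-step remediation.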
Here’s who we spoke to:
- Matt Chotin, Sr. Director of Technical Evangelism, AppDynamics
- Jeff Jensen, CTO, Arundo Analytics
- Jaime Ryan, Senior Director, Project Management and Strategy, CA Technologies
- B.G. Goyal, V.P. of Engineering, Cavirin Systems
- Tasha Drew, Product Manager, Chef
- James Strachan, Senior Architect, CloudBees
- Jenks Gibbons, Enterprise Sales Engineer, CloudPassage
- Oj Ngo, CTO and Co-founder, DH2i
- Anders Wallgren, CTO, Electric Cloud
- Navin Ganeshan, Chief Product Officer, Gemini Data
- Carsten Jacobsen, Developer Evangelist, Hyperwallet
- Daniel Berg, Distinguished Engineer Cloud Foundation Services, IBM
- Jack Norris, S.V.P. Data and Applications, MapR
- Fei Huang, CEO, NeuVector
- Ariff Kassam, V.P. Product, NuoDB
- Bob Quillan, V.P. Container Group, Oracle
- Sirish Raghuram, CEO and Co-founder, Platform9
- Neil Cresswell, CEO/CTO, Portainer.io
- Sheng Liang, Co-founder and CEO and Shannon Williams, Co-founder and VP of Sales, Rancher Labs
- Bill Mulligan, Container Success Orchestrator, RiseML
- Martin Loewinger, Director of SaaS Operations and Jonathan Parrilla, DevOps Engineer, SmartBear
- Antony Edwards, CTO, TestPlant
- Ady Degany, CTO, Velostrata
- Paul Dul, V.P. Product Marketing Cloud Native Applications, VMware
- Mattius McLaughlin, Engineering Manager & Containers SME, xMatters
- Roman Shoposhnik, Co-founder, Product & Strategy, Zededa
Opinions expressed by DZone contributors are their own.