Thanks to Ranga Rajagopalan, CTO and Co-founder, and Chandra Sekar, V.P. Marketing at Avi Networks, for sharing their thoughts on the state of orchestration and deployment of containers for DZone’s upcoming Containers Research Guide, to be published in early August.
Q: How is your company involved in the orchestration and deployment of containers?
A: Avi delivers software-defined application services including service discovery, service dependency graphs, elastic load balancing, autoscaling, application performance insights, and micro-segmentation for container-based applications.
Q: What do you see as the most important elements of orchestrating and deploying containers?
A: Delivering container networking services with traditional data center networking solutions such as hardware load balancers is challenging: it is an architectural mismatch. Container-based microservices architectures simplify development by breaking applications down into their functional components, but the explosion in the number of endpoints requires new approaches to essential functions such as service discovery, East-West traffic management, security, and performance monitoring.
Avi Networks delivers a distributed service mesh that is centrally managed and can efficiently and affordably handle networking services for containerized apps. In addition to delivering service discovery (DNS and IPAM) for container applications, the platform automates services through a REST API-driven controller; its visibility into traffic enables service dependency graphs, application performance management (APM), and security, including micro-segmentation of service interactions. And since the platform is software-defined, with proxies deployed on every node in the container cluster, it provides both global and local load balancing.
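To make the REST-driven automation pattern concrete, here is a minimal sketch of registering a load-balanced service with a controller over its API. The endpoint URL, payload fields, and service names are hypothetical illustrations, not Avi's actual API schema:

```python
import json
import urllib.request

# Hypothetical controller endpoint -- illustrative only, not the
# actual Avi Networks API path or schema.
CONTROLLER_URL = "https://controller.example.com/api/virtualservice"


def build_service_payload(name, port, pool_members):
    """Build a JSON payload describing a load-balanced service."""
    return {
        "name": name,
        "services": [{"port": port}],
        "pool": {"servers": [{"ip": m} for m in pool_members]},
    }


def register_service(payload, token):
    """POST the payload to the controller (requires a live controller)."""
    req = urllib.request.Request(
        CONTROLLER_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    return urllib.request.urlopen(req)


# Describe a service backed by two container endpoints.
payload = build_service_payload("cart-svc", 8080, ["10.0.0.5", "10.0.0.6"])
print(json.dumps(payload, indent=2))
```

Because the payload is plain data, the same call can be driven from a CI/CD pipeline whenever containers are added or removed, which is what makes API-first controllers a fit for transient container endpoints.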
Q: Which programming languages, frameworks, and tools do you or your company use to orchestrate and deploy containers?
A: The most common container orchestration frameworks we see are Kubernetes and the Red Hat OpenShift platform. We also see customers deploying new applications such as Kafka and Cassandra at the leading edge of container adoption.
Q: How has the orchestration and deployment of containers changed application development?
A: Traditional applications were built as monolithic entities, with multiple services combined and deployed on large-scale server hardware. Network services were often delivered by a single purpose-built appliance and managed by the IT department. Managing and configuring load balancing, security, visibility, and performance was challenging and delayed application rollouts.
With container-based applications, apps that were once static and immovable are broken down into lightweight, manageable parts: containers. Containers can fundamentally improve the speed of development and the agility of app deployments. Enterprises that rely on revenue-generating applications find this powerful.
Q: What kind of security techniques and tools do you find most effective for orchestrating and deploying containers?
A: Many of the security needs that apply to traditional applications have to be addressed with container applications as well. However, container-based applications also introduce the need to protect East-West traffic and interactions behind the perimeter, as well as the complexity of applying security policies to transient workloads and endpoints. Container-based applications need firewalling capabilities to protect incoming access and transactions, in addition to micro-segmentation capabilities to secure transactions between microservices in the cluster. Micro-segmentation rules should apply at the service level, not to individual IP addresses, since IP addresses can change at any time, rendering policies stale.
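The service-level policy idea can be sketched in a few lines: express allow rules between named services, and resolve ephemeral IPs to service identities at enforcement time. The service names and registry scheme below are illustrative assumptions, not a specific product's implementation:

```python
# Sketch: micro-segmentation keyed by service identity rather than IP.
# Allowed caller -> callee pairs, expressed by service name.
ALLOW = {
    ("frontend", "cart-svc"),
    ("cart-svc", "orders-db"),
}

# The endpoint registry maps ephemeral container IPs to service
# identities; entries change as containers are rescheduled, but the
# ALLOW policy itself never goes stale.
endpoints = {"10.0.1.7": "frontend", "10.0.2.3": "cart-svc"}


def is_allowed(src_ip, dst_ip):
    """Check a flow against service-level policy, not raw IPs."""
    src = endpoints.get(src_ip)
    dst = endpoints.get(dst_ip)
    return src is not None and dst is not None and (src, dst) in ALLOW


print(is_allowed("10.0.1.7", "10.0.2.3"))  # frontend -> cart-svc: True

# Simulate cart-svc being rescheduled to a new IP: once the registry is
# updated, the same policy still applies with no rule changes.
endpoints["10.0.9.9"] = endpoints.pop("10.0.2.3")
print(is_allowed("10.0.1.7", "10.0.9.9"))  # still True
```

Had the rules been written against `10.0.2.3` directly, the reschedule would have silently broken the policy, which is exactly the staleness problem described above.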
Q: What are some real-world problems being solved by the orchestration and deployment of containers?
A: We have several customers in production. Let's take the example of two large global banks. Both are using the Red Hat OpenShift platform. One is running its initial set of applications on Kafka and was challenged with security and traffic management. We helped them secure everything with credentials, authentication, keys, roles, audits, and access controls, and we secured their network traffic. They began running their first set of apps in December and are now running more than 100. The second bank is more traditional, using VMware for their applications. We helped them address issues with scale and data persistence. They began running their first app in March. In both cases, they run multiple clusters with a full DevOps CI/CD pipeline.
Q: What are the most common issues you see affecting the orchestration and deployment of containers?
A: Security and persistent data storage. Containers are great in a test environment, but they can be tougher to roll out into production and scale with enterprise-grade services. There’s a gap between the lab and production where teams need assistance with modern applications.
Q: Do you have any concerns regarding the current state of orchestrating and deploying containers?
A: The actual capabilities lag the hype by a couple of years.
Q: What’s the future for containers from your point of view - where do the greatest opportunities lie?
A: 1) Containers are at the inflection point where virtual machines were 10 years ago: convenient development and packaging for developers, now spreading into IT and operations. 2) They fit the immutable-image model of deployment, reliable and consistent every time. 3) They are efficient for resource allocation. 4) They are elastic, able to use what you need when you need it. This forces developers to think of containers as ephemeral and interchangeable.
Q: What do developers need to keep in mind when working on orchestrating and deploying containers?
A: Developers need to work with operations early in the development process to define the architecture and testing approach, and to figure out end-to-end how the application will work.