Thanks to Luca Ravassolo, Product Manager at InterSystems, for sharing his thoughts on the current state of the orchestration and deployment of containers.
Q: How is your company involved in the orchestration and deployment of containers?
A: InterSystems is directly involved in several aspects of a container’s lifecycle. We provide our new data platform product as a first-class citizen of the container world, along with a tool that makes deployments very easy. The InterSystems tool supports full infrastructure provisioning for the major public and private clouds, as well as container deployment and management.
InterSystems is directly involved in the handling of its containers because they hold stateful information and are therefore of fundamental importance in any application solution our customers build. Factors such as high-availability configuration, data replication, horizontal scalability, and container image upgrades are directly managed.
Q: What do you see as the most important elements of orchestrating and deploying containers?
A: Cloud-native architectures are becoming the new normal topology. What we find in this new world is the componentization of application services or the rise of microservices architecture. This leads to a proliferation of containers. There are four important elements that need to be factored into a full orchestration solution:
The first element is resource awareness. An orchestrator should be aware not only of the resources in use at all times, but also of all those that may become available should the need arise. The Mesos project and its Mesosphere implementation come to mind, with its nickname of data-center operating system (DC/OS). In our opinion, there needs to be a more active understanding of the available resources and new, dynamic ways to manage their creation.
The second fundamental element is an application-oriented focus. Containers have effectively changed the data-center focus from server-oriented to application-oriented. However, in a world of microservices, we find ourselves with many containers and, in general, no clear way to understand what comprises an application. For instance, is simple tagging or the Kubernetes Pod abstraction truly sufficient? There are other aspects to this, but in general, it is important to understand that each service is not isolated; it is part of an ecosystem of services that defines the application. An ideal orchestrator would need to be sensitive enough to the many service and application dependencies created in a cloud cluster solution that inopportune actions never take place during patching, upgrading, and similar operations. Within the industry as a whole, there are still steps missing before we have fully controlled, cluster-wide application configuration management.
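As an illustration of how thin today's application grouping is, Kubernetes relies largely on labels; a minimal sketch (service and application names are hypothetical) using the recommended `app.kubernetes.io/part-of` label to mark a service as belonging to a larger application:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: orders-service                     # hypothetical service name
  labels:
    app.kubernetes.io/name: orders-service
    app.kubernetes.io/part-of: webshop     # groups this service into the "webshop" application
spec:
  containers:
    - name: orders
      image: example/orders:1.0            # hypothetical image
```

Labels like these make it possible to select every Pod of an application, but they remain a metadata convention rather than an enforced application model.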
The third element relates to service dependencies. When I start a service, I usually need a series of corollary services (storage, network, monitoring, CI/CD provisioning processes, injected environment variables, etc.) available to make my service useful. An application component (or service) interacts with and needs other particular services to function. In a microservices architecture, that translates to a minimum of the following needs:
Network services: protocols, exposed IPs, sockets, etc.
Volumes: an I/O subsystem for reads and writes
Environment variables for configuring the service
These dependencies are difficult to capture and maintain as we move across environments, mainly because the local developer is most likely spinning up a cluster environment via docker-compose, while the quality assurance department has a more formal process and a more recognized orchestrator or custom solution, and the organization uses more established enterprise orchestrators like Mesosphere or Kubernetes in the production phase.
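For instance, the three dependency types listed above might be captured for local development in a docker-compose file along the following lines (service names, images, and mount paths are hypothetical):

```yaml
version: "3.8"
services:
  app:
    image: example/app:1.0       # hypothetical image
    ports:
      - "8080:8080"              # network service: exposed port
    environment:
      - DB_HOST=db               # environment variable configuring the service
    volumes:
      - app-data:/var/lib/app    # volume: I/O subsystem for reads and writes
    networks:
      - backend
  db:
    image: postgres:15
    volumes:
      - db-data:/var/lib/postgresql/data
    networks:
      - backend
volumes:
  app-data:
  db-data:
networks:
  backend:
```

The same dependencies must then be re-expressed in whatever format the QA and production orchestrators use, which is exactly where drift between environments tends to creep in.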
The fourth element is a much tighter integration with volume plugins and hyper-converged storage solutions, which is linked to resource awareness, direct API support, monitoring, and capacity planning. These are all aspects that appear almost extraneous or unrelated to the orchestrator itself; however, as technologies improve, they cannot be ignored. At the end of the day, we are trying to operate in a cloud-like modus operandi, which means we will inevitably create self-healing, autonomic solutions.
Q: Which programming languages, frameworks, and tools do you, or your company use, to orchestrate and deploy containers?
A: InterSystems developed its own tool in Java, leveraging HashiCorp Terraform. It enables integration and aids generic orchestrators in the important job of handling a stateful, multi-model data platform.
Q: How has the orchestration and deployment of containers changed application development?
A: InterSystems has several customers that use containers, and the approach each takes varies. At one extreme, customers are rethinking their application development strategy and rewriting the back end while exploring well-known orchestrators. Other customers take a more traditional approach and use containers for the intrinsic build and dependency-satisfaction value they offer. These customers will have fewer containers, resulting in a much easier environment to manage. It is doubtful that they will invest in high-end, complex orchestrators; simpler solutions like Rancher might be of value, though.
Q: What kind of security techniques and tools do you find most effective for orchestrating and deploying containers?
A: Secret-management tools like HashiCorp Vault are very useful and should be featured natively in orchestrators.
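Kubernetes, for example, does ship a basic native secret primitive; a minimal sketch (names and values are placeholders) of storing a credential and injecting it into a container as an environment variable:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials          # hypothetical secret name
type: Opaque
stringData:
  db-password: change-me         # placeholder value
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0     # hypothetical image
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:        # pulls the value from the Secret object above
              name: app-credentials
              key: db-password
```

Dedicated tools like Vault layer rotation, auditing, and dynamic credentials on top of this kind of baseline.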
Q: What are some real-world problems being solved by the orchestration and deployment of containers? (Use cases you like to highlight.)
A: The automation of cluster management, adhering to promise theory, with a tendency, as technologies mature and integrate further, to evolve into autonomic computing.
Q: What are the most common issues you see affecting the orchestration and deployment of containers?
A: One of the most common issues affecting the orchestration and deployment of containers is a lack of understanding of orchestrator staging. Organizations should ask:
Where do containers come from? They should come from a provisioning pipeline.
How can I test and verify this new application solution comprising all my container images AND the orchestrator I use in production?
How are errors fed back?
How are orchestrators made aware of errors and how are they handling them? There needs to be a tighter integration with monitoring solutions.
Orchestrators need to be tightly coupled with monitoring solutions to be effective in the long run for the enterprise.
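One concrete form this error-awareness already takes is health probes, through which the orchestrator learns about failures and reacts; a minimal Kubernetes sketch (the image and endpoints are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: example/app:1.0     # hypothetical image
      livenessProbe:             # a failing probe makes the orchestrator restart the container
        httpGet:
          path: /healthz         # assumed health endpoint
          port: 8080
        initialDelaySeconds: 10
        periodSeconds: 15
      readinessProbe:            # a failing probe removes the Pod from service endpoints
        httpGet:
          path: /ready           # assumed readiness endpoint
          port: 8080
        periodSeconds: 5
```

Probes cover only the orchestrator's own view of a container; the deeper integration with external monitoring solutions discussed above still has to be built around them.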
As InterSystems is a data platform vendor, we believe a tighter integration between orchestrators and data-related concerns (i.e., volume plugins, hyper-converged storage, and snapshot and backup technologies) would be beneficial to organizations.
Q: What do developers need to keep in mind when working on orchestrating and deploying containers?
A: From InterSystems’ perspective, the main issue is maintaining stateful data. Ephemeral container services like web servers are easy to bring down and spin up, and their external dependencies can be zero.
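For stateful services, orchestrators provide primitives that tie a container to persistent storage; a minimal sketch of a Kubernetes StatefulSet with per-replica volumes (names, image, and storage size are hypothetical):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: data-platform                      # hypothetical name
spec:
  serviceName: data-platform
  replicas: 3
  selector:
    matchLabels:
      app: data-platform
  template:
    metadata:
      labels:
        app: data-platform
    spec:
      containers:
        - name: db
          image: example/data-platform:1.0 # hypothetical image
          volumeMounts:
            - name: data
              mountPath: /var/lib/db
  volumeClaimTemplates:                    # each replica gets its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```

Each replica keeps a stable identity and its own persistent volume across restarts and rescheduling, which is precisely the property a stateful data platform needs.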