ECS vs. Kubernetes: Similar, but Different
AWS ECS and Kubernetes share similarities, but they both have their own strengths and weaknesses, particularly in a production environment.
EC2 Container Service (ECS) and Kubernetes (K8s) solve the same problem: managing containers across a cluster of hosts. The battle between ECS and Kubernetes reminds me of the editor war between vi and Emacs: heated discussions focusing on technical quibbles and personal beliefs. The following questions will help you choose wisely. Bear in mind that the questions and answers reflect my own opinions on the differences between ECS and K8s, based on my experience in recent projects.
Does It Fit?
A container is an isolated element. But being able to launch containers across a cluster of hosts is only a small part of the challenge. Your container lives within a universe of infrastructure and services: a storage system, a database, and a domain name service, to name a few.
Where are you planning to run your containers?
- Amazon Web Services (AWS)
- Google Cloud Platform (GCP)
- Another IaaS provider
Being able to integrate your container management solution into your infrastructure is key.
ECS offers the most seamless integration between your containers and other AWS services. A few examples that work out of the box:
- Assigning IAM roles to each container allows fine-grained access control to other services.
- Registering containers with external load balancers (Application Load Balancer).
- Scaling EC2 instances based on cluster usage (Auto Scaling).
- Collecting logs (CloudWatch Logs).
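To make the list above concrete, here is a minimal sketch of an ECS task definition that wires up two of these integrations: an IAM task role and the awslogs driver for CloudWatch Logs. All ARNs, names, and the log group are hypothetical placeholders.

```shell
# Write a minimal ECS task definition; taskRoleArn grants the container an
# IAM role, logConfiguration ships stdout/stderr to CloudWatch Logs.
cat > taskdef.json <<'EOF'
{
  "family": "demo-service",
  "taskRoleArn": "arn:aws:iam::123456789012:role/demo-task-role",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "nginx:1.15",
      "memory": 256,
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/demo-service",
          "awslogs-region": "eu-west-1",
          "awslogs-stream-prefix": "app"
        }
      }
    }
  ]
}
EOF
# Registering it requires AWS credentials, so the call is shown as a comment:
# aws ecs register-task-definition --cli-input-json file://taskdef.json
```

Once registered, every task started from this definition gets the role's permissions without any credentials baked into the image.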
Achieving a similar level of integration between K8s and AWS is a lot of work. For example, building a production-ready etcd key-value store (required by K8s) with high availability, encryption, and rolling updates took several weeks. Integrating K8s with a load balancer and a domain name system was another significant obstacle.
K8s, on the other hand, offers trouble-free integration with GCP. Google Container Engine provides, among other things, the following:
- Distributing clusters among multiple zones for high availability.
- Scaling the cluster based on usage.
- Providing persistent disks for containers.
K8s provides the most value when used with Google Container Engine (GKE) because of its tight integration with GCP.
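As a sketch of what GKE handles for you, the first two bullet points above map onto a single cluster-creation command. The cluster name and region are hypothetical, and the command is printed rather than executed here because it needs an authenticated gcloud project.

```shell
# A regional GKE cluster spreads its nodes across the zones of the region
# (high availability) and autoscaling grows or shrinks the node pool with usage.
create_cluster="gcloud container clusters create demo-cluster \
  --region europe-west1 \
  --num-nodes 1 \
  --enable-autoscaling --min-nodes 1 --max-nodes 5"
echo "$create_cluster"
```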
If you are using an IaaS provider other than Amazon or Google, or running your workload on-premises, K8s is your only option, as ECS runs exclusively on AWS. In that case, building an infrastructure comparable to ECS on AWS or K8s on GCP will take a lot of work.
Does It Match Your Architecture?
ECS and K8s follow different strategies for service discovery.
ECS uses load balancers for service discovery. External as well as internal services are accessible through load balancers. The Application Load Balancer (ALB) offers path- and host-based routing as well as internal and external connections.
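As a sketch of the path-based routing just mentioned, here is how a rule on an ALB listener could route `/api/*` to one service. All ARNs are hypothetical placeholders, and the `aws elbv2 create-rule` call is shown as a comment because it needs credentials and real resources.

```shell
# A rule condition matching every request whose path starts with /api/.
cat > rule-conditions.json <<'EOF'
[
  {
    "Field": "path-pattern",
    "Values": ["/api/*"]
  }
]
EOF
# Attach the rule to a listener, forwarding matches to a target group:
# aws elbv2 create-rule \
#   --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/demo/abc/def \
#   --priority 10 \
#   --conditions file://rule-conditions.json \
#   --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/api/xyz
```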
K8s uses a different strategy. Only requests from outside the cluster pass through a load balancer. A virtual IP provides access to internal services without the need for a load balancer.
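In K8s terms, that virtual IP is the ClusterIP of a Service. A minimal sketch, with hypothetical service and app names; the `kubectl apply` call needs a running cluster and is shown as a comment.

```shell
# A ClusterIP Service gives the pods behind the selector a stable internal
# virtual IP and DNS name -- no load balancer involved.
cat > service.yaml <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  type: ClusterIP
  selector:
    app: backend
  ports:
    - port: 80
      targetPort: 8080
EOF
# kubectl apply -f service.yaml
# Other pods in the cluster can then reach it at http://backend:80.
```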
If your microservice architecture relies heavily on service-to-service communication, K8s offers less communication overhead. Otherwise, ECS provides a plain and simple approach for a microservice architecture as well, especially if most of your services need to be accessible from the Internet.
Who Operates It?
I would advise against operating a container cluster yourself whenever possible. Does a do-it-yourself container infrastructure add any significant value to your business?
The cluster management provided by ECS is a fully managed service offering high availability, scalability, and security. There are no additional fees for using ECS, and it’s covered by your AWS support plan as well. You are still responsible for the underlying infrastructure consisting of EC2 and VPC though.
Google Container Engine (GKE) offers a managed service as well: a managed K8s cluster, including the underlying infrastructure. If your cluster consists of more than five nodes, Google charges around USD 100 per month for managing it.
Does It Pay Off?
K8s is licensed under the Apache License 2.0, whereas ECS is a proprietary service offered by AWS. That said, AWS has published Blox, a collection of open-source projects for container management and orchestration on ECS.
The K8s community is vibrant and generates many innovative solutions. The open-source ecosystem offers flexibility. But do not expect production-ready solutions beyond the K8s core.
Vendor lock-in is a popular argument in ECS vs. K8s discussions. I would argue that both ECS and K8s lock you into their solutions. And even though K8s is open source, Google is a dominant contributor with an interest in evolving and monetising its cloud platform.
Are you using AWS as your infrastructure provider? Use ECS to manage and schedule your containers and benefit from a highly integrated and fully managed service.
Are you using GCP as your infrastructure provider? Use Google Container Engine (GKE), which offers a fully managed K8s cluster well integrated with the GCP infrastructure.
Are you running your workloads on-premises or with another IaaS provider? Operating a K8s cluster yourself is probably your only option. Expect significant effort when building a highly available and scalable K8s cluster integrated with your existing infrastructure.
What are your thoughts about ECS vs. K8s? Let me know!
Published at DZone with permission of Andreas Wittig, DZone MVB. See the original article here.