Deploying Kubernetes on CoreOS to AWS

Follow along to set up a three-node Kubernetes cluster on CoreOS for your next project using containers.

Linux containers are a relatively new abstraction framework with exciting implications for Continuous Integration and Continuous Delivery patterns. They allow appropriately designed applications to be tested, validated, and deployed in an immutable fashion at much greater speed than with traditional virtual machines. When it comes to production use, however, an orchestration framework is desirable to maintain a minimum number of container workers, load balance between them, schedule jobs, and the like.

An extremely popular way of doing this is to use Amazon EC2 Container Service (ECS) with the Amazon Linux distribution. However, if you find yourself making the “how do we run containers” decision, it pays to explore other technology stacks as well.

In this demo, we’ll launch a Kubernetes cluster on CoreOS in AWS. Kubernetes is a container orchestration platform that utilizes Docker to provision, run, and monitor containers on Linux. It is developed primarily by Google and is similar to the container orchestration systems Google runs internally. CoreOS is a lightweight Linux distribution optimized for container hosting. Related projects include Docker Inc.’s “Docker Datacenter,” Red Hat’s Atomic, and RancherOS.


  1. Download the Kubernetes package. Releases can be found here. This demo assumes you're using version 1.1.8.

  2. Download the coreos-kubernetes package. Releases are here. This demo assumes version 0.4.1.
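
    If you prefer to script the downloads, the GitHub release artifacts can be fetched directly. The URLs below follow the release-artifact naming used by the two projects and are an illustrative sketch, not taken from the article; adjust version and platform as needed:

    curl -LO https://github.com/kubernetes/kubernetes/releases/download/v1.1.8/kubernetes.tar.gz
    curl -LO https://github.com/coreos/coreos-kubernetes/releases/download/v0.4.1/kube-aws-darwin-amd64.tar.gz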

  3. Extract both. The coreos-kubernetes package provides the kube-aws binary used for provisioning a Kubernetes cluster in AWS (using CloudFormation), while the Kubernetes package is used for its kubectl binary:

    tar xzf kubernetes.tar.gz
    tar xzf kube-aws-PLATFORM-ARCH.tar.gz (kube-aws-darwin-amd64.tar.gz for Macs, for instance)

  4. Set up your AWS credentials and profiles.
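
    A minimal ~/.aws/credentials (and optional ~/.aws/config) pair looks like the sketch below; the access keys shown are placeholders, not real values:

    # ~/.aws/credentials
    [default]
    aws_access_key_id = AKIAEXAMPLEKEYID
    aws_secret_access_key = EXAMPLESECRETACCESSKEY

    # ~/.aws/config
    [default]
    region = us-west-1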

  5. Generate a KMS key to use for the cluster (change the region from us-west-1 if desired, but you will need to change it everywhere). Make a note of the ARN of the generated key; it will be used in the cluster.yaml later.

    aws kms --region=us-west-1 create-key --description="kube-aws assets"
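
    The create-key call prints JSON metadata for the new key; the Arn field is the value to note. The output below is an illustrative sketch with placeholder account and key IDs:

    {
        "KeyMetadata": {
            "AWSAccountId": "111122223333",
            "KeyId": "12345678-1234-1234-1234-123456789012",
            "Arn": "arn:aws:kms:us-west-1:111122223333:key/12345678-1234-1234-1234-123456789012",
            "Description": "kube-aws assets",
            "Enabled": true
        }
    }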

  6. Get a sample cluster.yaml. This is a configuration file later used for generating the AWS CloudFormation scripts and associated resources used to launch the cluster.

    mkdir my-cluster; cd my-cluster
    ~/darwin-amd64/kube-aws init --cluster-name=YOURCLUSTERNAME \
     --external-dns-name=FQDNFORCLUSTER --region=us-west-1 \
     --availability-zone=us-west-1c --key-name=VALIDEC2KEYNAME \
     --kms-key-arn=KMSKEYARN

  7. Modify the cluster.yaml with appropriate settings. “externalDNSName” expects an FQDN that will either be configured automatically (if you provide a Route53 zone ID for “hostedZoneId”) or that you will configure AFTER provisioning has completed. This becomes the kube controller endpoint used by the Kubernetes control tooling.

    Note that a new VPC is created for the Kubernetes cluster unless you configure it to use an existing VPC. You specify a region in the cluster.yaml, and if you don’t specify an Availability Zone, the “A” AZ will be used by default. An illustrative excerpt is shown below.
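
    The relevant portion of a cluster.yaml might look like the following sketch; exact field names can vary between kube-aws releases, and the FQDN, zone ID, and key ARN here are placeholders:

    clusterName: YOURCLUSTERNAME
    externalDNSName: kube.example.com
    # Optional: provide a Route53 zone ID to have the DNS record created for you
    # hostedZoneId: ZEXAMPLEZONEID
    keyName: VALIDEC2KEYNAME
    region: us-west-1
    availabilityZone: us-west-1c
    kmsKeyArn: "arn:aws:kms:us-west-1:111122223333:key/12345678-1234-1234-1234-123456789012"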

  8. Render the CFN templates, validate, then launch the cluster.

    ~/darwin-amd64/kube-aws render
    ~/darwin-amd64/kube-aws validate
    ~/darwin-amd64/kube-aws up

    This will set up a short-lived Certificate Authority (365 days) and SSL certs (90 days) for communication and then launch the cluster via CloudFormation. It will also store data about the cluster locally for use with kubectl.

  9. After the cluster has come up, an EIP will be output. Assign this EIP to the FQDN you used for externalDNSName in cluster.yaml if you did not allow kube-aws to configure this automatically via Route53. This is important, as it’s how the tooling will reach and control the cluster.
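
    If you manage the DNS record yourself in Route53, one way to attach the EIP to the FQDN is an A-record UPSERT via the AWS CLI, sketched below with a placeholder zone ID and hostname (a.b.c.d stands in for the EIP):

    aws route53 change-resource-record-sets --hosted-zone-id ZEXAMPLEZONEID \
      --change-batch '{"Changes":[{"Action":"UPSERT","ResourceRecordSet":
        {"Name":"kube.example.com","Type":"A","TTL":300,
         "ResourceRecords":[{"Value":"a.b.c.d"}]}}]}'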

  10. You can then start playing with the cluster. My sample session:

# Display active Kubernetes nodes
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig get nodes
ip-10-0-x-x.us-west-1.compute.internal Ready,SchedulingDisabled 19m
ip-10-0-x-x.us-west-1.compute.internal Ready 19m
ip-10-0-x-x.us-west-1.compute.internal Ready 19m
# Display name and EIP of the cluster
~/darwin-amd64/kube-aws status
Controller IP: a.b.c.d
# Launch the "nginx" Docker image as container instance "my-nginx"
# 2 replicas, wire port 80
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig run my-nginx --image=nginx --replicas=2 --port=80
deployment "my-nginx" created
# Show process list
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig get po
my-nginx-2494149703-2dhrr 1/1 Running 0 2m
my-nginx-2494149703-joqb5 1/1 Running 0 2m
# Expose port 80 on the my-nginx instances via an Elastic Load Balancer
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig expose deployment my-nginx --port=80 --type=LoadBalancer
service "my-nginx" exposed
# Show result for the service
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig get svc my-nginx -o wide
my-nginx 10.x.0.y LONGELBHOSTNAME.us-west-1.elb.amazonaws.com 80/TCP 3m run=my-nginx
# Describe the my-nginx service. This will show the CNAME of the ELB that
# was created and which exposes port 80
~/kubernetes/platforms/darwin/amd64/kubectl --kubeconfig=kubeconfig describe service my-nginx
Name: my-nginx
Namespace: default
Labels: run=my-nginx
Selector: run=my-nginx
Type: LoadBalancer
IP: 10.x.0.y
LoadBalancer Ingress: LONGELBHOSTNAME.us-west-1.elb.amazonaws.com
Port: <unset> 80/TCP
NodePort: <unset> 31414/TCP
Endpoints: 10.a.b.c:80,10.d.e.f:80
Session Affinity: None
Events:
  FirstSeen LastSeen Count From                  SubobjectPath Type   Reason                Message
  --------- -------- ----- ----                  ------------- ----   ------                -------
  4m        4m       1     {service-controller }               Normal CreatingLoadBalancer  Creating load balancer
  4m        4m       1     {service-controller }               Normal CreatedLoadBalancer   Created load balancer

Thus we have created a three-node Kubernetes cluster (one controller, two workers) running two copies of the NGINX container. We then set up an ELB to balance traffic between the instances.

Kubernetes certainly has operational complexity to trade off against features and robustness. There are a lot of moving parts to maintain, and at Stelligent we tend to recommend AWS ECS for situations in which it will suffice.




Published at DZone with permission of Jeff Bachtel, DZone MVB.
