Starting a Kubernetes 1.5.x Cluster
With the latest Kubernetes release out for a month, let's look at how you can deploy a cluster on AWS while taking the latest changes and scripts into account.
Kubernetes 1.5.0 was released just about a month ago! The key themes for the release are:
- StatefulSets (ex-PetSets)
  - StatefulSets are beta now (fixes and stabilization)
- Improved Federation Support
  - New command: kubefed (a brief sketch follows this list)
  - Support for DaemonSets, Deployments, and ConfigMaps
- Simplified Cluster Deployment
  - Improvements to kubeadm
  - HA setup for master
- Node Robustness and Extensibility
  - Windows Server Container support
  - CRI for pluggable container runtimes
  - kubelet API supports authentication and authorization
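As an aside, the new kubefed command bootstraps a federation control plane in a host cluster and joins other clusters to it. A minimal sketch, assuming two pre-existing kubeconfig contexts; the federation name, context names, and DNS zone below are placeholders:

# Bootstrap the federation control plane in the host cluster
# (federation name, context name, and DNS zone are hypothetical placeholders).
kubefed init myfederation \
    --host-cluster-context=host-cluster \
    --dns-zone-name="example.com."

# Join an existing cluster (known by its kubeconfig context) to the federation.
kubefed join us-west \
    --host-cluster-context=host-cluster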
Read CHANGELOG for complete details.
Up until 1.5.0, starting up a Kubernetes cluster on Amazon Web Services was pretty straightforward:
NUM_NODES=2 NODE_SIZE=m3.medium KUBERNETES_PROVIDER=aws ./cluster/kube-up.sh
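NUM_NODES, NODE_SIZE, and KUBERNETES_PROVIDER are only some of the knobs; the AWS provider scripts read their defaults from cluster/aws/config-default.sh, which is the authoritative list. A hedged example of setting a couple more of them (variable names may vary by release):

# Hypothetical tweak: pick the AWS zone and master instance size explicitly.
# Check cluster/aws/config-default.sh in your release for the supported variables.
KUBE_AWS_ZONE=us-west-2a MASTER_SIZE=m3.medium NUM_NODES=2 NODE_SIZE=m3.medium \
  KUBERNETES_PROVIDER=aws ./cluster/kube-up.sh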
But with 1.5.0 and 1.5.1, the command fails with the error:
... Starting cluster in us-west-2a using provider aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: jessie
!!! Cannot find kubernetes-server-linux-amd64.tar.gz
What happened?
Basically, the Kubernetes release bundle had grown bigger than 1 GB, so it was split into a basic install bundle and separate client and server binaries. The updated installation process requires us to download the basic install bundle of 4.57 MB (yes, MB instead of GB). It includes cluster scripts like kubectl, kube-up.sh, and kube-down.sh, plus examples, docs, and other scripts. This bundle then downloads the client and server binaries; the server binary is the base image that is used to start the EC2 instances. But instead of automating the download of the binaries, somebody decided to add a README in the server directory.
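In other words, with 1.5.0 and 1.5.1 you had to fetch the pieces yourself. A minimal sketch of that manual workaround for a given 1.5.x release (v1.5.2 shown here), assuming the same Google Cloud Storage release path that the transcript below uses and the get-kube-binaries.sh helper shipped in the bundle:

# Grab the small basic install bundle (scripts, docs, examples), not the 1+ GB release.
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.5.2/kubernetes.tar.gz
tar -xzf kubernetes.tar.gz
cd kubernetes

# Fetch the client and server binaries that kube-up.sh needs
# (this helper ships with the bundle; the server README points to it).
./cluster/get-kube-binaries.sh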
This was a big user-experience change, and there were no links explaining it in either the README bundled with the release or the release blog. Ouch!
Anyway, this was filed as #38728 and fixed promptly. But the fix missed the 1.5.1 release and has finally shown up in the 1.5.2 release, out today.
So, how do you run a Kubernetes 1.5.2 cluster on AWS?
It is more seamlessly integrated now, but you need to hit Enter a couple of times to accept the default values:
NUM_NODES=2 NODE_SIZE=m3.medium KUBERNETES_PROVIDER=aws ./cluster/kube-up.sh
... Starting cluster in us-west-2a using provider aws
... calling verify-prereqs
... calling verify-kube-binaries
!!! kubectl appears to be broken or missing
!!! Cannot find kubernetes-server-linux-amd64.tar.gz
Required binaries appear to be missing. Do you wish to download them? [Y/n]
<ENTER>
Kubernetes release: v1.5.2
Server: linux/amd64 (to override, set KUBERNETES_SERVER_ARCH)
Client: darwin/amd64 (autodetected)
Will download kubernetes-server-linux-amd64.tar.gz from https://storage.googleapis.com/kubernetes-release/release/v1.5.2
Will download and extract kubernetes-client-darwin-amd64.tar.gz from https://storage.googleapis.com/kubernetes-release/release/v1.5.2
Is this ok? [Y]/n
<ENTER>
Warning: Keep-alive functionality somewhat crippled due to missing support in
Warning: your operating system!
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 299M 100 299M 0 0 2132k 0 0:02:23 0:02:23 --:--:-- 2439k
md5sum(kubernetes-server-linux-amd64.tar.gz)=7947bd430c4ffc358a6784e51c1d2b0f
sha1sum(kubernetes-server-linux-amd64.tar.gz)=4dbdcfa623412dac6be8fd5a4209a1f1423e8d30
Warning: Keep-alive functionality somewhat crippled due to missing support in
Warning: your operating system!
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 22.0M 100 22.0M 0 0 1810k 0 0:00:12 0:00:12 --:--:-- 2296k
md5sum(kubernetes-client-darwin-amd64.tar.gz)=f55a8f9c300042e9b16e327ad2788521
sha1sum(kubernetes-client-darwin-amd64.tar.gz)=c29ab99e22146ba0a3da5c25de62ed13108b8ba9
Extracting /Users/arungupta/tools/kubernetes/kubernetes-1.5.2/kubernetes/client/kubernetes-client-darwin-amd64.tar.gz into /Users/arungupta/tools/kubernetes/kubernetes-1.5.2/kubernetes/platforms/darwin/amd64
Add '/Users/arungupta/tools/kubernetes/kubernetes-1.5.2/kubernetes/client/bin' to your PATH to use newly-installed binaries.
... calling kube-up
Starting cluster using os distro: jessie
Uploading to Amazon S3
...
After that, the cluster is created as usual, and the output looks like this:
0 minions started; waiting
0 minions started; waiting
2 minions started; ready
Waiting for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This might loop forever if there was some uncaught error during start up.
.........................................................................................................................................................................Kubernetes cluster created.
Sanity checking cluster...
Attempt 1 to check Docker on node @ 35.166.195.134 ...not working yet
Attempt 2 to check Docker on node @ 35.166.195.134 ...working
Attempt 1 to check Docker on node @ 35.166.188.211 ...not working yet
Attempt 2 to check Docker on node @ 35.166.188.211 ...working
Kubernetes cluster is running. The master is running at:
https://35.165.234.219
The user name and password to use is located in /Users/arungupta/.kube/config.
... calling validate-cluster
No resources found.
Waiting for 2 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 2 ready nodes. 0 ready nodes, 2 registered. Retrying.
Waiting for 2 ready nodes. 0 ready nodes, 2 registered. Retrying.
Found 2 node(s).
NAME STATUS AGE
ip-172-20-0-206.us-west-2.compute.internal Ready 45s
ip-172-20-0-246.us-west-2.compute.internal Ready 42s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://35.165.234.219
Elasticsearch is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard
Grafana is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://35.165.234.219/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
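At this point you can point kubectl at the new cluster and poke around. A quick sanity check, using the client binary and kubeconfig paths from the output above:

# Use the freshly downloaded client binaries (path from the output above).
export PATH=$PATH:/Users/arungupta/tools/kubernetes/kubernetes-1.5.2/kubernetes/client/bin

# kube-up.sh wrote the credentials to ~/.kube/config, so kubectl just works.
kubectl get nodes
kubectl get pods --namespace=kube-system
kubectl cluster-info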
Even though your Kubernetes cluster on AWS starts up fine this way, the kube-up.sh script is going to be deprecated soon. The recommended way is to create a Kubernetes cluster on Amazon using kops.
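For reference, a kops equivalent of the kube-up.sh command above looks roughly like this; the S3 bucket name and cluster domain are placeholders, so check the kops docs for the current flags:

# kops stores cluster state in an S3 bucket (hypothetical name).
export KOPS_STATE_STORE=s3://my-kops-state-store

# Create a two-node cluster in the same zone and with the same instance size as above.
kops create cluster \
  --name=k8s.example.com \
  --zones=us-west-2a \
  --node-count=2 \
  --node-size=m3.medium \
  --yes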
Now that your Kubernetes cluster is up, what do you do next?
- Follow the detailed steps for the Kubernetes for Java Developers workshop.
- Run a Couchbase cluster in Kubernetes.
- Learn more about Couchbase clusters in containers.
Published at DZone with permission of Arun Gupta, DZone MVB.