Kubernetes Cluster on Amazon and Expose Couchbase Service
This post shows how to set up and start a Kubernetes cluster on AWS, run a Docker container in the cluster, expose a pod as a Kubernetes service, and shut down the cluster.
This blog is part of a multi-part blog series that shows how to run your applications on Kubernetes. It uses Couchbase, an open source NoSQL distributed document database, as the Docker container.
The first part (Couchbase on Kubernetes) explained how to start a Kubernetes cluster using Vagrant. That is a simple and easy way to develop, test, and deploy a Kubernetes cluster on your local machine, but it quickly becomes limiting because the resources are constrained by the local machine. So, what do you do?
A Kubernetes cluster can be installed on Amazon as well. This second part will show:
- How to set up and start the Kubernetes cluster on Amazon Web Services
- Run a Docker container in the Kubernetes cluster
- Expose a pod as a Kubernetes service
- Shut down the cluster
Here is a quick overview:
Let’s dig into the details!
Setup Kubernetes Cluster on Amazon Web Services
Getting Started on AWS EC2 provides complete instructions to start a Kubernetes cluster on Amazon. Make sure the prerequisites (an AWS account, the AWS CLI, and full EC2 access) are met before you follow these instructions.
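A quick sanity check of those prerequisites from the shell (a minimal sketch, assuming the AWS CLI is already installed):
# Confirm the AWS CLI is on the PATH
aws --version
# Configure credentials and a default region if not already done
aws configure
# A simple call that exercises EC2 access; it should list the available regions
aws ec2 describe-regions --output table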
A Kubernetes cluster can be created on Amazon as:
export KUBERNETES_PROVIDER=aws
./cluster/kube-up.sh
By default, this provisions a new VPC and a four-node Kubernetes cluster in us-west-2a (Oregon) with t2.micro instances running Ubuntu. That means five EC2 instances are created: one for the master and four for the worker nodes. Some properties are worth updating (they can be overridden as shown in the sketch below):
- Set the NUM_MINIONS environment variable to the number of worker nodes required in the cluster. Set it to 2 if you want only two worker nodes.
- The default instance size in 1.1.x is t2.micro. Set the MASTER_SIZE and MINION_SIZE environment variables to m3.medium, otherwise the nodes are going to crawl.
If you downloaded Kubernetes from github.com/kubernetes/kubernetes/releases, then all of these values can be changed in cluster/aws/config-default.sh.
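For example, a minimal sketch of overriding these values from the shell before starting the cluster (the variable names apply to the 1.1.x release series used here; check config-default.sh for your release):
# Size and instance-type overrides picked up by kube-up.sh
export KUBERNETES_PROVIDER=aws
export NUM_MINIONS=2          # two worker nodes instead of the default four
export MASTER_SIZE=m3.medium  # the default t2.micro is too small for the master
export MINION_SIZE=m3.medium  # same for the worker nodes
./kubernetes/cluster/kube-up.sh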
Starting Kubernetes on Amazon shows the following log:
./kubernetes/cluster/kube-up.sh
... Starting cluster using provider: aws
... calling verify-prereqs
... calling kube-up
Starting cluster using os distro: vivid
Uploading to Amazon S3
+++ Staging server tars to S3 Storage: kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149/devel
{
"InstanceProfile": {
"InstanceProfileId": "AIPAJMNMKZSXNWXQBHXHI",
"Roles": [
{
"RoleName": "kubernetes-master",
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Action": "sts:AssumeRole",
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
]
},
"CreateDate": "2016-02-29T23:19:17Z",
"Path": "/",
"RoleId": "AROAJW7ER37BPXX5KFTFS",
"Arn": "arn:aws:iam::598307997273:role/kubernetes-master"
}
],
"Arn": "arn:aws:iam::598307997273:instance-profile/kubernetes-master",
"CreateDate": "2016-02-29T23:19:19Z",
"Path": "/",
"InstanceProfileName": "kubernetes-master"
}
}
{
"InstanceProfile": {
"InstanceProfileId": "AIPAILRAU7RF4R2SDCULG",
"Path": "/",
"Arn": "arn:aws:iam::598307997273:instance-profile/kubernetes-minion",
"Roles": [
{
"Path": "/",
"AssumeRolePolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Principal": {
"Service": "ec2.amazonaws.com"
}
}
],
"Version": "2012-10-17"
},
"RoleName": "kubernetes-minion",
"Arn": "arn:aws:iam::598307997273:role/kubernetes-minion",
"RoleId": "AROAIBEPV6VW4IEE6MRHS",
"CreateDate": "2016-02-29T23:19:21Z"
}
],
"InstanceProfileName": "kubernetes-minion",
"CreateDate": "2016-02-29T23:19:22Z"
}
}
Using SSH key with (AWS) fingerprint: 39:b3:cb:c1:af:6a:86:de:98:95:01:3d:9a:56:bb:8b
Creating vpc.
Adding tag to vpc-7b46ac1f: Name=kubernetes-vpc
Adding tag to vpc-7b46ac1f: KubernetesCluster=kubernetes
Using VPC vpc-7b46ac1f
Creating subnet.
Adding tag to subnet-cc906fa8: KubernetesCluster=kubernetes
Using subnet subnet-cc906fa8
Creating Internet Gateway.
Using Internet Gateway igw-40055525
Associating route table.
Creating route table
Adding tag to rtb-f2dc1596: KubernetesCluster=kubernetes
Associating route table rtb-f2dc1596 to subnet subnet-cc906fa8
Adding route to route table rtb-f2dc1596
Using Route Table rtb-f2dc1596
Creating master security group.
Creating security group kubernetes-master-kubernetes.
Adding tag to sg-308b3357: KubernetesCluster=kubernetes
Creating minion security group.
Creating security group kubernetes-minion-kubernetes.
Adding tag to sg-3b8b335c: KubernetesCluster=kubernetes
Using master security group: kubernetes-master-kubernetes sg-308b3357
Using minion security group: kubernetes-minion-kubernetes sg-3b8b335c
Starting Master
Adding tag to i-b71a6f70: Name=kubernetes-master
Adding tag to i-b71a6f70: Role=kubernetes-master
Adding tag to i-b71a6f70: KubernetesCluster=kubernetes
Waiting for master to be ready
Attempt 1 to check for master node
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
Waiting for instance i-b71a6f70 to spawn
Sleeping for 3 seconds...
[master running @52.34.244.195]
Attaching persistent data volume (vol-e072d316) to master
{
"Device": "/dev/sdb",
"State": "attaching",
"InstanceId": "i-b71a6f70",
"VolumeId": "vol-e072d316",
"AttachTime": "2016-03-02T18:10:15.985Z"
}
Attempt 1 to check for SSH to master [ssh to master working]
Attempt 1 to check for salt-master [salt-master not working yet]
Attempt 2 to check for salt-master [salt-master not working yet]
Attempt 3 to check for salt-master [salt-master not working yet]
Attempt 4 to check for salt-master [salt-master not working yet]
Attempt 5 to check for salt-master [salt-master not working yet]
Attempt 6 to check for salt-master [salt-master not working yet]
Attempt 7 to check for salt-master [salt-master not working yet]
Attempt 8 to check for salt-master [salt-master not working yet]
Attempt 9 to check for salt-master [salt-master not working yet]
Attempt 10 to check for salt-master [salt-master not working yet]
Attempt 11 to check for salt-master [salt-master not working yet]
Attempt 12 to check for salt-master [salt-master not working yet]
Attempt 13 to check for salt-master [salt-master not working yet]
Attempt 14 to check for salt-master [salt-master running]
Creating minion configuration
Creating autoscaling group
0 minions started; waiting
0 minions started; waiting
0 minions started; waiting
0 minions started; waiting
2 minions started; ready
Waiting 3 minutes for cluster to settle
..................Re-running salt highstate
Waiting for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This might loop forever if there was some uncaught error during start up.
Kubernetes cluster created.
cluster "aws_kubernetes" set.
user "aws_kubernetes" set.
context "aws_kubernetes" set.
switched to context "aws_kubernetes".
Wrote config for aws_kubernetes to /Users/arungupta/.kube/config
Sanity checking cluster...
Attempt 1 to check Docker on node @ 52.37.172.215 ...not working yet
Attempt 2 to check Docker on node @ 52.37.172.215 ...not working yet
Attempt 3 to check Docker on node @ 52.37.172.215 ...working
Attempt 1 to check Docker on node @ 52.27.90.19 ...working
Kubernetes cluster is running. The master is running at:
https://52.34.244.195
The user name and password to use is located in /Users/arungupta/.kube/config.
... calling validate-cluster
Waiting for 2 ready nodes. 1 ready nodes, 2 registered. Retrying.
Found 2 node(s).
NAME LABELS STATUS AGE
ip-172-20-0-92.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-92.us-west-2.compute.internal Ready 56s
ip-172-20-0-93.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-93.us-west-2.compute.internal Ready 35s
Validate output:
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok nil
scheduler Healthy ok nil
etcd-0 Healthy {"health": "true"} nil
etcd-1 Healthy {"health": "true"} nil
Cluster validation succeeded
Done, listing cluster services:
Kubernetes master is running at https://52.34.244.195
Elasticsearch is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging
Heapster is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/heapster
Kibana is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/kibana-logging
KubeDNS is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/kube-dns
KubeUI is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/kube-ui
Grafana is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/monitoring-grafana
InfluxDB is running at https://52.34.244.195/api/v1/proxy/namespaces/kube-system/services/monitoring-influxdb
The Amazon Console shows that three instances are created: one for the master node and two for the worker nodes.
The username and password for the Kubernetes master are stored in /Users/arungupta/.kube/config. Look for a section like:
- name: aws_kubernetes
user:
client-certificate-data: DATA
client-key-data: DATA
password: 3FkxcAURLCWBXc9H
username: admin
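Rather than opening the file by hand, the admin credentials can be pulled out with a quick grep (a minimal sketch; adjust the path if your kubeconfig lives elsewhere):
# Print the admin username and password that kube-up.sh wrote to the kubeconfig
grep -E "username|password" ~/.kube/config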
Run Docker Container in Kubernetes Cluster on Amazon
Now that the cluster is up and running, get a list of all of the nodes:
./kubernetes/cluster/kubectl.sh get no
NAME LABELS STATUS AGE
ip-172-20-0-92.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-92.us-west-2.compute.internal Ready 18m
ip-172-20-0-93.us-west-2.compute.internal kubernetes.io/hostname=ip-172-20-0-93.us-west-2.compute.internal Ready 18m
It shows two worker nodes.
Create a new Couchbase pod:
./kubernetes/cluster/kubectl.sh run couchbase --image=arungupta/couchbase
replicationcontroller "couchbase" created
Notice how the image name can be specified on the CLI. This command creates a Replication Controller with a single pod. The pod uses the arungupta/couchbase Docker image, which provides a pre-configured Couchbase server. Any Docker image can be specified here.
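The same pod can be created with the replica count and container port spelled out explicitly; a sketch (exact flag support can vary slightly across kubectl versions):
# Equivalent invocation with an explicit replica count and container port
./kubernetes/cluster/kubectl.sh run couchbase --image=arungupta/couchbase --replicas=1 --port=8091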
Get all the RC resources:
./kubernetes/cluster/kubectl.sh get rc
CONTROLLER CONTAINER(S) IMAGE(S) SELECTOR REPLICAS AGE
couchbase couchbase arungupta/couchbase run=couchbase 1 12m
This shows the Replication Controller that is created for you.
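Since the pod is managed by a Replication Controller, it can also be scaled after the fact; a quick sketch:
# Ask the Replication Controller to maintain two pod replicas instead of one
./kubernetes/cluster/kubectl.sh scale rc couchbase --replicas=2
# Scale back down to a single replica for the rest of this walkthrough
./kubernetes/cluster/kubectl.sh scale rc couchbase --replicas=1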
Get all the Pods:
./kubernetes/cluster/kubectl.sh get po
NAME READY STATUS RESTARTS AGE
couchbase-kil4y 1/1 Running 0 12m
The output shows the Pod that is created as part of the Replication Controller.
Get more details about the Pod:
./kubernetes/cluster/kubectl.sh describe po couchbase-kil4y
Name: couchbase-kil4y
Namespace: default
Image(s): arungupta/couchbase
Node: ip-172-20-0-93.us-west-2.compute.internal/172.20.0.93
Start Time: Wed, 02 Mar 2016 10:25:47 -0800
Labels: run=couchbase
Status: Running
Reason:
Message:
IP: 10.244.1.4
Replication Controllers: couchbase (1/1 replicas created)
Containers:
couchbase:
Container ID: docker://1c33e4f28978a5169a5d166add7c763de59839ed1f12865f4643456efdc0c60e
Image: arungupta/couchbase
Image ID: docker://080e2e96b3fc22964f3dec079713cdf314e15942d6eb135395134d629e965062
QoS Tier:
cpu: Burstable
Requests:
cpu: 100m
State: Running
Started: Wed, 02 Mar 2016 10:26:18 -0800
Ready: True
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready True
Volumes:
default-token-xuxn5:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-xuxn5
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
13m 13m 1 {scheduler } Scheduled Successfully assigned couchbase-kil4y to ip-172-20-0-93.us-west-2.compute.internal
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} implicitly required container POD Pulled Container image "gcr.io/google_containers/pause:0.8.0" already present on machine
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} implicitly required container POD Created Created with docker id 3830f504a7b6
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} implicitly required container POD Started Started with docker id 3830f504a7b6
13m 13m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Pulling Pulling image "arungupta/couchbase"
12m 12m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Pulled Successfully pulled image "arungupta/couchbase"
12m 12m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Created Created with docker id 1c33e4f28978
12m 12m 1 {kubelet ip-172-20-0-93.us-west-2.compute.internal} spec.containers{couchbase} Started Started with docker id 1c33e4f28978
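To see what the Couchbase container is doing inside the pod, tail its logs as well (the pod name is the generated one shown above):
# Print the container logs for the generated pod
./kubernetes/cluster/kubectl.sh logs couchbase-kil4y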
Expose Pod on Kubernetes as Service
Now that the pod is running, how do you access the Couchbase server? You need to expose it outside the Kubernetes cluster.
The kubectl expose command takes a pod, service, or replication controller and exposes it as a Kubernetes Service. Let's take the replication controller created previously and expose it:
./kubernetes/cluster/kubectl.sh expose rc couchbase --target-port=8091 --port=8091 --type=LoadBalancer
service "couchbase" exposed
Get more details about the Service:
./kubernetes/cluster/kubectl.sh describe svc couchbase
Name: couchbase
Namespace: default
Labels: run=couchbase
Selector: run=couchbase
Type: LoadBalancer
IP: 10.0.158.93
LoadBalancer Ingress: a44d3f016e0a411e5888f0206c9933da-1869988881.us-west-2.elb.amazonaws.com
Port: <unnamed> 8091/TCP
NodePort: <unnamed> 32415/TCP
Endpoints: 10.244.1.4:8091
Session Affinity: None
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
7s 7s 1 {service-controller } CreatingLoadBalancer Creating load balancer
5s 5s 1 {service-controller } CreatedLoadBalancer Created load balancer
The LoadBalancer Ingress attribute gives the address of the load balancer, which is now publicly accessible.
Wait about three minutes for the load balancer to settle, then access it on port 8091; the login page for the Couchbase Web Console shows up.
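The load balancer hostname can also be pulled straight out of the service; a sketch (jsonpath output may not be available on very old kubectl builds, so a grep over the describe output is the safe fallback):
# Grab the ELB hostname from the service status
./kubernetes/cluster/kubectl.sh get svc couchbase -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Fallback: scrape it from the describe output
./kubernetes/cluster/kubectl.sh describe svc couchbase | grep "LoadBalancer Ingress"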
Enter the credentials "Administrator" and "password" to see the Web Console.
You have now accessed your pod from outside the Kubernetes cluster.
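The same access can be verified from the command line against the Couchbase REST API, using the credentials above (a sketch; substitute the LoadBalancer Ingress hostname reported for your service):
# Query basic cluster information through the public load balancer
ELB=a44d3f016e0a411e5888f0206c9933da-1869988881.us-west-2.elb.amazonaws.com
curl -u Administrator:password http://$ELB:8091/pools/default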
Shut down Kubernetes Cluster
Finally, shut down the cluster using the cluster/kube-down.sh script:
./kubernetes/cluster/kube-down.sh
Bringing down cluster using provider: aws
Deleting ELBs in: vpc-7b46ac1f
Waiting for ELBs to be deleted
All ELBs deleted
Deleting auto-scaling group: kubernetes-minion-group
Deleting auto-scaling launch configuration: kubernetes-minion-group
Deleting instances in VPC: vpc-7b46ac1f
Waiting for instances to be deleted
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-45077282 i-44077283 i-b71a6f70
Sleeping for 3 seconds...
Instances not yet deleted: i-44077283 i-b71a6f70
Sleeping for 3 seconds...
All instances deleted
Deleting VPC: vpc-7b46ac1f
Cleaning up security group: sg-308b3357
Cleaning up security group: sg-3b8b335c
Cleaning up security group: sg-e3813984
Deleting security group: sg-308b3357
Deleting security group: sg-3b8b335c
Deleting security group: sg-e3813984
Done
For a complete cleanup, you still need to explicitly delete the S3 bucket where the Kubernetes binaries are staged.
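A sketch of that final step with the AWS CLI; the bucket is the kubernetes-staging-* bucket reported by kube-up.sh when it staged the server tars (double-check the name before deleting):
# List the staging bucket(s) created by kube-up.sh
aws s3 ls | grep kubernetes-staging
# Remove the bucket and everything in it (irreversible; verify the name first)
aws s3 rb s3://kubernetes-staging-0eaf81fbc51209dd47c13b6d8b424149 --force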
Enjoy!