Using Kubernetes on Google GKE

The Google Container Engine (GKE) is a cloud service available on Google Cloud that offers hosted Kubernetes clusters. This new service allows you to create a Kubernetes cluster on demand using the Google API. A cluster is composed of a master node and a set of compute nodes that act as container VMs.

If you prefer to watch a short screencast of this post, head over to the Skippbox YouTube channel.

You will need an account on Google Cloud Platform, and your gcloud SDK needs to be up to date to use the Container Engine preview. If you have not yet installed the Google Cloud SDK, do it now:

$ curl https://sdk.cloud.google.com | bash

You can now log in to Google Cloud and update the CLI components to make sure you have the latest versions:

$ gcloud auth login
$ gcloud components update

Install the kubectl Kubernetes client:

$ gcloud components install kubectl

Starting a Kubernetes cluster using the GKE service requires a single command:

$ gcloud container clusters create cook --num-nodes 1 --machine-type g1-small
Creating cluster cook...done.
Created [https://container.googleapis.com/v1/projects/sylvan-plane-862/zones/ \
us-central1-f/clusters/cook].
kubeconfig entry generated for cook.
NAME ZONE MASTER_VERSION ... STATUS
cook us-central1-f 1.0.3 ... RUNNING

Your cluster IP addresses, project name, and zone will differ from what is shown here. What you do see is that a Kubernetes configuration file, kubeconfig, was generated for you. It is located at ~/.kube/config and contains the endpoint of your container cluster as well as the credentials to use it.
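
You can quickly confirm that kubectl picked up this configuration; both commands below are standard kubectl subcommands:

$ kubectl config view
$ kubectl cluster-info

The first prints the client configuration written to ~/.kube/config, and the second shows the master endpoint kubectl will talk to.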

You could also create a cluster through the Google Cloud web console:

(Screenshot: creating a container cluster from the Google Cloud web console.)

Once your cluster is up, you can submit containers to it, meaning that you can interact with the underlying Kubernetes master node to launch a group of containers on the set of nodes in your cluster. Groups of containers are defined as pods. The gcloud CLI gives you a convenient way to define simple pods and submit them to the cluster. Next, you are going to launch a container using the tutum/wordpress image, which contains WordPress and a MySQL database in a single image. When you installed the gcloud CLI, it also installed the Kubernetes client kubectl. Verify that kubectl is in your path; it will use the configuration that was autogenerated when you created the cluster, which allows you to launch containers from your local machine on the remote container cluster securely.
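
Before launching anything, it does no harm to run a quick sanity check that the client is installed and can reach the cluster; both commands below are standard:

$ which kubectl
$ kubectl version

You can now launch the WordPress container: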

$ kubectl run wordpress --image=tutum/wordpress --port=80
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
wordpress-0d58l 1/1 Running 0 1m

Once the container is scheduled on one of the cluster nodes, you need to create a Kubernetes service to expose the application running in the container to the outside world. This is done again with kubectl:

$ kubectl expose rc wordpress --type=LoadBalancer
NAME LABELS SELECTOR IP(S) PORT(S)
wordpress run=wordpress run=wordpress 80/TCP

The expose command creates a Kubernetes service (one of the three core Kubernetes primitives, alongside pods and replication controllers), and it also obtains a public IP address from a load balancer. The result is that when you list the services in your container cluster, you can see the wordpress service with an internal IP and a public IP at which you can reach the WordPress UI from your laptop:

$ kubectl get services
NAME ... SELECTOR IP(S) PORT(S)
wordpress ... run=wordpress 10.95.252.182 80/TCP
                            104.154.82.185

You will then be able to enjoy WordPress.
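
For reference, the service that expose created corresponds roughly to the manifest below. This is a hand-written sketch, not output captured from the cluster, so generated fields and labels on your cluster may differ slightly:

{
  "kind": "Service",
  "apiVersion": "v1",
  "metadata": {
    "name": "wordpress",
    "labels": {
      "run": "wordpress"
    }
  },
  "spec": {
    "type": "LoadBalancer",
    "selector": {
      "run": "wordpress"
    },
    "ports": [
      {
        "port": 80,
        "protocol": "TCP"
      }
    ]
  }
}

Creating this file and running kubectl create -f on it would have the same effect as the expose command.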
The kubectl CLI can be used to manage all the resources in a Kubernetes cluster (e.g., pods, services, replication controllers, nodes). As shown in the following snippet of the kubectl usage output, you can create, delete, describe, and list all of these resources:

$ kubectl -h
kubectl controls the Kubernetes cluster manager.

Find more information at https://github.com/kubernetes/kubernetes.

Usage: 
 kubectl [flags]
 kubectl [command]

Available Commands: 
 get Display one or many resources
 describe Show details of a specific resource ...
 create Create a resource by filename or stdin
 replace Replace a resource by filename or stdin.
 patch Update field(s) of a resource by stdin.
 delete Delete a resource by filename, or ...
...
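
For example, with the resources created so far you could run commands such as the following (the pod name suffix will differ on your cluster):

$ kubectl get nodes
$ kubectl get rc
$ kubectl describe service wordpress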

Although you can launch simple pods consisting of a single container, you can also specify a more advanced pod defined in a JSON or YAML file by using the -f option:

$ kubectl create -f /path/to/pod/pod.json

A pod can be described in YAML or JSON. Here, let's write the pod in a JSON file, using the newly released Kubernetes v1 API version. This pod will start Nginx:

{
 "kind": "Pod",
 "apiVersion": "v1",
 "metadata": {
   "name": "nginx",
   "labels": {
     "app": "nginx"
   }
 },
 "spec": {
   "containers": [
      {
        "name": "nginx",
        "image": "nginx",
        "ports": [
          {
            "containerPort": 80,
            "protocol": "TCP"
          }
        ]
      }
   ]
 }
}
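
The same pod can also be expressed in YAML; the following is an equivalent definition of the manifest above, and kubectl create -f accepts either format:

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
      protocol: TCP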

Start the pod and check its status. Once it is running, and provided you have a firewall rule that opens port 80 to the cluster nodes, you will be able to see the Nginx welcome page. Additional examples are available in the Kubernetes GitHub repository.

$ kubectl create -f nginx.json
pods/nginx
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 20s
wordpress 1/1 Running 0 17m
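
If the Nginx welcome page does not load, port 80 is probably not open on the cluster nodes. A firewall rule such as the following would open it; the rule name allow-http is only an example, and in practice you may want to restrict the rule to your cluster's node tag and to specific source ranges:

$ gcloud compute firewall-rules create allow-http --allow tcp:80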

To clean things up, delete the Nginx pod, the WordPress replication controller and its service, and then delete your cluster:

$ kubectl delete pods nginx
$ kubectl delete rc wordpress
$ kubectl delete service wordpress
$ gcloud container clusters delete cook

That is it! You have created a Kubernetes cluster in the Google Cloud and launched your first containers as pods. You can now experiment with replication controllers and more advanced examples of applications. Enjoy.
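
For example, scaling the WordPress replication controller from earlier to three replicas (on a running cluster, before you delete it) is a single command:

$ kubectl scale rc wordpress --replicas=3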

Published at DZone with permission of Sebastien Goasguen, DZone MVB.
