Gluster Storage for Kubernetes With Heketi
Running stateful applications on Kubernetes is doable, and we'll use Gluster, Heketi, and Google Cloud Platform to see it in action.
A key challenge when running containers on Kubernetes is managing stateful applications, such as a database. In this post, we'll use Gluster along with Kubernetes to demonstrate how you can run stateful applications on Kubernetes.
GlusterFS and Heketi
GlusterFS is an open-source, scalable network file system that can be built from off-the-shelf hardware. Gluster allows the creation of various types of volumes, such as Distributed, Replicated, Striped, and Dispersed, as well as many combinations of these, as described in detail in the Gluster documentation.
Heketi is a Gluster Volume manager that provides a RESTful interface to create/manage Gluster volumes. Heketi makes it easy for cloud services such as Kubernetes, OpenShift, and OpenStack Manila to interact with Gluster clusters and provision volumes as well as manage brick layout.
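To make that interface concrete, here is a minimal sketch of driving Heketi from the command line with heketi-cli. The server URL is a placeholder, not an address from this tutorial:

```shell
# Point the CLI at a running Heketi service (placeholder address).
export HEKETI_CLI_SERVER=http://heketi.example.com:8080

# Load the cluster topology (nodes and raw devices) into Heketi.
heketi-cli topology load --json=topology.json

# Create a 10 GB volume replicated across three bricks.
heketi-cli volume create --size=10 --replica=3

# List the volumes Heketi is managing.
heketi-cli volume list
```

Kubernetes uses the same REST API behind the scenes when a storage class asks Heketi for a volume.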
The purpose of this post is to demonstrate how to create a GlusterFS cluster, manage this cluster using Heketi to provision volumes, and then install a demo application to use the Gluster volume.
We will create a 4-node Kubernetes cluster with two unformatted disks on each node. We will then install GlusterFS as a DaemonSet and Heketi as a service to create Gluster volumes, which will be consumed by a Postgres database running as a StatefulSet. Another application will add one entry per second to the Postgres DB, and a Flask application will let us view the contents of the database. Finally, we will move the Postgres DB from one node to another and show that the data remains accessible.
Prerequisites
- A Google Cloud Platform account with admin privileges.
- The Kubernetes Engine and Compute Engine APIs should be enabled for the project.
Kubernetes Cluster Creation and Bootstrapping
- Log onto the Google Cloud Console and open Google Cloud Shell.
- Create a directory called “gluster-heketi” and cd into it.
- Clone the git repo using the following command:
git clone https://github.com/infracloudio/gluster-heketi-k8s.git .
- Edit “cluster_and_disks.sh” and change the variable PROJECT_ID to your GCP project.
- The CLUSTER_NAME, ZONE, and NODE_COUNT variables can be changed if needed.
- Execute the script “cluster_and_disks.sh”
This will create a 4-node GKE cluster, and each node will have 2 unformatted disks attached to it. This script will also generate a topology file, which is going to be used as a Kubernetes configmap. The topology contains details of the Kubernetes nodes in the cluster and their mounted disks.
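For reference, the script does roughly the following; this is a sketch, not the script's exact contents, and the project, zone, and disk-size values below are illustrative placeholders:

```shell
#!/bin/bash
# Sketch of cluster_and_disks.sh -- values are illustrative.
PROJECT_ID=my-gcp-project
CLUSTER_NAME=gluster-heketi
ZONE=us-east1-b
NODE_COUNT=4

# Create the GKE cluster.
gcloud container clusters create "$CLUSTER_NAME" \
  --project "$PROJECT_ID" --zone "$ZONE" --num-nodes "$NODE_COUNT"

# Create and attach two blank disks to every node in the cluster.
for node in $(gcloud compute instances list \
    --filter="name~$CLUSTER_NAME" --format="value(name)"); do
  for i in 1 2; do
    gcloud compute disks create "${node}-disk-${i}" \
      --zone "$ZONE" --size 100GB
    gcloud compute instances attach-disk "$node" \
      --disk "${node}-disk-${i}" --zone "$ZONE"
  done
done
```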
Note: Following these steps will create resources that are chargeable by GCP. Be sure to follow the cleanup steps at the end of this post to delete all resources once you are done with the demo.
Setting Up Gluster, Heketi, and the Storage Class
- In Network Configuration, create a firewall rule called “allow-heketi” and open the ports Gluster and Heketi require.
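The same rule can be created from Cloud Shell. A hedged equivalent of the console steps; the exact port list depends on your setup, but 24007-24008 and 49152 upward are Gluster's standard daemon and brick ports, and 30000-32767 is GKE's default NodePort range (needed later for the Heketi service):

```shell
# Illustrative firewall rule -- adjust ranges to your environment.
gcloud compute firewall-rules create allow-heketi \
  --allow tcp:24007-24008,tcp:49152-49251,tcp:30000-32767
```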
- Run the following command to import the ConfigMap into Kubernetes:
kubectl create -f heketi-turnkey-config.yaml
- Run the following command to start a Gluster daemon set and install the Heketi service. This creates a temporary pod that is based on a container created by Janos Lenart.
kubectl create -f heketi-turnkey.yaml
- The logs of this pod can be followed with:
kubectl logs heketi-turnkey -f
- The above YAML file creates a DaemonSet with the Gluster installation, which takes control of all nodes and devices listed in the topology file. It also installs Heketi as a pod and exposes the Heketi API via a service.
- The next step is to create our storage class, which will allow dynamic creation of Gluster volumes by calling the Heketi service. The following steps are needed to set it up correctly:
- Note the node port of the Heketi service.
- Open this port in your allow-heketi firewall rule.
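The node port can be read straight off the service object; the service name “heketi” below is an assumption — check `kubectl get svc` for the actual name in your cluster:

```shell
# Print the NodePort assigned to the Heketi service.
kubectl get svc heketi -o jsonpath='{.spec.ports[0].nodePort}'
```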
- Note the public IP of one of the Kubernetes nodes and update the resturl key in “storage_class.yaml” with that public IP and the node port from above. (In a real-world implementation, you might have a domain name for the Kubernetes cluster endpoint.)
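After editing, the file should end up looking roughly like this. The class name and the address are placeholders, while kubernetes.io/glusterfs and resturl are the real provisioner and parameter names:

```shell
# Display-only sketch of storage_class.yaml after editing.
cat <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gluster-heketi
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://<node-public-ip>:<node-port>"
EOF
```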
- Now run “kubectl create -f storage_class.yaml” so Kubernetes can talk to Heketi in order to create volumes as needed.
- Next, run “kubectl create -f postgres-srv-ss.yaml” to create a Postgres pod in a StatefulSet backed by a Gluster volume. The StatefulSet is exposed by a service named “postgres”.
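The relevant parts of such a StatefulSet look roughly like the sketch below; image, labels, and storage size are illustrative rather than the repo's actual manifest, but the pod name postgres-ss-0 used later implies a StatefulSet named postgres-ss, and the volumeClaimTemplates section is what ties the pod to the Gluster-backed storage class:

```shell
# Display-only sketch of the StatefulSet in postgres-srv-ss.yaml.
cat <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgres-ss
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:10
        volumeMounts:
        - name: pgdata
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
  - metadata:
      name: pgdata
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: gluster-heketi   # class created earlier (name assumed)
      resources:
        requests:
          storage: 10Gi
EOF
```

Each claim in volumeClaimTemplates triggers the storage class, which in turn calls Heketi to carve out a Gluster volume on demand.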
- Run “kubectl get pvc” to check that the PersistentVolumeClaim was created and bound as expected.
- Run “kubectl create -f pgwriter-deployment.yaml”. This will create a deployment for a pod that will write 1 entry per second to the Postgres DB created above.
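Conceptually, the writer amounts to a one-row-per-second insert loop; this is a sketch of the idea only — the table, column, host, and credentials are assumptions, not the repo's actual code:

```shell
# Illustrative write loop (not the repo's pgwriter implementation).
# "postgres" is the service name exposing the StatefulSet.
while true; do
  psql -h postgres -U postgres -c \
    "INSERT INTO entries (ts) VALUES (now());"
  sleep 1
done
```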
- Run “kubectl create -f frontend.yaml”. This creates a deployment of a Flask-based frontend, which allows querying the Postgres DB. A Service of type LoadBalancer is also created to front the deployment.
- Enter the LoadBalancer IP in the browser to access the front end. Enter (in HH:MM format) the previous minute (system time) and click Submit. A list of entries with timestamps and counter values is shown.
- Run “kubectl describe pod postgres-ss-0” and note the node on which the pod is running.
- Now we will delete the StatefulSet (effectively the DB) and create it again. Run the following commands:
kubectl delete -f postgres-srv-ss.yaml
kubectl create -f postgres-srv-ss.yaml
- Run “kubectl describe pod postgres-ss-0” and observe that the pod is assigned to a new node.
- Go back to the frontend and enter the same HH:MM as before. The data is intact despite the database having moved to a different host.
Cleanup
- Run the following command to delete the frontend and its LoadBalancer:
kubectl delete -f frontend.yaml
- Delete the GKE cluster:
- Once the GKE cluster is deleted, delete the disks:
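The cleanup commands look roughly like the following; substitute the CLUSTER_NAME and ZONE values you used in cluster_and_disks.sh, and note that the disk-name filter is an assumption about how the script named the disks:

```shell
# Delete the GKE cluster (names are placeholders).
gcloud container clusters delete gluster-heketi --zone us-east1-b

# Then delete the attached disks, which outlive the cluster.
gcloud compute disks list --format="value(name)" \
    --filter="name~gluster-heketi" | while read d; do
  gcloud compute disks delete "$d" --zone us-east1-b --quiet
done
```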
Heketi is a simple and convenient way to create and manage Gluster volumes. It pairs very well with Kubernetes storage classes to provide on-demand volume creation, which can be combined with other Gluster features such as replication and striping to handle many complex storage use cases.
Published at DZone with permission of Harshal Shah, DZone MVB. See the original article here.