Dynamically Provisioning "Hostpath"-Based Volumes on OpenShift
Want to dynamically provision Kubernetes or OpenShift persistent volumes from a local source? A Kubernetes Incubator project has the solution.
Storage is a critical piece of any Kubernetes/OpenShift deployment for applications that need to store persistent data; a good example is "stateful" applications deployed using Stateful Sets (previously known as Pet Sets).
To make that work, the cluster admin manually provisions one or more persistent volumes, and applications use persistent volume claims to access them (read/write). Starting with the 1.2 release (as alpha), Kubernetes offers dynamic provisioning, which avoids pre-provisioning by the cluster admin and auto-provisions persistent volumes when users request them. As of the current 1.6 release, this feature is considered stable (you can read more about that here).
As described in the above link, a provisioner is able to provision persistent volumes as requested by users through a specified storage class. In general, each cloud provider (Amazon Web Services, Microsoft Azure, Google Cloud Platform, …) offers some default provisioners, but for a local deployment on a single-node cluster (i.e. for development purposes) there is no default provisioner for using a "hostpath" (providing a persistent volume through a local directory on the host).
There is a project (in the "Kubernetes Incubator") that provides a library for developing custom external provisioners, and one of its examples is just what we're looking for: a provisioner that uses a local directory on the host for persistent volumes. It's the hostpath-provisioner.
In this article, I'll explain the steps needed to get the "hostpath provisioner" working on an OpenShift cluster and what I learned along the way. My intention is to provide a single guide gathering information from various sources, such as the official repository.
First of all, I didn't have Go on my Fedora 24 machine, and the first thing to know is that version 1.7 (or above) is required, because the provisioner depends on the "context" package added in that release. I started by installing the default Go version provided by the Fedora 24 repositories (1.6.5), but I received the following error when trying to build the provisioner:
vendor/k8s.io/client-go/rest/request.go:21:2: cannot find package "context" in any of:
    /home/ppatiern/go/src/hostpath-provisioner/vendor/context (vendor tree)
    /usr/lib/golang/src/context (from $GOROOT)
    /home/ppatiern/go/src/context (from $GOPATH)
In order to install Go 1.7 manually, after downloading the TAR file from the website, you can extract it like so:
tar -zxvf go1.7.5.linux-amd64.tar.gz -C /usr/local
After that, two main environment variables are needed for the Go compiler and runtime to work well.
- GOROOT: the directory where Go is just installed (i.e. /usr/local/go)
- GOPATH: the Go workspace directory (inside which we need to create two other directories, src and bin)
By modifying the .bashrc (or the .bash_profile) file, we can export such environment variables:
export GOPATH=$HOME/go PATH=$PATH:$GOPATH/bin
export GOROOT=/usr/local/go PATH=$PATH:$GOROOT/bin
Having $GOPATH/bin in the PATH is needed, as we'll see in the next step.
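As a quick sanity check, the whole toolchain setup can be sketched like this (a minimal sketch assuming Go was extracted to /usr/local/go as above; the exact `go version` output depends on the tarball you actually installed):

```shell
# Assumes Go 1.7.x was extracted to /usr/local/go as shown earlier
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$PATH:$GOROOT/bin:$GOPATH/bin

# Create the workspace directories the Go tooling expects
mkdir -p "$GOPATH/src" "$GOPATH/bin"

# Confirm the toolchain on the PATH is the freshly installed one (1.7+)
if command -v go >/dev/null 2>&1; then
  go version
fi
```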
The provisioner project we want to build has some Go dependencies, and Glide is used as the dependency manager.
It can be installed like so:
curl https://glide.sh/get | sh
This command downloads the needed files and builds the Glide binary, copying it into the $GOPATH/bin directory (which is why we added that directory to the PATH: so we can run glide from the command line).
Building the “hostpath-provisioner”
First of all, we need to clone the GitHub repository from here and then launch the make command from the docs/demo/hostpath-provisioner directory.
The Makefile has the following steps:
- Using Glide in order to download all the needed dependencies.
- Compiling the hostpath-provisioner application.
- Building a Docker image that contains the above application.
This provisioner is meant to be deployed in the cluster itself, where it provides dynamic provisioning to the other pods/containers that need persistent volumes created on the fly.
Deploying the “hostpath-provisioner”
This provisioner uses a directory on the host for persistent volumes. The name of the root folder is hardcoded in the implementation: /tmp/hostpath-provisioner. Every time an application claims a persistent volume, a new child directory is created under this root.
This root folder needs to be created with read and write access:
mkdir -p /tmp/hostpath-provisioner
chmod 777 /tmp/hostpath-provisioner
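To make the behavior concrete, here is a plain-shell sketch of what the provisioner effectively does for each claim: it creates a per-volume child directory, named pvc-&lt;uid&gt;, under the hardcoded root (the UID below is just an illustrative example):

```shell
ROOT=/tmp/hostpath-provisioner

# The provisioner names each volume pvc-<uid>; this UID is illustrative.
PV_NAME=pvc-1c565a55-1935-11e7-b98c-54ee758f9350

# What provisioning a claim boils down to on the host:
mkdir -p "$ROOT/$PV_NAME"
ls "$ROOT"
```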
In order to run the “hostpath-provisioner” in a cluster with RBAC (Role Based Access Control) enabled or on OpenShift, you must authorize the provisioner.
First of all, create a ServiceAccount resource:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: hostpath-provisioner
Then a ClusterRole:
kind: ClusterRole
apiVersion: v1
metadata:
  name: hostpath-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
The ClusterRole is needed because the controller requires authorization to perform the above API calls (i.e. listing, watching, creating, and deleting persistent volumes, and so on).
Let's create a sample project for that, save the above resources in two different files (i.e. serviceaccount.yaml and openshift-clusterrole.yaml), and, finally, create these resources:
oc new-project test-provisioner
oc create -f serviceaccount.yaml
oc create -f openshift-clusterrole.yaml
Finally, we need to provide such authorization:
oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:test-provisioner:hostpath-provisioner
oc adm policy add-cluster-role-to-user hostpath-provisioner-runner system:serviceaccount:test-provisioner:hostpath-provisioner
The "hostpath-provisioner" example provides a pod.yaml file, which describes the Pod deployment for running the provisioner in the cluster. Before creating the Pod, we need to modify this file, setting the spec.serviceAccount property to just "hostpath-provisioner" (as described in the file).
kind: Pod
apiVersion: v1
metadata:
  name: hostpath-provisioner
spec:
  containers:
    - name: hostpath-provisioner
      image: hostpath-provisioner:latest
      imagePullPolicy: "IfNotPresent"
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumeMounts:
        - name: pv-volume
          mountPath: /tmp/hostpath-provisioner
  serviceAccount: hostpath-provisioner
  volumes:
    - name: pv-volume
      hostPath:
        path: /tmp/hostpath-provisioner
Last steps: create the Pod and then the StorageClass and the PersistentVolumeClaim, using the provided class.yaml and claim.yaml files.
oc create -f pod.yaml
oc create -f class.yaml
oc create -f claim.yaml
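For reference, the StorageClass and PersistentVolumeClaim look roughly like the sketch below. Treat the resource names (example-hostpath, hostpath-pvc), the provisioner string, and the annotation as assumptions based on the Kubernetes 1.6-era external-provisioner conventions, not as a verbatim copy of the repository files:

```yaml
# class.yaml (sketch): a StorageClass pointing at the external provisioner.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-hostpath            # assumed name
provisioner: example.com/hostpath   # must match the name the provisioner registers
---
# claim.yaml (sketch): a PVC requesting a volume from that class.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: hostpath-pvc                # assumed name
  annotations:
    volume.beta.kubernetes.io/storage-class: example-hostpath
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Mi
```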
And now we have a “hostpath-provisioner” deployed in the cluster, ready to provision persistent volumes as requested by the other applications running in the same cluster.
See the Provisioner Working
To check that the provisioner is really working, there is a test-pod.yaml file in the project that starts a Pod which claims a persistent volume in order to create a SUCCESS file inside it.
After starting the Pod...
oc create -f test-pod.yaml
...we should see a SUCCESS file inside a child directory with a very long name inside the root /tmp/hostpath-provisioner.
ls /tmp/hostpath-provisioner/pvc-1c565a55-11e7-b98c-54ee758f9350/
SUCCESS
This means that the provisioner handled the claim request correctly, providing a volume that the test-pod could write the file to.
Published at DZone with permission of Paolo Patierno, DZone MVB. See the original article here.