Dynamic NFS Provisioning in Red Hat OpenShift
This article explains how to set up an NFS client provisioner in Red Hat OpenShift Container Platform by setting up the NFS server in Red Hat Enterprise Linux.
When deploying Kubernetes, one of the most common requirements is persistent storage. For stateful applications such as databases, persistent storage is a must-have. The solution is to mount external volumes inside the containers. In public cloud deployments, Kubernetes integrates with the cloud providers' block-storage backends, allowing developers to create claims for volumes to use with their deployments; Kubernetes then works with the cloud provider to create a volume and mount it inside the developers' pods. There are several options for replicating this behavior on-premises. However, one of the simplest is to set up an NFS server on a Linux machine and provide its backing storage to an NFS client provisioner running within the Kubernetes cluster.
Note: This setup does not address a fully secure configuration and does not provide high availability for persistent volumes. Therefore, it must not be adopted in a production environment.
In the tutorial below, I’ll explain how to set up an NFS client provisioner in the Red Hat OpenShift Container Platform by setting up the NFS server in Red Hat Enterprise Linux.
First, let's install the NFS server on the host machine and create a directory from which our NFS server will serve files:
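The commands below are one way to do this on RHEL; the package and service names (`nfs-utils`, `nfs-server`) are the standard RHEL 8 ones, and the export directory `/nfs-share` is an example path chosen for this walkthrough.

```shell
# Install the NFS server packages and create the directory to export.
sudo yum install -y nfs-utils
sudo mkdir -p /nfs-share
# World-writable is acceptable only for this demo setup.
sudo chmod -R 777 /nfs-share
# Start the NFS service and enable it at boot.
sudo systemctl enable --now nfs-server
```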
Export the directory created earlier.
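A minimal export entry might look like the following; the subnet `192.168.1.0/24` is a placeholder for your cluster's network, and `/nfs-share` is the example directory created earlier.

```shell
# Add an export entry and reload the NFS export table.
echo '/nfs-share 192.168.1.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
sudo exportfs -rav
# Confirm the export is visible.
showmount -e localhost
```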
Now, the service account must be set up using a YAML file in the OpenShift environment; it creates the service account along with the roles and role bindings the provisioner needs within the Kubernetes cluster, as shown below.
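A sketch of such a manifest is shown below, following the RBAC layout commonly used for the NFS client provisioner; the file name `rbac.yaml`, the account name `nfs-client-provisioner`, and the use of the `default` namespace are assumptions for this walkthrough.

```yaml
# rbac.yaml -- service account plus the cluster- and namespace-scoped
# permissions the NFS client provisioner needs.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
```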
Deploy the service account by running the command below.
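Assuming the RBAC manifest was saved as `rbac.yaml` in the `default` namespace, the deployment might look like this; `hostmount-anyuid` is a security context constraint that ships with OpenShift and allows the provisioner to mount the NFS volume.

```shell
# Create the service account, roles, and bindings.
oc create -f rbac.yaml
# OpenShift only: permit the provisioner's service account to mount NFS volumes.
oc adm policy add-scc-to-user hostmount-anyuid \
    system:serviceaccount:default:nfs-client-provisioner
```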
Create an NFS storage class using the nfs.yaml file below.
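A minimal storage class manifest is sketched below; the class name `nfs` and the provisioner identifier `example.com/nfs` are assumptions, and the provisioner value must match the `PROVISIONER_NAME` environment variable set later in the provisioner pod.

```yaml
# nfs.yaml -- storage class backed by the NFS client provisioner.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: example.com/nfs   # must match PROVISIONER_NAME in the provisioner pod
parameters:
  archiveOnDelete: "false"     # delete volume contents when the PVC is removed
```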
Now, create the storage class from the nfs.yaml file.
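Assuming the manifest above was saved as `nfs.yaml`:

```shell
oc create -f nfs.yaml
# Confirm the new class is listed.
oc get storageclass
```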
Now, create a pod for the NFS client provisioner using the YAML file below.
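A sketch of the provisioner deployment follows; the image `quay.io/external_storage/nfs-client-provisioner:latest`, the server address `192.168.1.100`, the export path `/nfs-share`, and the provisioner name `example.com/nfs` are all assumptions you should replace with your own values.

```yaml
# deployment.yaml -- runs the NFS client provisioner against the NFS export.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: example.com/nfs      # must match the storage class provisioner
            - name: NFS_SERVER
              value: 192.168.1.100        # placeholder: your NFS server's IP
            - name: NFS_PATH
              value: /nfs-share           # placeholder: your exported directory
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.1.100
            path: /nfs-share
```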
Deploy the NFS client provisioner pod.
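Assuming the manifest was saved as `deployment.yaml`:

```shell
oc create -f deployment.yaml
```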
You can verify that the pod is in the Running state either via the CLI or in the GUI, as shown below.
Run the following command to verify that the pod has been created with the proper configuration.
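For example, assuming the deployment carries the label `app=nfs-client-provisioner` as in the sketch above:

```shell
# Check that the provisioner pod is Running.
oc get pods
# Inspect its configuration (image, environment variables, mounted NFS volume).
oc describe pod -l app=nfs-client-provisioner
```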
Now, let's test our setup by provisioning an Nginx container that requests a persistent volume claim and mounts it inside the container.
Create a persistent volume claim using the following YAML file.
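A minimal claim might look like the following; the claim name `nfs-pvc-test` comes from the article, while the storage class name `nfs` and the 1Mi request size are assumptions for this demo.

```yaml
# nfs-pvc-test.yaml -- claim against the NFS-backed storage class.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: nfs
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
```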
Apply the YAML file.
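Assuming the claim was saved as `nfs-pvc-test.yaml`:

```shell
oc create -f nfs-pvc-test.yaml
```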
Verify it in the GUI or by running the following.
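For example:

```shell
# The claim should show STATUS Bound once the provisioner creates its volume.
oc get pvc nfs-pvc-test
# The dynamically provisioned persistent volume should also be listed.
oc get pv
```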
We can verify this persistent volume on the machine where the NFS server is configured, as shown below.
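On the NFS server, the provisioner creates one subdirectory per claim under the export (named from the namespace, claim, and volume); assuming the example export path `/nfs-share`:

```shell
# Each dynamically provisioned volume appears as a subdirectory of the export.
ls -l /nfs-share
```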
Now, create an Nginx pod, specifying the claim name (nfs-pvc-test in this case) in the YAML file as shown below.
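A sketch of such a pod follows; the pod name `nginx-test` and the mount path are assumptions. Note that on OpenShift the stock `nginx` image may be blocked by the default restricted SCC, so you may need to grant `anyuid` to the namespace's default service account or use an unprivileged Nginx image.

```yaml
# nginx-pod.yaml -- mounts the NFS-backed claim into the web root.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-test
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: nfs-vol
          mountPath: /usr/share/nginx/html
  volumes:
    - name: nfs-vol
      persistentVolumeClaim:
        claimName: nfs-pvc-test   # the claim created earlier
```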
Create the pod.
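Assuming the manifest was saved as `nginx-pod.yaml`:

```shell
oc create -f nginx-pod.yaml
oc get pod nginx-test
```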
Now, create a text file inside the pod and verify that it exists inside the /nfs-share directory on the NFS server.
Verify it in the NFS server.
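One way to do both steps, assuming the pod name `nginx-test`, the mount path from the sketch above, and the example export `/nfs-share`:

```shell
# Inside the pod: write a test file to the mounted volume.
oc exec nginx-test -- sh -c 'echo "Hello from OpenShift" > /usr/share/nginx/html/test.txt'

# On the NFS server: the file appears under the claim's subdirectory.
ls -lR /nfs-share
```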
As you can see, the file is replicated on the NFS server. Thanks for reading! Post any comments in the comments section below.