Dynamic NFS Provisioning in Red Hat OpenShift

This article explains how to set up an NFS client provisioner in Red Hat OpenShift Container Platform by setting up the NFS server in Red Hat Enterprise Linux.

By Muhammad Afzal · Dec. 26, 2020 · Tutorial



When deploying Kubernetes, one of the most common requirements is persistent storage. For stateful applications such as databases, persistent storage is a must-have, and the usual solution is to mount external volumes inside the containers. In public cloud deployments, Kubernetes integrates with the cloud providers’ block-storage backends, allowing developers to create claims for volumes to use with their deployments; Kubernetes works with the cloud provider to create a volume and mount it inside the developers’ pods. Several options are available to replicate the same behavior on premises. One of the simplest is to set up an NFS server on a Linux machine and provide that back-end storage to an NFS client provisioner running inside the Kubernetes cluster.

Note: This setup does not address a fully secure configuration and does not provide high availability for persistent volumes. Therefore, it must not be used in a production environment.

In the tutorial below, I’ll explain how to set up an NFS client provisioner in the Red Hat OpenShift Container Platform by setting up the NFS server in Red Hat Enterprise Linux.

First, let's install and enable the NFS server on the host machine:

Shell

# yum install -y nfs-utils
# systemctl enable rpcbind
# systemctl enable nfs-server
# systemctl start rpcbind
# systemctl start nfs-server



Next, create the directory that the NFS server will export, and mount the backing block device on it:
Shell

[root@bastion ~]# mkdir -p /nfs-share
[root@bastion ~]# mount -t xfs -o inode64,noatime /dev/sdb /nfs-share
[root@bastion ~]# df -h /nfs-share
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb        1.7T  104M  1.7T   1% /nfs-share
[root@bastion ~]# chmod -R 777 /nfs-share   # keeps troubleshooting simple, but not recommended for a production setup



Export the directory created earlier.

Shell

[root@bastion ~]# cat /etc/exports
/nfs-share  *(rw,sync,no_subtree_check,no_root_squash,insecure)
[root@bastion ~]# sudo exportfs -rv
exporting *:/nfs-share
[root@bastion ~]# showmount -e
Export list for bastion.ocp4.sjc02.lab.cisco.com:
/nfs-share *
[root@bastion ~]#
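Before wiring this into OpenShift, it can be worth confirming that the export is actually mountable over the network. A minimal sketch, assuming a client machine that has nfs-utils installed and can reach the NFS server at 10.16.1.150 (the server IP used later in this tutorial); the /mnt/nfs-test mount point is purely illustrative:

Shell

# Mount the export, write a test file, then clean up (illustrative client-side check).
# mkdir -p /mnt/nfs-test
# mount -t nfs 10.16.1.150:/nfs-share /mnt/nfs-test
# touch /mnt/nfs-test/test-file && ls /mnt/nfs-test
# umount /mnt/nfs-test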



Next, set up the service account using a YAML file in the OpenShift environment; the rbac.yaml file below creates the service account along with the cluster role, cluster role binding, role, and role binding that the provisioner needs.

YAML

[root@bastion ~]# cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-pod-provisioner-sa
---
kind: ClusterRole # cluster-wide role for the provisioner
apiVersion: rbac.authorization.k8s.io/v1 # RBAC auth API
metadata:
  name: nfs-provisioner-clusterRole
rules:
  - apiGroups: [""] # rules on persistentvolumes
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-rolebinding
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa # defined at the top of this file
    namespace: default
roleRef: # bind the cluster role to the service account
  kind: ClusterRole
  name: nfs-provisioner-clusterRole # name defined in the ClusterRole above
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-pod-provisioner-otherRoles
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-pod-provisioner-otherRoles
subjects:
  - kind: ServiceAccount
    name: nfs-pod-provisioner-sa # same service account as above
    # replace with the namespace where the provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: nfs-pod-provisioner-otherRoles
  apiGroup: rbac.authorization.k8s.io
[root@bastion ~]#



Deploy the service account and RBAC objects by running the command below.

Shell

[root@bastion ~]# oc apply -f rbac.yaml
serviceaccount/nfs-pod-provisioner-sa created
clusterrole.rbac.authorization.k8s.io/nfs-provisioner-clusterRole created
clusterrolebinding.rbac.authorization.k8s.io/nfs-provisioner-rolebinding created
role.rbac.authorization.k8s.io/nfs-pod-provisioner-otherRoles created
rolebinding.rbac.authorization.k8s.io/nfs-pod-provisioner-otherRoles created
[root@bastion ~]# oc get clusterrole,role
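The role bindings above reference namespace: default, so the service account, and later the provisioner deployment, are assumed to live in the default project. If you are working in a different project, either adjust the namespace fields or switch projects before applying; a quick check of this assumption (not part of the original walkthrough) looks like this:

Shell

# Make sure oc is pointed at the namespace referenced in rbac.yaml (default here).
[root@bastion ~]# oc project default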



Create a storage class named nfs using the nfs.yaml file below.

YAML

[root@bastion ~]# cat nfs.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs # PVCs reference this name
provisioner: nfs-test # any name of your choice; must match PROVISIONER_NAME in the provisioner deployment
parameters:
  archiveOnDelete: "false"
[root@bastion ~]#



Now, create the storage class from the nfs.yaml file.

Shell

[root@bastion ~]# oc create -f nfs.yaml

You can verify it by running the following command or in the OpenShift console.

Shell

[root@bastion ~]# oc get storageclass | grep nfs
nfs                              nfs-test                        5h30m
[root@bastion ~]#



Red Hat Storage Classes.
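Optionally, and this is a suggestion of mine rather than a step from the original walkthrough, you can mark this class as the cluster default so that PVCs that omit storageClassName also land on NFS:

Shell

# Mark the nfs storage class as the default (optional).
[root@bastion ~]# oc patch storageclass nfs -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'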

Now, deploy a pod for the NFS client provisioner using the deployment YAML below.

YAML

[root@bastion ~]# cat nfs_pod_provisioner.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-pod-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-pod-provisioner
    spec:
      serviceAccountName: nfs-pod-provisioner-sa # name of the service account created in rbac.yaml
      containers:
        - name: nfs-pod-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-provisioner-v
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME # do not change
              value: nfs-test # same as the provisioner name in the StorageClass
            - name: NFS_SERVER # do not change
              value: 10.16.1.150 # IP of the NFS server
            - name: NFS_PATH # do not change
              value: /nfs-share # path to the exported NFS directory
      volumes:
        - name: nfs-provisioner-v # same as the volumeMounts name above
          nfs:
            server: 10.16.1.150
            path: /nfs-share
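One compatibility note from me rather than from the original article: the extensions/v1beta1 Deployment API was removed in Kubernetes 1.16, so recent OpenShift 4.x releases may reject this manifest (and the Nginx manifest later in the tutorial). If that happens, a sketch of the equivalent apps/v1 header is below; the container and volume sections stay exactly as shown above, but apps/v1 additionally requires an explicit selector:

YAML

# Assumed alternative header for newer clusters; everything under template.spec is unchanged.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nfs-pod-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-pod-provisioner # must match the pod template labels below
  template:
    metadata:
      labels:
        app: nfs-pod-provisioner
    spec:
      serviceAccountName: nfs-pod-provisioner-sa
      # ...containers and volumes exactly as in nfs_pod_provisioner.yaml above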

            



Deploy the NFS client provisioner.

Shell

[root@bastion ~]# oc create -f nfs_pod_provisioner.yaml



You can verify that the pod is in the Running state either via the CLI or in the GUI, as shown below.

Shell

[root@bastion ~]# oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
nfs-pod-provisioner-8458c4b4f6-r4cf4   1/1     Running   0          5h27m
[root@bastion ~]#



Verification of POD in running state.
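If the pod does not reach the Running state, or if claims later stay in Pending, the provisioner's own log is usually the first place to look. A quick check (the pod name comes from the oc get pods output above; yours will differ):

Shell

[root@bastion ~]# oc logs nfs-pod-provisioner-8458c4b4f6-r4cf4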

Run the following command to verify that the pod has been created with the proper configuration.

Shell

[root@bastion ~]# oc describe pod nfs-pod-provisioner-8458c4b4f6-r4cf4



Verification that POD has been created correctly.
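In the describe output, the main things to confirm are the three environment variables and the NFS volume mount. A narrower check of the same information, sketched here as a convenience rather than a step from the original article:

Shell

# Show just the container environment from the pod spec (PROVISIONER_NAME, NFS_SERVER, NFS_PATH).
[root@bastion ~]# oc describe pod nfs-pod-provisioner-8458c4b4f6-r4cf4 | grep -A 4 'Environment:'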

Now, test the setup by provisioning an Nginx container that requests a persistent volume claim and mounts it inside the container.

Create a persistent volume claim using the following YAML file.

YAML

[root@bastion ~]# cat nfs_pvc_dynamic.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc-test
spec:
  storageClassName: nfs # same name as the StorageClass
  accessModes:
    - ReadWriteMany # must be the same as the PersistentVolume
  resources:
    requests:
      storage: 50Mi
[root@bastion ~]#



Apply the YAML file.

Shell

oc apply -f nfs_pvc_dynamic.yaml



Verify it in the GUI or by running the following.

Shell

[root@bastion ~]# oc get pvc
NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc-test   Bound    pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a   50Mi       RWX            nfs            5h32m
[root@bastion ~]#



Persistent Volume Claims.
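The provisioner also creates the matching PersistentVolume object automatically; you can list it with the command below (its name corresponds to the VOLUME column in the oc get pvc output above):

Shell

[root@bastion ~]# oc get pv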

We can also verify the directory backing this persistent volume on the server where the NFS server is configured, as shown below.

Shell

[root@bastion ~]# ls /nfs-share
default-nfs-pvc-test-pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a



Now, create an Nginx pod, specifying the claim name (nfs-pvc-test in this case) in the YAML file as shown below.

YAML

[root@bastion ~]# cat nginx_nfs.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nfs-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
      - name: nfs-test
        persistentVolumeClaim:
          claimName: nfs-pvc-test # same name as the PVC created above
      containers:
      - image: nginx
        name: nginx
        volumeMounts:
        - name: nfs-test # must match the volume name above
          mountPath: /mydata # mount path inside the container
[root@bastion ~]#



Create the pod.

Shell

[root@bastion ~]# oc apply -f nginx_nfs.yaml
[root@bastion ~]# oc get pods
NAME                                   READY   STATUS    RESTARTS   AGE
nfs-nginx-6f8d4f7786-9gwks             1/1     Running   0          5h32m
nfs-pod-provisioner-8458c4b4f6-r4cf4   1/1     Running   0          5h43m
[root@bastion ~]#



Creating the POD screen.

Now, create a text file inside the pod and verify that it exists in the /nfs-share directory on the NFS server.

Shell

[root@bastion ~]# oc exec -it nfs-nginx-6f8d4f7786-9gwks bash
root@nfs-nginx-6f8d4f7786-9gwks:/# cd mydata
root@nfs-nginx-6f8d4f7786-9gwks:/mydata# date >> demofile.txt



Verify it on the NFS server.

Shell

[root@bastion ~]# ls /nfs-share/default-nfs-pvc-test-pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a/
demofile.txt
[root@bastion ~]# cat /nfs-share/default-nfs-pvc-test-pvc-6c9c4677-f355-4abe-ace4-81a8546f0d6a/demofile.txt
Sun Dec 13 04:25:45 UTC 2020
[root@bastion ~]#
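When you are done testing, you can remove the Nginx deployment and the claim again; with archiveOnDelete set to "false" in the StorageClass, the provisioner should also clean up the backing directory on the NFS share. The commands below are my suggested teardown, not part of the original walkthrough:

Shell

# Remove the test workload and its claim (the dynamically provisioned PV is deleted with it).
[root@bastion ~]# oc delete -f nginx_nfs.yaml
[root@bastion ~]# oc delete -f nfs_pvc_dynamic.yaml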



As you can see, the file written inside the pod shows up on the NFS server. Thanks for reading! Post any comments in the comments section below.


Opinions expressed by DZone contributors are their own.
