How to run NFS on top of OpenEBS Jiva

Utkarsh Mani Tripathi
4 min read · Oct 8, 2019

OpenEBS doesn’t support ReadWriteMany (RWM) volumes out of the box, but it can serve RWM workloads with the help of the NFS provisioner. The NFS provisioner is simply an application that consumes an OpenEBS volume as its persistent volume and exposes that volume over NFS so that many applications can share it.

In this blog, I would like to discuss the steps to run NFS on top of OpenEBS Jiva for RWM use cases. The blog has three sections: the first covers the installation of Jiva volumes, the second covers the installation of the NFS provisioner, and the last covers provisioning a shared (RWM) PV for a busybox application. A detailed explanation of the NFS provisioner itself is out of the scope of this blog.

Installation of Jiva Volumes

Installing and setting up OpenEBS Jiva volumes is straightforward: first install the control plane components required for provisioning Jiva volumes. After the OpenEBS control plane components are installed, create a StorageClass and a PVC that will be mounted into the NFS provisioner deployment.
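The control-plane installation can be done with kubectl. A minimal sketch is below; the operator manifest URL is the commonly published one for OpenEBS releases of that era, so check the OpenEBS docs for the exact URL matching your version:

```shell
# Install the OpenEBS control plane (operator manifest URL may differ
# per release; consult the OpenEBS documentation for the current one)
kubectl apply -f https://openebs.github.io/charts/openebs-operator.yaml

# Wait until the control-plane pods report Running
kubectl get pods -n openebs

# Create the namespace that will hold the NFS provisioner and its PVC
kubectl create namespace nfs
```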

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: openebs-nfs
  namespace: nfs
spec:
  storageClassName: openebs-jiva-default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: "10G"

The PVC spec above uses the openebs-jiva-default StorageClass, which is created by the control plane; please refer to the OpenEBS documentation for more details and customisation.
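Applying the claim and confirming it binds can be sketched as follows (the filename jiva-pvc.yaml is just an illustrative choice for saving the spec above):

```shell
# Save the PVC spec above as jiva-pvc.yaml, then apply it
kubectl apply -f jiva-pvc.yaml

# The claim should move to Bound once the Jiva volume is provisioned
kubectl get pvc -n nfs openebs-nfs
```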

Installation of NFS Provisioner

There are three steps to install the NFS provisioner, given below; please refer to the NFS Provisioner docs for details.

  • The NFS provisioner requires a Pod Security Policy (PSP) that grants it specific privileges.
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: nfs-provisioner
spec:
  fsGroup:
    rule: RunAsAny
  allowedCapabilities:
    - DAC_READ_SEARCH
    - SYS_RESOURCE
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
    - configMap
    - downwardAPI
    - emptyDir
    - persistentVolumeClaim
    - secret
    - hostPath
  • Set up the ClusterRole, ClusterRoleBinding, Role, and RoleBinding that give the NFS provisioner access to the Kubernetes APIs it needs for volume provisioning.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
  namespace: nfs
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
  - apiGroups: [""]
    resources: ["services", "endpoints"]
    verbs: ["get"]
  - apiGroups: ["extensions"]
    resources: ["podsecuritypolicies"]
    resourceNames: ["nfs-provisioner"]
    verbs: ["use"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: nfs
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
  namespace: nfs
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
  namespace: nfs
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with the namespace where the provisioner is deployed
    namespace: nfs
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
  • Reference the PVC created during the Jiva setup in the NFS provisioner deployment.
kind: Service
apiVersion: v1
metadata:
  name: nfs-provisioner
  namespace: nfs
  labels:
    app: nfs-provisioner
spec:
  ports:
    - name: nfs
      port: 2049
    - name: mountd
      port: 20048
    - name: rpcbind
      port: 111
    - name: rpcbind-udp
      port: 111
      protocol: UDP
  selector:
    app: nfs-provisioner
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-provisioner
  namespace: nfs
spec:
  selector:
    matchLabels:
      app: nfs-provisioner
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccountName: nfs-provisioner
      containers:
        - name: nfs-provisioner
          image: quay.io/kubernetes_incubator/nfs-provisioner:latest
          ports:
            - name: nfs
              containerPort: 2049
            - name: mountd
              containerPort: 20048
            - name: rpcbind
              containerPort: 111
            - name: rpcbind-udp
              containerPort: 111
              protocol: UDP
          securityContext:
            capabilities:
              add:
                - DAC_READ_SEARCH
                - SYS_RESOURCE
          args:
            - "-provisioner=openebs.io/nfs"
          env:
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: SERVICE_NAME
              value: nfs-provisioner
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          imagePullPolicy: "IfNotPresent"
          volumeMounts:
            - name: export-volume
              mountPath: /export
      volumes:
        - name: export-volume
          persistentVolumeClaim:
            claimName: openebs-nfs
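The three steps above can be applied and verified as sketched below; the manifest filenames are illustrative, assuming you saved the PSP, RBAC, and Service/Deployment specs to separate files:

```shell
# Apply the PSP, RBAC, Service, and Deployment manifests shown above
kubectl apply -f nfs-psp.yaml -f nfs-rbac.yaml -f nfs-provisioner.yaml

# The provisioner pod should be Running, with the Jiva PVC mounted at /export
kubectl get pods -n nfs -l app=nfs-provisioner
kubectl get svc -n nfs nfs-provisioner
```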

Installation of application

Deploying an application requires a StorageClass pointing to the provisioner openebs.io/nfs, so that the NFS provisioner dynamically provisions RWM volumes.

  • Create a StorageClass and PVC that point to the openebs.io/nfs provisioner so applications can claim a PVC with the RWM access mode.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-nfs
provisioner: openebs.io/nfs
mountOptions:
  - vers=4.1
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs
  annotations:
    volume.beta.kubernetes.io/storage-class: "openebs-nfs"
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  • Deploy two busybox pods, one for writing data and one for reading it.
kind: Pod
apiVersion: v1
metadata:
  name: write-pod
spec:
  containers:
    - name: write-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "touch /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs
---
kind: Pod
apiVersion: v1
metadata:
  name: read-pod
spec:
  containers:
    - name: read-pod
      image: gcr.io/google_containers/busybox:1.24
      command:
        - "/bin/sh"
      args:
        - "-c"
        - "test -f /mnt/SUCCESS && exit 0 || exit 1"
      volumeMounts:
        - name: nfs-pvc
          mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs

As you can see, both busybox pods use the same claim name, nfs, to access the volume in RWM mode. write-pod creates a file /mnt/SUCCESS, whereas read-pod checks whether /mnt/SUCCESS exists. After a successful run, both pods reach the Completed state.
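The result can be checked with kubectl; a quick sketch, assuming the pods were created in the default namespace:

```shell
# Both pods should reach the Completed state
kubectl get pods write-pod read-pod

# The shared claim is backed by a PV dynamically created by the NFS provisioner
kubectl get pvc nfs
kubectl get pv
```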

Note: Although Jiva volumes can be configured for HA, the NFS provisioner itself is a single point of failure: if it goes down, applications may fail to mount their volumes. The availability of the shared PVC therefore depends on the availability of the NFS provisioner pod, even when the Jiva replica and controller pods are healthy.

OpenEBS Jiva volumes can serve RWM use cases with the help of the NFS provisioner. You can deploy any stateful workload that requires the RWM access mode using the approach above. Feel free to comment on this blog or reach out to us on our Slack channel.

That’s all folks! 😊

~ उत्कर्ष_उवाच
