The Container Storage Interface (CSI) is a standard for exposing arbitrary block and file storage systems to containerized workloads on Container Orchestration Systems (COs) like Kubernetes. Using CSI, third-party storage providers can write and deploy plugins exposing storage systems in Kubernetes without ever having to touch the core Kubernetes code.
The Ceph CSI plugins are one example: they implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster, allowing Ceph volumes to be dynamically provisioned and attached to workloads. The current implementation of the Ceph CSI plugins was tested in a Kubernetes environment (it requires Kubernetes 1.13+), but the code does not rely on any Kubernetes-specific calls and should be able to run with any CSI-enabled CO.
Below is a simple demonstration of how to enable the Ceph RBD CSI drivers on a Kubernetes cluster. However, before we begin, let's ensure that we have the following requirements already in place (a quick way to check the first two is shown after the list):
- Kubernetes cluster v1.13+
- The --allow-privileged flag enabled for both the kubelet and the API server
- A Ceph cluster deployed with Rook
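A quick sanity check for the first two requirements might look like the following. The kube-apiserver manifest path is an assumption that holds for kubeadm-based clusters, so adjust it for your distribution:

# kubectl version --short
# grep allow-privileged /etc/kubernetes/manifests/kube-apiserver.yaml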
Before we start, let's confirm we have a Rook Ceph cluster running in our environment:
# kubectl get pods -n rook-ceph
NAME                                      READY   STATUS      RESTARTS   AGE
rook-ceph-mgr-a-5dbb44d7f8-78mmc          1/1     Running     2          18h
rook-ceph-mon-a-64c8d5644-qpjtf           1/1     Running     0          46h
rook-ceph-mon-b-5678cb65c7-gzcc8          1/1     Running     0          18h
rook-ceph-mon-c-799f887c56-b9fxg          1/1     Running     0          78m
rook-ceph-osd-0-5ff6f7bb5c-bc5rp          1/1     Running     0          46h
rook-ceph-osd-1-5f7c4bb454-ngsfq          1/1     Running     0          18h
rook-ceph-osd-2-7885996ffc-wnjsw          1/1     Running     0          78m
rook-ceph-osd-prepare-kube-master-grrdp   0/2     Completed   0          38m
rook-ceph-osd-prepare-kube-node1-vgdwl    0/2     Completed   0          38m
rook-ceph-osd-prepare-kube-node2-f2gq9    0/2     Completed   0          38m
First, let's clone the Ceph CSI repo and change into the directory we will work from:
# git clone https://github.com/ceph/ceph-csi.git
Cloning into 'ceph-csi'...
remote: Enumerating objects: 14, done.
remote: Counting objects: 100% (14/14), done.
remote: Compressing objects: 100% (14/14), done.
remote: Total 50633 (delta 3), reused 4 (delta 0), pack-reused 50619
Receiving objects: 100% (50633/50633), 68.56 MiB | 9.67 MiB/s, done.
Resolving deltas: 100% (27537/27537), done.
# cd ceph-csi/deploy/rbd/kubernetes/
Next, let's create the CSI attacher role:
# kubectl create -f csi-attacher-rbac.yaml
serviceaccount/rbd-csi-attacher created
clusterrole.rbac.authorization.k8s.io/rbd-external-attacher-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-attacher-role created
Next we will create the CSI RBD attacher plugin:
# kubectl create -f csi-rbdplugin-attacher.yaml
service/csi-rbdplugin-attacher created
statefulset.apps/csi-rbdplugin-attacher created
Follow that up with creating the CSI RBD provisioner plugin:
# kubectl create -f csi-rbdplugin-provisioner.yaml
service/csi-rbdplugin-provisioner created
statefulset.apps/csi-rbdplugin-provisioner created
And finally we will create the CSI daemonset for the RBD plugin:
# kubectl create -f csi-rbdplugin.yaml
daemonset.apps/csi-rbdplugin created
At this point we will need to apply a few more role-based access control (RBAC) permissions for both the node plugin and the provisioner:
# kubectl apply -f csi-nodeplugin-rbac.yaml
serviceaccount/rbd-csi-nodeplugin created
clusterrole.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-nodeplugin created
# kubectl apply -f csi-provisioner-rbac.yaml
serviceaccount/rbd-csi-provisioner created
clusterrole.rbac.authorization.k8s.io/rbd-external-provisioner-runner created
clusterrolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role created
role.rbac.authorization.k8s.io/rbd-external-provisioner-cfg created
rolebinding.rbac.authorization.k8s.io/rbd-csi-provisioner-role-cfg created
Now let's confirm our resources are up and operational:
# kubectl get po
NAME                          READY   STATUS    RESTARTS   AGE
csi-rbdplugin-6xlml           2/2     Running   0          36s
csi-rbdplugin-attacher-0      1/1     Running   2          5m56s
csi-rbdplugin-n98ms           2/2     Running   0          36s
csi-rbdplugin-ngrtv           2/2     Running   0          36s
csi-rbdplugin-provisioner-0   3/3     Running   0          23s
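If you would rather check the controllers than the individual pods, the DaemonSet and StatefulSets created above can be queried directly; the desired and ready counts of csi-rbdplugin should match the number of schedulable nodes in the cluster:

# kubectl get daemonset csi-rbdplugin
# kubectl get statefulset csi-rbdplugin-attacher csi-rbdplugin-provisioner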
If everything looks good from the previous command, let's change into the examples directory and get the storageclass working against Ceph. First, however, we will need to gather a few details to ensure it works properly.
# cd ../../../examples/rbd
The storage class will require us to know the IP addresses of the Ceph MONs, which RBD pool we will use, and of course a Ceph auth key. I am going to use the Ceph toolbox to gather that information.
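If the rook-ceph-tools pod is not already running in your cluster, it can be deployed from the Rook repository first. The manifest path below is an assumption based on the Rook 1.x example manifests, so adjust it for your Rook version:

# kubectl create -f cluster/examples/kubernetes/ceph/toolbox.yaml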
# kubectl exec -it rook-ceph-tools -n rook-ceph /bin/bash
[root@rook-ceph-tools /]# ceph mon stat
e3: 3 mons at {a=10.0.0.81:6790/0,b=10.0.0.82:6790/0,c=10.0.0.83:6790/0}, election epoch 26, leader 0 a, quorum 0,1,2 a,b,c
[root@rook-ceph-tools /]# ceph osd lspools
1 rbd
[root@rook-ceph-tools /]# ceph auth get-key client.admin|base64
QVFDTDliVmNEb21IRHhBQUxXNGhmRkczTFNtcXM0ZW5VaXlTZEE9PQ==
We can take the base64-encoded client admin key from the toolbox output and populate it in our secret.yaml file:
---
apiVersion: v1
kind: Secret
metadata:
  name: csi-rbd-secret
  namespace: default
data:
  admin: QVFDTDliVmNEb21IRHhBQUxXNGhmRkczTFNtcXM0ZW5VaXlTZEE9PQ==
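As an alternative to editing secret.yaml by hand, the same secret could be created in one step from the toolbox; this is just a sketch that reuses the rook-ceph-tools pod from earlier. Note that kubectl handles the base64 encoding itself, so the raw key (not the base64 output above) is what gets passed in:

# kubectl create secret generic csi-rbd-secret --namespace default \
    --from-literal=admin="$(kubectl -n rook-ceph exec rook-ceph-tools -- ceph auth get-key client.admin)"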
We can also add the MON addresses and pool name to the storageclass.yaml:
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: csi-rbd
provisioner: rbd.csi.ceph.com
parameters:
  monitors: 10.0.0.81:6790,10.0.0.82:6790,10.0.0.83:6790
  pool: rbd
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: csi-rbd-secret
  csi.storage.k8s.io/provisioner-secret-namespace: default
  csi.storage.k8s.io/node-publish-secret-name: csi-rbd-secret
  csi.storage.k8s.io/node-publish-secret-namespace: default
  adminid: admin
reclaimPolicy: Delete
Now that we have our files populated, let's go ahead and create both resources and validate:
# kubectl create -f secret.yaml
secret/csi-rbd-secret created
# kubectl create -f storageclass.yaml
storageclass.storage.k8s.io/csi-rbd created
# kubectl get storageclass
NAME      PROVISIONER        AGE
csi-rbd   rbd.csi.ceph.com   11s
Now that we have completed configuring the Ceph CSI driver and its storageclass, let's try to provision some storage and attach it to a demo pod. The first thing we need to do is create a block PVC, so let's populate raw-block-pvc.yaml with the following:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 1Gi
  storageClassName: csi-rbd
Let's go ahead and create the PVC:
# kubectl create -f raw-block-pvc.yaml
persistentvolumeclaim/raw-block-pvc created
# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
raw-block-pvc   Bound    pvc-fd66b4d6-757d-11e9-8f9e-2a86e4085a59   1Gi        RWX            csi-rbd        3s
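Behind the scenes the provisioner should have created an RBD image in the pool we specified. From the toolbox we can list the images in the rbd pool and confirm one now backs the new PV; the exact image name is generated by the driver, so it will differ from the PVC name:

# kubectl -n rook-ceph exec -it rook-ceph-tools -- rbd ls -p rbd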
Now let's create an application to consume the PVC by first creating a pod template, raw-block-pod.yaml, that references it:
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-raw-block-volume
spec:
  containers:
    - name: fc-container
      image: fedora:26
      command: ["/bin/sh", "-c"]
      args: ["tail -f /dev/null"]
      volumeDevices:
        - name: data
          devicePath: /dev/xvda
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: raw-block-pvc
Now that we have a template, we can go ahead and create the application pod, and if all goes well it will be up and running:
# kubectl create -f raw-block-pod.yaml
pod/pod-with-raw-block-volume created
# kubectl get pod pod-with-raw-block-volume
NAME                        READY   STATUS    RESTARTS   AGE
pod-with-raw-block-volume   1/1     Running   0          1m
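Because we requested a raw block volume, the RBD image is presented inside the container as a device node at the devicePath we set, rather than as a mounted filesystem. A quick way to confirm that:

# kubectl exec -it pod-with-raw-block-volume -- ls -l /dev/xvda

The leading b in the listing indicates a block device.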
Hopefully this provides an example of how to get the Ceph CSI drivers up and running in Kubernetes.
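If you want to tear the demo back down afterwards, the objects we created can be deleted in reverse order using the same manifests:

# kubectl delete -f raw-block-pod.yaml
# kubectl delete -f raw-block-pvc.yaml
# kubectl delete -f storageclass.yaml
# kubectl delete -f secret.yaml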