This blog assumes an already installed OpenShift 4.4 environment with three masters and three workers. The workers are where we will run our OCS storage pods, so each of those nodes should have at least one extra unused disk for the OSDs to use. Further, installing OCS requires that local storage PVs already exist for OCS to consume during the deployment. With that said, let's get started.
First, let's take a quick look at our environment by listing the nodes and inspecting one of the workers to see the extra disk (vdb in this case):
$ export KUBECONFIG=/home/cloud-user/scripts/ocp/auth/kubeconfig
$ oc get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-0   Ready    master   59m    v1.18.2
master-1   Ready    master   51m    v1.18.2
master-2   Ready    master   60m    v1.18.2
worker-0   Ready    worker   4m9s   v1.18.2
worker-1   Ready    worker   27m    v1.18.2
worker-2   Ready    worker   27m    v1.18.2

$ oc debug node/worker-0
Starting pod/worker-0-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.20.0.200
If you don't see a command prompt, try pressing enter.
sh-4.2# chroot /host
sh-4.4# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                          252:0    0  100G  0 disk
|-vda1                       252:1    0  384M  0 part /boot
|-vda2                       252:2    0  127M  0 part /boot/efi
|-vda3                       252:3    0    1M  0 part
|-vda4                       252:4    0 99.4G  0 part
| `-coreos-luks-root-nocrypt 253:0    0 99.4G  0 dm   /sysroot
`-vda5                       252:5    0   65M  0 part
vdb                          252:16   0  100G  0 disk
Now that we have reviewed the environment, let's first label the worker nodes as storage nodes:
$ oc label nodes worker-0 cluster.ocs.openshift.io/openshift-storage=''
node/worker-0 labeled
$ oc label nodes worker-1 cluster.ocs.openshift.io/openshift-storage=''
node/worker-1 labeled
$ oc label nodes worker-2 cluster.ocs.openshift.io/openshift-storage=''
node/worker-2 labeled

$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
NAME       STATUS   ROLES    AGE    VERSION
worker-0   Ready    worker   5m7s   v1.18.2
worker-1   Ready    worker   28m    v1.18.2
worker-2   Ready    worker   28m    v1.18.2
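As a shortcut, if every worker in the cluster should be a storage node, all of them can be labeled in a single command using a label selector (a sketch assuming the standard worker role label):

$ oc label nodes -l node-role.kubernetes.io/worker cluster.ocs.openshift.io/openshift-storage=''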
Next we can proceed to the console and install the Local Storage operator. First we will need to create a local-storage namespace:
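If you prefer to stay on the command line, the namespace can also be created with a single command equivalent to the console step:

$ oc create namespace local-storage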
Once we have created the namespace we can go to OperatorHub and search for the LocalStorage operator:
Click on the LocalStorage operator to get the install button:
Click the install button and a page of options will be presented. Choose the version that matches the OCP cluster version; in our case that is 4.4. Set the namespace to the local-storage namespace we created previously and click Install:
Once the operator successfully installs the following screen will be displayed:
Now that we have the LocalStorage operator installed, let's go back to the command line and create the local storage PVs that will eventually be consumed by OCS. The first step is to create a local-storage.yaml file and populate it with the following:
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
      - matchExpressions:
          - key: cluster.ocs.openshift.io/openshift-storage
            operator: In
            values:
              - ""
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        - /dev/vdb
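Note that /dev/vdb must exist on every node matched by the selector. A quick way to confirm this without an interactive session (a sketch reusing the oc debug pattern from earlier) is:

for node in worker-0 worker-1 worker-2; do
  # lsblk exits non-zero if the device is missing on that host
  oc debug node/$node -- chroot /host lsblk /dev/vdb
done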
Save the file, and then confirm our nodes still carry the OCS storage label we applied earlier:
$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
worker-0
worker-1
worker-2
Next we will create the block PVs using the local-storage.yaml we created:
$ oc create -f local-storage.yaml
localvolume.local.storage.openshift.io/local-block created
Let's validate that the pods and PVs were created:
$ oc -n local-storage get pods
NAME                                      READY   STATUS    RESTARTS   AGE
local-block-local-diskmaker-dxgwj         1/1     Running   0          42s
local-block-local-diskmaker-hrsvj         1/1     Running   0          42s
local-block-local-diskmaker-zsptk         1/1     Running   0          42s
local-block-local-provisioner-pwgfl       1/1     Running   0          42s
local-block-local-provisioner-s56f4       1/1     Running   0          42s
local-block-local-provisioner-wp8bz       1/1     Running   0          42s
local-storage-operator-5c46f48cfc-6ht8r   1/1     Running   0          15m

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-40d06fba   100Gi      RWO            Delete           Available           localblock              30s
local-pv-8aea98b7   100Gi      RWO            Delete           Available           localblock              28s
local-pv-e62c1b44   100Gi      RWO            Delete           Available           localblock              34s

$ oc get sc | grep localblock
localblock   kubernetes.io/no-provisioner   Delete   WaitForFirstConsumer   false   60s
If everything looks good we can now return to the console and install the OCS operator. As with the previous operator, we will search for OpenShift Container Storage and it will show the operator available to us:
Click on the operator and the install button will be displayed:
After clicking the install button the options for the operator will be displayed. Select the options appropriate for the environment; the ones below are the defaults, and I went with those:
After clicking install, the operator will deploy and return to a screen similar to the one below. Notice that the AWS S3 Operator was installed as well, since it is a dependency of the OCS operator.
With the OCS operator installed and the local storage PVs ready, we can now create our OCS cluster. Let's use the cluster wizard within the console. First we need to click on the OpenShift Container Storage operator in the list of installed operators:
Next we will click on the Storage Cluster heading which will display a button to the right for creating the OCS cluster service. Click on this button:
This brings up the following screen, where the nodes carrying the OCS label are already checked and the total raw capacity across those nodes is shown once the localblock storage class is selected:
Click the Create button; the console will show a progress screen as it begins to instantiate the cluster:
If we jump over to the command line we can also see pods are starting to instantiate:
$ oc get pods -n openshift-storage
NAME                                            READY   STATUS              RESTARTS   AGE
aws-s3-provisioner-85b697dd54-zcrv5             1/1     Running             0          12m
csi-cephfsplugin-646jk                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-b6c66                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-ppqdb                          3/3     Running             0          27s
csi-cephfsplugin-provisioner-785b9d8bd5-45cwm   0/5     ContainerCreating   0          26s
csi-cephfsplugin-provisioner-785b9d8bd5-nk9gl   0/5     ContainerCreating   0          26s
csi-cephfsplugin-rfw2m                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-vpzp8                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-xv484                          0/3     ContainerCreating   0          27s
csi-rbdplugin-864jc                             3/3     Running             0          27s
csi-rbdplugin-crgr4                             0/3     ContainerCreating   0          27s
csi-rbdplugin-hwtr4                             0/3     ContainerCreating   0          28s
csi-rbdplugin-provisioner-66c87f8bd4-48q6q      0/5     ContainerCreating   0          27s
csi-rbdplugin-provisioner-66c87f8bd4-js247      0/5     ContainerCreating   0          27s
csi-rbdplugin-s9fr7                             0/3     ContainerCreating   0          27s
csi-rbdplugin-t7tgb                             0/3     ContainerCreating   0          28s
csi-rbdplugin-x25f5                             0/3     ContainerCreating   0          28s
noobaa-operator-6f7d47dff6-dw62k                1/1     Running             0          12m
ocs-operator-6cf4c9fc95-qxllg                   0/1     Running             0          12m
rook-ceph-mon-a-canary-6fb7d8ff5d-bl6kq         1/1     Running             0          9s
rook-ceph-mon-b-canary-864586477c-n6tjs         0/1     ContainerCreating   0          8s
rook-ceph-mon-c-canary-cb7d94fd8-mvvzc          0/1     ContainerCreating   0          3s
rook-ceph-operator-674cfcd899-hpvmr             1/1     Running             0          12m
Once the cluster has installed, the console will display a status similar to the image below:
From the command line we can likewise see that all the pods are up and running:
$ oc get pods -n openshift-storage
NAME                                                              READY   STATUS      RESTARTS   AGE
aws-s3-provisioner-85b697dd54-zcrv5                               1/1     Running     0          22m
csi-cephfsplugin-646jk                                            3/3     Running     0          10m
csi-cephfsplugin-b6c66                                            3/3     Running     0          10m
csi-cephfsplugin-ppqdb                                            3/3     Running     0          10m
csi-cephfsplugin-provisioner-785b9d8bd5-45cwm                     5/5     Running     0          10m
csi-cephfsplugin-provisioner-785b9d8bd5-nk9gl                     5/5     Running     1          10m
csi-cephfsplugin-rfw2m                                            3/3     Running     0          10m
csi-cephfsplugin-vpzp8                                            3/3     Running     0          10m
csi-cephfsplugin-xv484                                            3/3     Running     0          10m
csi-rbdplugin-864jc                                               3/3     Running     0          10m
csi-rbdplugin-crgr4                                               3/3     Running     0          10m
csi-rbdplugin-hwtr4                                               3/3     Running     0          10m
csi-rbdplugin-provisioner-66c87f8bd4-48q6q                        5/5     Running     1          10m
csi-rbdplugin-provisioner-66c87f8bd4-js247                        5/5     Running     0          10m
csi-rbdplugin-s9fr7                                               3/3     Running     0          10m
csi-rbdplugin-t7tgb                                               3/3     Running     0          10m
csi-rbdplugin-x25f5                                               3/3     Running     0          10m
noobaa-core-0                                                     1/1     Running     0          7m15s
noobaa-db-0                                                       1/1     Running     0          7m15s
noobaa-endpoint-7cdcc9bdc6-85s2v                                  1/1     Running     0          5m57s
noobaa-operator-6f7d47dff6-dw62k                                  1/1     Running     0          22m
ocs-operator-6cf4c9fc95-qxllg                                     1/1     Running     0          22m
rook-ceph-crashcollector-worker-0-7fd95579db-jt85v                1/1     Running     0          8m38s
rook-ceph-crashcollector-worker-1-7f547f4dc-s9q5l                 1/1     Running     0          8m7s
rook-ceph-crashcollector-worker-2-bd6d78488-4pzz7                 1/1     Running     0          8m27s
rook-ceph-drain-canary-worker-0-64d6558fcb-x52z4                  1/1     Running     0          7m16s
rook-ceph-drain-canary-worker-1-7f8858f74-vsc6f                   1/1     Running     0          7m16s
rook-ceph-drain-canary-worker-2-5fd88c555c-2rmnx                  1/1     Running     0          7m18s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-8458f9c54l4zx   1/1     Running     0          6m59s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-bf76744474rnz   1/1     Running     0          6m58s
rook-ceph-mgr-a-5488b747df-2bmnn                                  1/1     Running     0          7m49s
rook-ceph-mon-a-7884697548-tp5pw                                  1/1     Running     0          8m38s
rook-ceph-mon-b-7b47c5c597-5jdrv                                  1/1     Running     0          8m27s
rook-ceph-mon-c-69f945d575-4pmtn                                  1/1     Running     0          8m7s
rook-ceph-operator-674cfcd899-hpvmr                               1/1     Running     0          22m
rook-ceph-osd-0-7f98849d5-vfhsn                                   1/1     Running     0          7m18s
rook-ceph-osd-1-594cc64794-p45h2                                  1/1     Running     0          7m17s
rook-ceph-osd-2-655df8c45c-wb9g6                                  1/1     Running     0          7m16s
rook-ceph-osd-prepare-ocs-deviceset-0-0-65zj5-fzwxt               0/1     Completed   0          7m29s
rook-ceph-osd-prepare-ocs-deviceset-1-0-w2hl2-z2xns               0/1     Completed   0          7m29s
rook-ceph-osd-prepare-ocs-deviceset-2-0-bj7qz-zhsfs               0/1     Completed   0          7m29s
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-66b5b7dh8nzc   1/1     Running     0          6m36s
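The cluster resources themselves can also be checked from the CLI (a quick sanity check; ocs-storagecluster is the default name the wizard creates, and the exact columns printed vary by OCS version). The StorageCluster's PHASE should eventually report Ready:

$ oc get storagecluster -n openshift-storage
$ oc get cephcluster -n openshift-storage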
A number of new storage classes are now available to us as well:
$ oc get storageclass
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  32m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  12m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              false                  12m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  8m35s
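Before moving on, a quick way to confirm the new classes actually provision is to request a small test volume against the RBD class (a minimal sketch; rbd-test is an arbitrary name used for illustration):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ocs-storagecluster-ceph-rbd

Create it with oc create -f, confirm the PVC reaches the Bound state with oc get pvc rbd-test, and then clean it up with oc delete pvc rbd-test.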
At this point OpenShift Container Storage is ready to be consumed by OpenShift Virtualization. However, we still need to install OpenShift Virtualization itself. Again we will use the console to install the operator, first searching for it:
Click on the operator to bring up the install button:
Once the install button is clicked, it will present the options for the OpenShift Virtualization operator:
Once the options are selected and Install is clicked, the operator will install; when it completes, the following screen is displayed:
Clicking on the OpenShift Virtualization operator and then clicking on CNV Operator Deployment will bring us to the following screen:
Then click on the Create HyperConverged Cluster button, accept the defaults, and click Create. Shortly afterwards we can watch the pods begin to instantiate in the openshift-cnv namespace:
$ oc get pods -n openshift-cnv
NAME                                                              READY   STATUS              RESTARTS   AGE
bridge-marker-4q57k                                               0/1     ContainerCreating   0          12s
bridge-marker-66vzn                                               1/1     Running             0          11s
bridge-marker-m6ll8                                               1/1     Running             0          11s
bridge-marker-slch2                                               0/1     ContainerCreating   0          12s
bridge-marker-xthw9                                               1/1     Running             0          12s
bridge-marker-zgvxq                                               1/1     Running             0          11s
cdi-apiserver-c6bfbc6b9-rjnl4                                     0/1     Running             0          13s
cdi-deployment-7c4fd9cb5b-rr8fk                                   0/1     ContainerCreating   0          13s
cdi-operator-7467d4478-9xcrm                                      1/1     Running             0          16m
cdi-uploadproxy-54c7876d47-wpzvf                                  0/1     ContainerCreating   0          13s
cluster-network-addons-operator-687c79cc7b-94fdm                  1/1     Running             0          16m
hco-operator-7cd98b5686-jmb8t                                     0/1     Running             0          16m
hostpath-provisioner-operator-5d8f6bc547-4f5rd                    1/1     Running             0          16m
kube-cni-linux-bridge-plugin-cd4zb                                0/1     ContainerCreating   0          15s
kube-cni-linux-bridge-plugin-cgpkz                                0/1     ContainerCreating   0          15s
kube-cni-linux-bridge-plugin-cxzvv                                1/1     Running             0          15s
kube-cni-linux-bridge-plugin-d9qnn                                1/1     Running             0          15s
kube-cni-linux-bridge-plugin-f9k7g                                1/1     Running             0          15s
kube-cni-linux-bridge-plugin-gj6th                                1/1     Running             0          15s
kubevirt-a2582644e49387db02a9524312c0792c76605518-jobczrpsrrgjc   0/1     Completed           0          16s
kubevirt-ssp-operator-69fbcd58f4-kpqr9                            1/1     Running             0          16m
nmstate-handler-kwt7d                                             0/1     ContainerCreating   0          8s
nmstate-handler-llj8z                                             0/1     ContainerCreating   0          8s
nmstate-handler-vct84                                             0/1     ContainerCreating   0          8s
nmstate-handler-worker-5rr47                                      0/1     ContainerCreating   0          8s
nmstate-handler-worker-jg4v4                                      1/1     Running             0          8s
nmstate-handler-worker-wjnz2                                      0/1     ContainerCreating   0          8s
node-maintenance-operator-6c76fbb8d-qllp6                         1/1     Running             0          16m
ovs-cni-amd64-2rgn7                                               0/2     ContainerCreating   0          5s
ovs-cni-amd64-8rbb6                                               0/2     ContainerCreating   0          5s
ovs-cni-amd64-hc5b8                                               0/2     ContainerCreating   0          5s
ovs-cni-amd64-hkdjb                                               0/2     ContainerCreating   0          5s
ovs-cni-amd64-hw995                                               0/2     ContainerCreating   0          5s
ovs-cni-amd64-jqg8x                                               0/2     ContainerCreating   0          5s
virt-operator-6d766f7698-bmsdc                                    1/1     Running             0          16m
virt-operator-6d766f7698-m2fxl                                    1/1     Running             0          16m
Looking back at the console, once all the pods are running we will be presented with the following screen:
And if we look back at the command line we can see all the pods running in the openshift-cnv namespace:
$ oc get pods -n openshift-cnv
NAME                                               READY   STATUS    RESTARTS   AGE
bridge-marker-4q57k                                1/1     Running   0          11m
bridge-marker-66vzn                                1/1     Running   0          11m
bridge-marker-m6ll8                                1/1     Running   0          11m
bridge-marker-slch2                                1/1     Running   0          11m
bridge-marker-xthw9                                1/1     Running   0          11m
bridge-marker-zgvxq                                1/1     Running   0          11m
cdi-apiserver-c6bfbc6b9-rjnl4                      1/1     Running   0          11m
cdi-deployment-7c4fd9cb5b-rr8fk                    1/1     Running   0          11m
cdi-operator-7467d4478-9xcrm                       1/1     Running   0          28m
cdi-uploadproxy-54c7876d47-wpzvf                   1/1     Running   0          11m
cluster-network-addons-operator-687c79cc7b-94fdm   1/1     Running   0          28m
hco-operator-7cd98b5686-jmb8t                      1/1     Running   0          28m
hostpath-provisioner-operator-5d8f6bc547-4f5rd     1/1     Running   0          28m
kube-cni-linux-bridge-plugin-cd4zb                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-cgpkz                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-cxzvv                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-d9qnn                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-f9k7g                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-gj6th                 1/1     Running   0          11m
kubevirt-node-labeller-9947m                       1/1     Running   0          11m
kubevirt-node-labeller-ckqbn                       1/1     Running   0          11m
kubevirt-node-labeller-hn9xt                       1/1     Running   0          11m
kubevirt-node-labeller-kw5hb                       1/1     Running   0          11m
kubevirt-node-labeller-n4lxc                       1/1     Running   0          11m
kubevirt-node-labeller-vhmkk                       1/1     Running   0          11m
kubevirt-ssp-operator-69fbcd58f4-kpqr9             1/1     Running   0          28m
nmstate-handler-kwt7d                              1/1     Running   0          11m
nmstate-handler-llj8z                              1/1     Running   0          11m
nmstate-handler-vct84                              1/1     Running   0          11m
nmstate-handler-worker-5rr47                       1/1     Running   0          11m
nmstate-handler-worker-jg4v4                       1/1     Running   0          11m
nmstate-handler-worker-wjnz2                       1/1     Running   0          11m
node-maintenance-operator-6c76fbb8d-qllp6          1/1     Running   0          28m
ovs-cni-amd64-2rgn7                                2/2     Running   0          11m
ovs-cni-amd64-8rbb6                                2/2     Running   0          11m
ovs-cni-amd64-hc5b8                                2/2     Running   0          11m
ovs-cni-amd64-hkdjb                                2/2     Running   0          11m
ovs-cni-amd64-hw995                                2/2     Running   0          11m
ovs-cni-amd64-jqg8x                                2/2     Running   0          11m
virt-api-84fd45c455-2jjbb                          1/1     Running   0          11m
virt-api-84fd45c455-ct7dt                          1/1     Running   0          11m
virt-controller-74c54b549-qz4q9                    1/1     Running   0          11m
virt-controller-74c54b549-zt5qj                    1/1     Running   0          11m
virt-handler-69644                                 1/1     Running   0          11m
virt-handler-7857r                                 1/1     Running   0          11m
virt-handler-bpsjg                                 1/1     Running   0          11m
virt-handler-j48c7                                 1/1     Running   0          11m
virt-handler-rrl6d                                 1/1     Running   0          11m
virt-handler-vr9bf                                 1/1     Running   0          11m
virt-operator-6d766f7698-bmsdc                     1/1     Running   0          28m
virt-operator-6d766f7698-m2fxl                     1/1     Running   0          28m
virt-template-validator-bd96cdfcd-4qgcz            1/1     Running   0          11m
virt-template-validator-bd96cdfcd-6d27x            1/1     Running   0          11m
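We can also verify that the operator itself finished successfully by listing its ClusterServiceVersion; the PHASE column should read Succeeded:

$ oc get csv -n openshift-cnv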
Finally, at this point we can actually go and launch a virtual machine. If we navigate to Workloads in the console we can see there is a Virtualization option:
Here we can click on the Create Virtual Machine button to start the wizard. The wizard's first screen is where we set VM-specific attributes: name, OS source, OS type, VM size, etc. Below is an example of my VM:
Clicking next brings us to a networking screen. Options for adding various kinds of network connectivity are available here, but in this example I am just going to accept the defaults:
Next we are brought to storage, and by default there is no storage class assigned to the disk. If a default storage class had been set on the OCP cluster the disk would most likely default to that, but I did not have one configured. Given that, I needed to edit the disk using the three dots on the right and set the storage class:
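To avoid this step for future VMs, the OCS RBD class can be marked as the cluster default with the standard Kubernetes annotation (apply this only if RBD should back every PVC that does not specify a class):

$ oc patch storageclass ocs-storagecluster-ceph-rbd \
    -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'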
On the advanced screen we can set the hostname, provide an SSH key if we desire, and attach a virtual CD-ROM:
Finally we arrive at a summary screen that shows all the settings chosen through the wizard. Once reviewed, we can click next and create the VM:
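Behind the scenes the wizard is generating a VirtualMachine resource. For reference, a roughly equivalent VM can be defined from the CLI with a minimal manifest like the sketch below (demo-vm and the Fedora container-disk image are illustrative assumptions, not what my wizard run produced; CNV 2.3 on OCP 4.4 serves the kubevirt.io/v1alpha3 API):

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm           # illustrative name
spec:
  running: true           # start the VM as soon as it is created
  template:
    spec:
      domain:
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 2Gi
      volumes:
        - name: rootdisk
          # ephemeral demo image; for persistent storage, use a DataVolume
          # backed by the ocs-storagecluster-ceph-rbd storage class instead
          containerDisk:
            image: kubevirt/fedora-cloud-container-disk-demo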
While the VM is being created we can view its status on the detail page:
After waiting a little while, we can see the VM is running:
If we jump over to the VM console we can see we have a login prompt:
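The same console is also reachable from the command line with the virtctl client (assuming virtctl is installed, and using the illustrative VM name from the sketch above):

$ oc get vmi
$ virtctl console demo-vm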
Hopefully this gives everyone a taste of how easy it is to configure OpenShift Container Storage and OpenShift Virtualization in an OpenShift Container Platform 4.4 environment.