Wednesday, May 20, 2020

Installing OpenShift Container Storage and OpenShift Virtualization



There seems to be some mystery as to what might be inside the container when it comes to installing and configuring OpenShift Container Storage (OCS) and OpenShift Virtualization (formerly CNV), but this blog will easily prove there is no need to fear the container.

This blog assumes the availability of an already installed OpenShift 4.4 environment with a three master and three worker setup.  The workers are where we will run our OCS storage pods, so each of those nodes should have at least one extra unused disk for the OSDs to use.  Further, the installation of OCS requires that we have local storage PVs already created for OCS to consume during the deployment.  With that said, let's get started.

First, let's take a quick look at our environment by listing the nodes and examining one of the workers to see the extra disk (vdb in this case):

$ export KUBECONFIG=/home/cloud-user/scripts/ocp/auth/kubeconfig
$ oc get nodes
NAME       STATUS   ROLES    AGE    VERSION
master-0   Ready    master   59m    v1.18.2
master-1   Ready    master   51m    v1.18.2
master-2   Ready    master   60m    v1.18.2
worker-0   Ready    worker   4m9s   v1.18.2
worker-1   Ready    worker   27m    v1.18.2
worker-2   Ready    worker   27m    v1.18.2

$ oc debug node/worker-0
Starting pod/worker-0-debug ...
To use host binaries, run `chroot /host`
Pod IP: 10.20.0.200
If you don't see a command prompt, try pressing enter.

sh-4.2# chroot /host
sh-4.4# lsblk
NAME                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda                          252:0    0  100G  0 disk 
|-vda1                       252:1    0  384M  0 part /boot
|-vda2                       252:2    0  127M  0 part /boot/efi
|-vda3                       252:3    0    1M  0 part 
|-vda4                       252:4    0 99.4G  0 part 
| `-coreos-luks-root-nocrypt 253:0    0 99.4G  0 dm   /sysroot
`-vda5                       252:5    0   65M  0 part 
vdb                          252:16   0  100G  0 disk 

Now that we have reviewed the environment, let's first label the worker nodes as storage nodes:

$ oc label nodes worker-0 cluster.ocs.openshift.io/openshift-storage=''
node/worker-0 labeled

$ oc label nodes worker-1 cluster.ocs.openshift.io/openshift-storage=''
node/worker-1 labeled

$ oc label nodes worker-2 cluster.ocs.openshift.io/openshift-storage=''
node/worker-2 labeled

$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage=
NAME       STATUS   ROLES    AGE    VERSION
worker-0   Ready    worker   5m7s   v1.18.2
worker-1   Ready    worker   28m    v1.18.2
worker-2   Ready    worker   28m    v1.18.2

Next we can proceed to the console and install the Local Storage operator.  First we will need to create a local-storage namespace:
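
The namespace can just as easily be created from the command line; a minimal equivalent of the console step (assuming no additional labels are required on the namespace) would be:

$ oc create namespace local-storage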



Once we have created the namespace we can go to OperatorHub and search for the LocalStorage operator:


Click on the LocalStorage operator to get the install button:


Click the install button and a page of options will be presented.  Choose the version applicable to the OCP cluster version; in our case that is 4.4.  Assign the namespace to the local-storage namespace we created previously and click install:


Once the operator successfully installs, the following screen will be displayed:
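
The install can also be confirmed from the command line by listing the ClusterServiceVersions in the local-storage namespace (the exact CSV name and version will vary by release):

$ oc get csv -n local-storage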


Now that we have the LocalStorage operator installed, let's go back to the command line and create the local storage PVs that will eventually be consumed by OCS.  The first step is to create a local-storage.yaml file and populate it with the following:

apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-block
  namespace: local-storage
spec:
  nodeSelector:
    nodeSelectorTerms:
    - matchExpressions:
        - key: cluster.ocs.openshift.io/openshift-storage
          operator: In
          values:
          - ""
  storageClassDevices:
    - storageClassName: localblock
      volumeMode: Block
      devicePaths:
        - /dev/vdb
 

Save the file.  Before applying it, we can confirm which nodes carry the OCS storage label that the nodeSelector above targets:

$ oc get nodes -l cluster.ocs.openshift.io/openshift-storage -o jsonpath='{range .items[*]}{.metadata.name}{"\n"}{end}'
worker-0
worker-1
worker-2

Next we will create the block PVs using the local-storage.yaml we created:

$ oc create -f local-storage.yaml 
localvolume.local.storage.openshift.io/local-block created 

Let's validate that the pods and PVs were created:

$ oc -n local-storage get pods
NAME                                      READY   STATUS    RESTARTS   AGE
local-block-local-diskmaker-dxgwj         1/1     Running   0          42s
local-block-local-diskmaker-hrsvj         1/1     Running   0          42s
local-block-local-diskmaker-zsptk         1/1     Running   0          42s
local-block-local-provisioner-pwgfl       1/1     Running   0          42s
local-block-local-provisioner-s56f4       1/1     Running   0          42s
local-block-local-provisioner-wp8bz       1/1     Running   0          42s
local-storage-operator-5c46f48cfc-6ht8r   1/1     Running   0          15m

$ oc get pv
NAME                CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
local-pv-40d06fba   100Gi      RWO            Delete           Available           localblock              30s
local-pv-8aea98b7   100Gi      RWO            Delete           Available           localblock              28s
local-pv-e62c1b44   100Gi      RWO            Delete           Available           localblock              34s

$ oc get sc | grep localblock
localblock   kubernetes.io/no-provisioner   Delete          WaitForFirstConsumer   false                  60s
 

If everything looks good we can now return to the console and install the OCS operator.  As we did with the previous operator, we search OperatorHub for OpenShift Container Storage and the operator is shown as available to us:



Click on the operator and the install button will be displayed:


After clicking the install button the options for the operator will be displayed.  Select the options based on the environment; the ones below are the defaults and I went with those:


After clicking install the operator will install and return to a screen similar to the one below.  Notice that the AWS S3 Operator was installed as well given it's a dependency of the OCS operator.
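
If we want to double-check from the command line, the installed operators appear as ClusterServiceVersions in the openshift-storage namespace (assuming the default install namespace was kept; exact names and versions will vary):

$ oc get csv -n openshift-storage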


With the OCS operator installed and the local storage PVs ready, we can now create our OCS cluster.  Let's use the cluster wizard within the console.  First we need to click on the OpenShift Container Storage operator in the list of installed operators:


Next we will click on the Storage Cluster heading which will display a button to the right for creating the OCS cluster service.  Click on this button:


This will bring up the following screen, where the nodes labeled as OCS nodes are already checked, and once the localblock storage class is selected the total raw capacity on those nodes is shown:


Click the create button and the console will show a progress screen as it begins to instantiate the cluster:
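
For those curious what the wizard does under the covers, it essentially creates a StorageCluster custom resource.  The sketch below is a hedged approximation for this three node, one-disk-per-node local storage setup; the exact fields and values the wizard generates may differ:

apiVersion: ocs.openshift.io/v1
kind: StorageCluster
metadata:
  name: ocs-storagecluster
  namespace: openshift-storage
spec:
  manageNodes: false
  monDataDirHostPath: /var/lib/rook
  storageDeviceSets:
  - name: ocs-deviceset
    count: 1
    replica: 3
    portable: false
    placement: {}
    resources: {}
    dataPVCTemplate:
      spec:
        # consume the local block PVs we created earlier
        storageClassName: localblock
        accessModes:
        - ReadWriteOnce
        volumeMode: Block
        resources:
          requests:
            storage: 100Gi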


If we jump over to the command line we can also see pods are starting to instantiate:

$ oc get pods -n openshift-storage
NAME                                            READY   STATUS              RESTARTS   AGE
aws-s3-provisioner-85b697dd54-zcrv5             1/1     Running             0          12m
csi-cephfsplugin-646jk                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-b6c66                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-ppqdb                          3/3     Running             0          27s
csi-cephfsplugin-provisioner-785b9d8bd5-45cwm   0/5     ContainerCreating   0          26s
csi-cephfsplugin-provisioner-785b9d8bd5-nk9gl   0/5     ContainerCreating   0          26s
csi-cephfsplugin-rfw2m                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-vpzp8                          0/3     ContainerCreating   0          27s
csi-cephfsplugin-xv484                          0/3     ContainerCreating   0          27s
csi-rbdplugin-864jc                             3/3     Running             0          27s
csi-rbdplugin-crgr4                             0/3     ContainerCreating   0          27s
csi-rbdplugin-hwtr4                             0/3     ContainerCreating   0          28s
csi-rbdplugin-provisioner-66c87f8bd4-48q6q      0/5     ContainerCreating   0          27s
csi-rbdplugin-provisioner-66c87f8bd4-js247      0/5     ContainerCreating   0          27s
csi-rbdplugin-s9fr7                             0/3     ContainerCreating   0          27s
csi-rbdplugin-t7tgb                             0/3     ContainerCreating   0          28s
csi-rbdplugin-x25f5                             0/3     ContainerCreating   0          28s
noobaa-operator-6f7d47dff6-dw62k                1/1     Running             0          12m
ocs-operator-6cf4c9fc95-qxllg                   0/1     Running             0          12m
rook-ceph-mon-a-canary-6fb7d8ff5d-bl6kq         1/1     Running             0          9s
rook-ceph-mon-b-canary-864586477c-n6tjs         0/1     ContainerCreating   0          8s
rook-ceph-mon-c-canary-cb7d94fd8-mvvzc          0/1     ContainerCreating   0          3s
rook-ceph-operator-674cfcd899-hpvmr             1/1     Running             0          12m

Once the cluster has installed the console will display the status similar to the image below:


We can see from the command line that all the pods are up and running as well:

$ oc get pods -n openshift-storage
NAME                                                              READY   STATUS      RESTARTS   AGE
aws-s3-provisioner-85b697dd54-zcrv5                               1/1     Running     0          22m
csi-cephfsplugin-646jk                                            3/3     Running     0          10m
csi-cephfsplugin-b6c66                                            3/3     Running     0          10m
csi-cephfsplugin-ppqdb                                            3/3     Running     0          10m
csi-cephfsplugin-provisioner-785b9d8bd5-45cwm                     5/5     Running     0          10m
csi-cephfsplugin-provisioner-785b9d8bd5-nk9gl                     5/5     Running     1          10m
csi-cephfsplugin-rfw2m                                            3/3     Running     0          10m
csi-cephfsplugin-vpzp8                                            3/3     Running     0          10m
csi-cephfsplugin-xv484                                            3/3     Running     0          10m
csi-rbdplugin-864jc                                               3/3     Running     0          10m
csi-rbdplugin-crgr4                                               3/3     Running     0          10m
csi-rbdplugin-hwtr4                                               3/3     Running     0          10m
csi-rbdplugin-provisioner-66c87f8bd4-48q6q                        5/5     Running     1          10m
csi-rbdplugin-provisioner-66c87f8bd4-js247                        5/5     Running     0          10m
csi-rbdplugin-s9fr7                                               3/3     Running     0          10m
csi-rbdplugin-t7tgb                                               3/3     Running     0          10m
csi-rbdplugin-x25f5                                               3/3     Running     0          10m
noobaa-core-0                                                     1/1     Running     0          7m15s
noobaa-db-0                                                       1/1     Running     0          7m15s
noobaa-endpoint-7cdcc9bdc6-85s2v                                  1/1     Running     0          5m57s
noobaa-operator-6f7d47dff6-dw62k                                  1/1     Running     0          22m
ocs-operator-6cf4c9fc95-qxllg                                     1/1     Running     0          22m
rook-ceph-crashcollector-worker-0-7fd95579db-jt85v                1/1     Running     0          8m38s
rook-ceph-crashcollector-worker-1-7f547f4dc-s9q5l                 1/1     Running     0          8m7s
rook-ceph-crashcollector-worker-2-bd6d78488-4pzz7                 1/1     Running     0          8m27s
rook-ceph-drain-canary-worker-0-64d6558fcb-x52z4                  1/1     Running     0          7m16s
rook-ceph-drain-canary-worker-1-7f8858f74-vsc6f                   1/1     Running     0          7m16s
rook-ceph-drain-canary-worker-2-5fd88c555c-2rmnx                  1/1     Running     0          7m18s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-8458f9c54l4zx   1/1     Running     0          6m59s
rook-ceph-mds-ocs-storagecluster-cephfilesystem-b-bf76744474rnz   1/1     Running     0          6m58s
rook-ceph-mgr-a-5488b747df-2bmnn                                  1/1     Running     0          7m49s
rook-ceph-mon-a-7884697548-tp5pw                                  1/1     Running     0          8m38s
rook-ceph-mon-b-7b47c5c597-5jdrv                                  1/1     Running     0          8m27s
rook-ceph-mon-c-69f945d575-4pmtn                                  1/1     Running     0          8m7s
rook-ceph-operator-674cfcd899-hpvmr                               1/1     Running     0          22m
rook-ceph-osd-0-7f98849d5-vfhsn                                   1/1     Running     0          7m18s
rook-ceph-osd-1-594cc64794-p45h2                                  1/1     Running     0          7m17s
rook-ceph-osd-2-655df8c45c-wb9g6                                  1/1     Running     0          7m16s
rook-ceph-osd-prepare-ocs-deviceset-0-0-65zj5-fzwxt               0/1     Completed   0          7m29s
rook-ceph-osd-prepare-ocs-deviceset-1-0-w2hl2-z2xns               0/1     Completed   0          7m29s
rook-ceph-osd-prepare-ocs-deviceset-2-0-bj7qz-zhsfs               0/1     Completed   0          7m29s
rook-ceph-rgw-ocs-storagecluster-cephobjectstore-a-66b5b7dh8nzc   1/1     Running     0          6m36s

We can also see that a number of new storage classes are available to us:

$ oc get storageclass
NAME                          PROVISIONER                             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
localblock                    kubernetes.io/no-provisioner            Delete          WaitForFirstConsumer   false                  32m
ocs-storagecluster-ceph-rbd   openshift-storage.rbd.csi.ceph.com      Delete          Immediate              false                  12m
ocs-storagecluster-cephfs     openshift-storage.cephfs.csi.ceph.com   Delete          Immediate              false                  12m
openshift-storage.noobaa.io   openshift-storage.noobaa.io/obc         Delete          Immediate              false                  8m35s
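
Beyond the pods and storage classes, the overall state of the cluster can be sanity checked through the StorageCluster and CephCluster resources; both should eventually report a ready/healthy phase (output omitted here since it varies by release):

$ oc get storagecluster -n openshift-storage
$ oc get cephcluster -n openshift-storage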

At this point OpenShift Container Storage is ready to be consumed by OpenShift Virtualization.  However, we still need to install OpenShift Virtualization itself.  Again we will use the console to install the operator, first searching for it:


Click on the operator to bring up the install button:


Once the install button is clicked it will present the options for the OpenShift Virtualization operator:


Once the options have been selected and install has been clicked, the operator will install, and once complete the following screen is displayed:
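
As before, the install can also be verified from the command line by listing the ClusterServiceVersions in the openshift-cnv namespace (names and versions will vary):

$ oc get csv -n openshift-cnv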



Clicking on the OpenShift Virtualization operator and then clicking on CNV Operator Deployment will bring us to the following screen:


Then click on the Create HyperConverged Cluster button, accept the defaults and click create:
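
For reference, clicking create is roughly equivalent to applying a HyperConverged custom resource like the hedged sketch below; the API version and defaults can vary between OpenShift Virtualization releases, so treat this as illustrative only:

apiVersion: hco.kubevirt.io/v1alpha1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}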


This will begin the process of launching the required pods for OpenShift Virtualization.  If we jump over to the command line we can see the pods spinning up:

$ oc get pods --all-namespaces | grep openshift-cnv
openshift-cnv                                      bridge-marker-4q57k                                               0/1     ContainerCreating   0          12s
openshift-cnv                                      bridge-marker-66vzn                                               1/1     Running             0          11s
openshift-cnv                                      bridge-marker-m6ll8                                               1/1     Running             0          11s
openshift-cnv                                      bridge-marker-slch2                                               0/1     ContainerCreating   0          12s
openshift-cnv                                      bridge-marker-xthw9                                               1/1     Running             0          12s
openshift-cnv                                      bridge-marker-zgvxq                                               1/1     Running             0          11s
openshift-cnv                                      cdi-apiserver-c6bfbc6b9-rjnl4                                     0/1     Running             0          13s
openshift-cnv                                      cdi-deployment-7c4fd9cb5b-rr8fk                                   0/1     ContainerCreating   0          13s
openshift-cnv                                      cdi-operator-7467d4478-9xcrm                                      1/1     Running             0          16m
openshift-cnv                                      cdi-uploadproxy-54c7876d47-wpzvf                                  0/1     ContainerCreating   0          13s
openshift-cnv                                      cluster-network-addons-operator-687c79cc7b-94fdm                  1/1     Running             0          16m
openshift-cnv                                      hco-operator-7cd98b5686-jmb8t                                     0/1     Running             0          16m
openshift-cnv                                      hostpath-provisioner-operator-5d8f6bc547-4f5rd                    1/1     Running             0          16m
openshift-cnv                                      kube-cni-linux-bridge-plugin-cd4zb                                0/1     ContainerCreating   0          15s
openshift-cnv                                      kube-cni-linux-bridge-plugin-cgpkz                                0/1     ContainerCreating   0          15s
openshift-cnv                                      kube-cni-linux-bridge-plugin-cxzvv                                1/1     Running             0          15s
openshift-cnv                                      kube-cni-linux-bridge-plugin-d9qnn                                1/1     Running             0          15s
openshift-cnv                                      kube-cni-linux-bridge-plugin-f9k7g                                1/1     Running             0          15s
openshift-cnv                                      kube-cni-linux-bridge-plugin-gj6th                                1/1     Running             0          15s
openshift-cnv                                      kubevirt-a2582644e49387db02a9524312c0792c76605518-jobczrpsrrgjc   0/1     Completed           0          16s
openshift-cnv                                      kubevirt-ssp-operator-69fbcd58f4-kpqr9                            1/1     Running             0          16m
openshift-cnv                                      nmstate-handler-kwt7d                                             0/1     ContainerCreating   0          8s
openshift-cnv                                      nmstate-handler-llj8z                                             0/1     ContainerCreating   0          8s
openshift-cnv                                      nmstate-handler-vct84                                             0/1     ContainerCreating   0          8s
openshift-cnv                                      nmstate-handler-worker-5rr47                                      0/1     ContainerCreating   0          8s
openshift-cnv                                      nmstate-handler-worker-jg4v4                                      1/1     Running             0          8s
openshift-cnv                                      nmstate-handler-worker-wjnz2                                      0/1     ContainerCreating   0          8s
openshift-cnv                                      node-maintenance-operator-6c76fbb8d-qllp6                         1/1     Running             0          16m
openshift-cnv                                      ovs-cni-amd64-2rgn7                                               0/2     ContainerCreating   0          5s
openshift-cnv                                      ovs-cni-amd64-8rbb6                                               0/2     ContainerCreating   0          5s
openshift-cnv                                      ovs-cni-amd64-hc5b8                                               0/2     ContainerCreating   0          5s
openshift-cnv                                      ovs-cni-amd64-hkdjb                                               0/2     ContainerCreating   0          5s
openshift-cnv                                      ovs-cni-amd64-hw995                                               0/2     ContainerCreating   0          5s
openshift-cnv                                      ovs-cni-amd64-jqg8x                                               0/2     ContainerCreating   0          5s
openshift-cnv                                      virt-operator-6d766f7698-bmsdc                                    1/1     Running             0          16m
openshift-cnv                                      virt-operator-6d766f7698-m2fxl                                    1/1     Running             0          16m


Looking back at the console, once all the pods are running we will be presented with the following screen:


And if we look back at the command line we can see all the pods running under the openshift-cnv namespace:

$ oc get pods -n openshift-cnv
NAME                                               READY   STATUS    RESTARTS   AGE
bridge-marker-4q57k                                1/1     Running   0          11m
bridge-marker-66vzn                                1/1     Running   0          11m
bridge-marker-m6ll8                                1/1     Running   0          11m
bridge-marker-slch2                                1/1     Running   0          11m
bridge-marker-xthw9                                1/1     Running   0          11m
bridge-marker-zgvxq                                1/1     Running   0          11m
cdi-apiserver-c6bfbc6b9-rjnl4                      1/1     Running   0          11m
cdi-deployment-7c4fd9cb5b-rr8fk                    1/1     Running   0          11m
cdi-operator-7467d4478-9xcrm                       1/1     Running   0          28m
cdi-uploadproxy-54c7876d47-wpzvf                   1/1     Running   0          11m
cluster-network-addons-operator-687c79cc7b-94fdm   1/1     Running   0          28m
hco-operator-7cd98b5686-jmb8t                      1/1     Running   0          28m
hostpath-provisioner-operator-5d8f6bc547-4f5rd     1/1     Running   0          28m
kube-cni-linux-bridge-plugin-cd4zb                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-cgpkz                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-cxzvv                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-d9qnn                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-f9k7g                 1/1     Running   0          11m
kube-cni-linux-bridge-plugin-gj6th                 1/1     Running   0          11m
kubevirt-node-labeller-9947m                       1/1     Running   0          11m
kubevirt-node-labeller-ckqbn                       1/1     Running   0          11m
kubevirt-node-labeller-hn9xt                       1/1     Running   0          11m
kubevirt-node-labeller-kw5hb                       1/1     Running   0          11m
kubevirt-node-labeller-n4lxc                       1/1     Running   0          11m
kubevirt-node-labeller-vhmkk                       1/1     Running   0          11m
kubevirt-ssp-operator-69fbcd58f4-kpqr9             1/1     Running   0          28m
nmstate-handler-kwt7d                              1/1     Running   0          11m
nmstate-handler-llj8z                              1/1     Running   0          11m
nmstate-handler-vct84                              1/1     Running   0          11m
nmstate-handler-worker-5rr47                       1/1     Running   0          11m
nmstate-handler-worker-jg4v4                       1/1     Running   0          11m
nmstate-handler-worker-wjnz2                       1/1     Running   0          11m
node-maintenance-operator-6c76fbb8d-qllp6          1/1     Running   0          28m
ovs-cni-amd64-2rgn7                                2/2     Running   0          11m
ovs-cni-amd64-8rbb6                                2/2     Running   0          11m
ovs-cni-amd64-hc5b8                                2/2     Running   0          11m
ovs-cni-amd64-hkdjb                                2/2     Running   0          11m
ovs-cni-amd64-hw995                                2/2     Running   0          11m
ovs-cni-amd64-jqg8x                                2/2     Running   0          11m
virt-api-84fd45c455-2jjbb                          1/1     Running   0          11m
virt-api-84fd45c455-ct7dt                          1/1     Running   0          11m
virt-controller-74c54b549-qz4q9                    1/1     Running   0          11m
virt-controller-74c54b549-zt5qj                    1/1     Running   0          11m
virt-handler-69644                                 1/1     Running   0          11m
virt-handler-7857r                                 1/1     Running   0          11m
virt-handler-bpsjg                                 1/1     Running   0          11m
virt-handler-j48c7                                 1/1     Running   0          11m
virt-handler-rrl6d                                 1/1     Running   0          11m
virt-handler-vr9bf                                 1/1     Running   0          11m
virt-operator-6d766f7698-bmsdc                     1/1     Running   0          28m
virt-operator-6d766f7698-m2fxl                     1/1     Running   0          28m
virt-template-validator-bd96cdfcd-4qgcz            1/1     Running   0          11m
virt-template-validator-bd96cdfcd-6d27x            1/1     Running   0          11m


Finally, at this point we can actually go and launch a virtual machine.  If we navigate to Workloads in the console we can see there is a Virtualization option:


It is here where we can click on the Create Virtual Machine button to start the wizard.  The first screen in the wizard is where we set VM-specific attributes: name, OS source, OS type, VM size, etc.  Below is an example of my VM:


Clicking next, we are presented with a networking screen.  Again, options for adding various kinds of network connectivity are available here, but in this example I am just going to accept the defaults:



Next we are brought to storage, and by default there is no storageclass assigned to the disk.  If a default storageclass had been set on the OCP cluster it would most likely default to that, but I did not have that configured.  Given that, I needed to edit the disk using the three dots on the right and set the storageclass:



On the advanced screen we can set the hostname, an SSH key if we desire and also attach a virtual CD-ROM:



Finally we get to a summary screen which shows all the settings as they were chosen through the wizard.  Once reviewed we can click next and create the VM:
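
Behind the scenes the wizard generates a VirtualMachine custom resource.  A heavily trimmed, hypothetical sketch of such a manifest is shown below; the name, image URL and sizes are placeholders, and the real wizard output contains considerably more detail:

apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  running: true
  dataVolumeTemplates:
  - metadata:
      name: example-vm-rootdisk
    spec:
      source:
        http:
          # placeholder image location
          url: http://example.com/images/rhel8.qcow2
      pvc:
        # ties the root disk to the OCS rbd storage class selected in the wizard
        storageClassName: ocs-storagecluster-ceph-rbd
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 20Gi
  template:
    metadata:
      labels:
        kubevirt.io/domain: example-vm
    spec:
      domain:
        cpu:
          cores: 2
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
        resources:
          requests:
            memory: 4Gi
      volumes:
      - name: rootdisk
        dataVolume:
          name: example-vm-rootdisk

The dataVolumeTemplates section is what requests a PVC from the ocs-storagecluster-ceph-rbd storage class and imports the source image into it before the VM boots.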



Once the VM is creating we can view the status of the VM on the detail page:



After waiting a little bit, we can see the VM is running:
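
The same can be confirmed from the command line; KubeVirt exposes the VM and its running instance as resources in the project where the VM was created:

$ oc get vm
$ oc get vmi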


If we jump over to the VM console we can see we have a login prompt:
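
If the virtctl client is installed, the serial console can also be reached from the command line (the VM name here is a placeholder):

$ virtctl console <vm-name>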



Hopefully this gives everyone a taste of how easy it is to configure OpenShift Container Storage and OpenShift Virtualization in an OpenShift Container Platform 4.4 environment.

Monday, April 13, 2020

Open Cluster Management Inside CodeReady Containers



Wouldn't it be great to control a set of OpenShift or Kubernetes clusters deployed on a variety of cloud and/or baremetal platforms from a centralized hub?  This is exactly what Open Cluster Management aims to achieve, but getting it up and running in production requires an already deployed OpenShift or Kubernetes cluster.  In the following blog I will describe how one could configure CodeReady Containers and deploy Open Cluster Management in that environment for non-production feature and functionality testing.

CodeReady Containers brings a minimal, preconfigured OpenShift 4.1 or newer cluster to your local laptop or desktop computer for development and testing purposes. CodeReady Containers is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10.  In the example below we will be using a Red Hat Enterprise Linux 8 host with virtualization services enabled to launch our CodeReady Container environment that will eventually run Open Cluster Management.

The first step is to obtain the CodeReady Container distribution from Red Hat at the following: https://developers.redhat.com/products/codeready-containers/overview  Click the download button and it will redirect to a page where the platform download of choice can be selected.

Once we have downloaded the crc-linux-amd64.tar.xz we can extract it, go into the extracted directory and run crc setup:

$ tar -xf crc-linux-amd64.tar.xz
$ cd crc-linux-1.8.0-amd64/
$ crc setup
INFO Checking if oc binary is cached              
INFO Checking if podman remote binary is cached   
INFO Checking if CRC bundle is cached in '$HOME/.crc' 
INFO Checking if running as non-root              
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if libvirt is enabled               
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking for obsolete crc-driver-libvirt     
INFO Checking if libvirt 'crc' network is available 
INFO Checking if libvirt 'crc' network is active  
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists 
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists 
Setup is complete, you can now run 'crc start' to start the OpenShift cluster

Now that setup is complete, I want to modify a few of the configuration settings before we start the virtual machine.  Below we will increase the number of CPUs and the memory allocated to the virtual machine, as the defaults are not enough to run Open Cluster Management in a CodeReady Container:

$ crc config set cpus 6
Changes to configuration property 'cpus' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.
$ crc config set memory 24000
Changes to configuration property 'memory' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.

Now that we have increased our CPU and memory resources, we can start the virtual machine that runs the CodeReady Container, which in turn provides the single node OpenShift environment Open Cluster Management will run on.  During startup the image pull secret will be required; it is obtained on the same download page where the CodeReady Container was retrieved.

$ crc start
INFO Checking if oc binary is cached              
INFO Checking if podman remote binary is cached   
INFO Checking if running as non-root              
INFO Checking if Virtualization is enabled        
INFO Checking if KVM is enabled                   
INFO Checking if libvirt is installed             
INFO Checking if user is part of libvirt group    
INFO Checking if libvirt is enabled               
INFO Checking if libvirt daemon is running        
INFO Checking if a supported libvirt version is installed 
INFO Checking if crc-driver-libvirt is installed  
INFO Checking if libvirt 'crc' network is available 
INFO Checking if libvirt 'crc' network is active  
INFO Checking if NetworkManager is installed      
INFO Checking if NetworkManager service is running 
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists 
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists 
? Image pull secret [? for help] ****************************************

INFO Loading bundle: crc_libvirt_4.3.8.crcbundle ... 
INFO Checking size of the disk image /home/bschmaus/.crc/cache/crc_libvirt_4.3.8/crc.qcow2 ... 
INFO Creating CodeReady Containers VM for OpenShift 4.3.8... 
INFO Verifying validity of the cluster certificates ... 
INFO Check internal and public DNS query ...      
INFO Check DNS query from host ...                
INFO Copying kubeconfig file to instance dir ...  
INFO Adding user's pull secret ...                
INFO Updating cluster ID ...                      
INFO Starting OpenShift cluster ... [waiting 3m]  
INFO                                              
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions 
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443' 
INFO To login as an admin, run 'oc login -u kubeadmin -p kKdPx-pjmWe-b3kuu-jeZm3 https://api.crc.testing:6443' 
INFO                                              
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console 
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation 

Before we continue, let's confirm we can access our CodeReady Container OpenShift cluster:

$ crc oc-env
export PATH="/home/bschmaus/.crc/bin:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)
$ eval $(crc oc-env)
$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.8     True        False         False      17d
cloud-credential                           4.3.8     True        False         False      17d
cluster-autoscaler                         4.3.8     True        False         False      17d
console                                    4.3.8     True        False         False      17d
dns                                        4.3.8     True        False         False      8m46s
image-registry                             4.3.8     True        False         False      17d
ingress                                    4.3.8     True        False         False      17d
insights                                   4.3.8     True        False         False      17d
kube-apiserver                             4.3.8     True        False         False      17d
kube-controller-manager                    4.3.8     True        False         False      17d
kube-scheduler                             4.3.8     True        False         False      17d
machine-api                                4.3.8     True        False         False      17d
machine-config                             4.3.8     True        False         False      17d
marketplace                                4.3.8     True        False         False      8m17s
monitoring                                 4.3.8     True        False         False      17d
network                                    4.3.8     True        False         False      17d
node-tuning                                4.3.8     True        False         False      8m42s
openshift-apiserver                        4.3.8     True        False         False      17d
openshift-controller-manager               4.3.8     True        False         False      17d
openshift-samples                          4.3.8     True        False         False      17d
operator-lifecycle-manager                 4.3.8     True        False         False      17d
operator-lifecycle-manager-catalog         4.3.8     True        False         False      17d
operator-lifecycle-manager-packageserver   4.3.8     True        False         False      8m21s
service-ca                                 4.3.8     True        False         False      17d
service-catalog-apiserver                  4.3.8     True        False         False      17d
service-catalog-controller-manager         4.3.8     True        False         False      17d
storage                                    4.3.8     True        False         False      17d


So far everything looks good, so let's continue.

Another requirement for Open Cluster Management is a default storage class.  In a previous blog I discussed how to enable a Netapp simulator in KVM.  Now I can finally use it along with the Trident CSI driver, which can be obtained here: https://github.com/NetApp/trident/releases  I am going with the current version as of this writing, which is v20.01.1.  Begin by downloading the release to a host that has access to the CodeReady Container we just started.  Then extract the release:

$ tar -xzf trident-installer-20.01.1.tar.gz
$ cd trident-installer/
$ ls
backend.json  extras  pvc.yaml  sample-input  sc.yaml  tridentctl

In my directory listing I have a few files I created: backend.json, which instructs the Trident driver how to talk to the Netapp appliance, and sc.yaml, which I can use to define a storageclass in my CodeReady Container OpenShift environment.  Let's take a quick look at backend.json:

{
"debug":true,
"managementLIF":"192.168.0.21",
"dataLIF":"192.168.0.22",
"svm":"test",
"backendName": "nas_backend",
"aggregate":"aggr0",
"username":"admin",
"password":"password",
"storageDriverName":"ontap-nas",
"storagePrefix":"schmaustech_",
"version":1
}

Now let's look at the sc.yaml file:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nas
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  snapshots: "True"
  provisioningType: "thin"
  encryption: "true"

At this point let's log in as kubeadmin, install the Trident CSI driver and then show what pods were created:

$ oc login -u kubeadmin -p kKdPx-pjmWe-b3kuu-jeZm3 https://api.crc.testing:6443
Login successful.

You have access to 53 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

$ tridentctl install -n trident-ns
INFO Starting Trident installation.                namespace=trident-ns
INFO Created namespace.                            namespace=trident-ns
INFO Created service account.                     
INFO Created cluster role.                        
INFO Created cluster role binding.                
INFO Added security context constraint user.       scc=privileged user=trident-csi
INFO Created custom resource definitions.          namespace=trident-ns
INFO Created Trident pod security policy.         
INFO Added finalizers to custom resource definitions. 
INFO Created Trident service.                     
INFO Created Trident secret.                      
INFO Created Trident deployment.                  
INFO Created Trident daemonset.                   
INFO Waiting for Trident pod to start.            
INFO Trident pod started.                          namespace=trident-ns pod=trident-csi-d8667b7fd-sgxz2
INFO Waiting for Trident REST interface.          
INFO Trident REST interface is up.                 version=20.01.1
INFO Trident installation succeeded.
              
$ oc get pods -n trident-ns -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE                 NOMINATED NODE   READINESS GATES
trident-csi-2n5lm             2/2     Running   0          2m32s   192.168.130.11   crc-45nsk-master-0              
trident-csi-d8667b7fd-sgxz2   4/4     Running   0          2m33s   10.128.0.101     crc-45nsk-master-0              


With Trident installed we can now use the backend.json we created to configure the driver to talk to the Netapp:

$ tridentctl create backend -f backend.json -n trident-ns
+-------------+----------------+--------------------------------------+--------+---------+
|    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+-------------+----------------+--------------------------------------+--------+---------+
| nas_backend | ontap-nas      | 6567da6d-23ca-4d09-9730-9a931fe21275 | online |       0 |
+-------------+----------------+--------------------------------------+--------+---------+


With the nas backend defined and online, we can go ahead and create the storageclass.  Note that we will also have to set this storageclass as the default, since Open Cluster Management will be looking for a default storageclass.

$ oc create -f sc.yaml
storageclass.storage.k8s.io/nas created

$ oc get storageclass
NAME   PROVISIONER             AGE
nas    csi.trident.netapp.io   6s

$ oc patch storageclass nas -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nas patched

$ oc get storageclass
NAME            PROVISIONER             AGE
nas (default)   csi.trident.netapp.io   30s


At this point we have all the requirements necessary to install Open Cluster Management, so we can pivot to the installation itself.  First let's clone the repository:

$ git clone https://github.com/open-cluster-management/deploy.git
Cloning into 'deploy'...
remote: Enumerating objects: 136, done.
remote: Counting objects: 100% (136/136), done.
remote: Compressing objects: 100% (78/78), done.
remote: Total 702 (delta 88), reused 86 (delta 55), pack-reused 566
Receiving objects: 100% (702/702), 466.26 KiB | 3.67 MiB/s, done.
Resolving deltas: 100% (371/371), done.

Next we will need to create a pull-secret.yaml that looks similar to the following; however, each user will need to obtain their own pull secret.  Directions on how to obtain the pull secret can be found here: https://github.com/open-cluster-management/deploy  The pull-secret.yaml should be created under deploy/prereqs:

apiVersion: v1
kind: Secret
metadata:
  name: multiclusterhub-operator-pull-secret
data:
  .dockerconfigjson: PULL-SECRET-ENCRYPTED-PASSWORD-HERE
type: kubernetes.io/dockerconfigjson

$ vi deploy/prereqs/pull-secret.yaml
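
Note that the .dockerconfigjson value is the base64-encoded pull secret rather than an encrypted password.  Assuming the downloaded pull secret was saved as pull-secret.txt, it can be encoded on Linux with something like:

$ base64 -w0 pull-secret.txt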

Next, export the kubeconfig for the CodeReady Container OpenShift environment:

$ export KUBECONFIG=/home/bschmaus/.crc/cache/crc_libvirt_4.3.8/kubeconfig

Now let's run the start.sh script inside the deploy directory of the Open Cluster Management repository we cloned.  Select the default snapshot or enter a known good version when prompted:

$ ./start.sh --watch
* Testing connection
* Using baseDomain: apps-crc.testing
* oc CLI Client Version: 4.3.10-202003280552-6a90d0a
OK: Default Storage Class defined
Find snapshot tags @ https://quay.io/repository/open-cluster-management/multiclusterhub-operator-index?tab=tags
Enter SNAPSHOT TAG: (Press ENTER for default: 1.0.0-SNAPSHOT-2020-03-31-02-16-43)

After accepting the default snapshot or specifying one, the installer will move along, apply some prerequisites, pull down the requirements for the multiclusterhub-operator, and bring them to a running state:

* Using: 1.0.0-SNAPSHOT-2020-03-31-02-16-43

* Applying SNAPSHOT to multiclusterhub-operator subscription
* Applying multicluster-hub-cr values

##### Applying prerequisites
namespace/hive created
namespace/open-cluster-management created
secret/multiclusterhub-operator-pull-secret created
Error from server (AlreadyExists): error when creating "prereqs/": serviceaccounts "default" already exists

##### Applying multicluster-hub-operator subscription #####
service/open-cluster-management-registry created
deployment.apps/open-cluster-management-registry created
operatorgroup.operators.coreos.com/default created
catalogsource.operators.coreos.com/open-cluster-management created
subscription.operators.coreos.com/multiclusterhub-operator-bundle created

#####
Wait for multiclusterhub-operator to reach running state (4min).
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn           0/1     ContainerCreating   0          1s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          4s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          7s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          11s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          14s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          17s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          20s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          23s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          26s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn                         0/1     ContainerCreating   0          29s
* multiclusterhub-operator is running

* Beginning deploy...
* Applying the multiclusterhub-operator to install Red Hat Advanced Cluster Management for Kubernetes multiclusterhub.operators.open-cluster-management.io/multiclusterhub created

#####
Wait for multicluster-operators-application to reach running state (4min).
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          31s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          34s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          37s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          41s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          44s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          47s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     ContainerCreating   0          50s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               0/4     Running             0          53s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               2/4     Running   0          56s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               2/4     Running   0          59s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               2/4     Running   0          62s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               2/4     Running   0          65s
* STATUS: multicluster-operators-application-5d68b77964-swfgp               3/4     Running   0          68s
* multicluster-operators-application is running

Once the multiclusterhub-operator is up and running it will proceed to deploy the Open Cluster Management pods:

NAME                                                              READY   STATUS    RESTARTS   AGE
etcd-operator-558567f79d-g65zj                                    3/3     Running   0          79s
multicluster-operators-application-5d68b77964-swfgp               4/4     Running   0          82s
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running   0          82s
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running   0          82s
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running   0          84s
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running   0          42s
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running   0          2m25s

Number of expected Pods : 7/35
Pods still NOT running  : 0
Detected ACM Console URL: https://


The output then turns into a watch-like display showing the progress of the pods being deployed:

NAME                                                              READY   STATUS              RESTARTS   AGE
application-chart-60648-applicationui-84f666fb-zhjq5              0/1     ContainerCreating   0          30s
cert-manager-8fae3-6cd9985bd6-jtwrk                               1/1     Running             0          64s
cert-manager-webhook-0d3cc-cainjector-5c9846b48b-pt774            1/1     Running             0          40s
cert-manager-webhook-85dbd49676-hc9qr                             1/1     Running             0          40s
configmap-watcher-42800-b86cbf8cb-xpw7g                           1/1     Running             0          41s
etcd-cluster-4z7s9dhx9j                                           0/1     PodInitializing     0          49s
etcd-operator-558567f79d-g65zj                                    3/3     Running             0          2m32s
grc-535c7-grcui-698dc78d6f-62bbm                                  0/1     ContainerCreating   0          31s
grc-535c7-grcuiapi-847f5df869-q62tb                               0/1     ContainerCreating   0          31s
grc-535c7-policy-postinstall-kglbr                                0/1     ContainerCreating   0          30s
grc-535c7-policy-propogator-6f8684c78-7mm8b                       0/1     ContainerCreating   0          31s
mcm-apiserver-6799bddcf5-645nd                                    0/1     ContainerCreating   0          38s
mcm-apiserver-7bc995d77-pl4qr                                     0/1     ContainerCreating   0          49s
mcm-controller-8555975b78-m9nst                                   0/1     ContainerCreating   0          49s
mcm-webhook-8475bb4fd6-8vhsb                                      0/1     ContainerCreating   0          48s
multicluster-operators-application-5d68b77964-swfgp               4/4     Running             0          2m35s
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running             0          2m35s
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running             0          2m35s
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running             0          2m37s
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running             0          115s
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running             0          3m38s
rcm-controller-5cf46f6f6b-8c5nc                                   0/1     ContainerCreating   0          31s

Number of expected Pods : 22/35
Pods still NOT running  : 11
Detected ACM Console URL: https://


Once the installation is complete, a summary of what was deployed similar to the following should be shown:

NAME                                                              READY   STATUS    RESTARTS   AGE
application-chart-60648-applicationui-84f666fb-zhjq5              1/1     Running   0          5m27s
cert-manager-8fae3-6cd9985bd6-jtwrk                               1/1     Running   0          6m1s
cert-manager-webhook-0d3cc-cainjector-5c9846b48b-pt774            1/1     Running   0          5m37s
cert-manager-webhook-85dbd49676-hc9qr                             1/1     Running   0          5m37s
configmap-watcher-42800-b86cbf8cb-xpw7g                           1/1     Running   0          5m38s
console-chart-eef51-consoleapi-64ff64d5b6-sjncl                   1/1     Running   0          3m21s
console-chart-eef51-consoleui-57b5955d98-kfmd2                    1/1     Running   0          3m21s
console-header-85d8f49c7b-twt9v                                   1/1     Running   0          3m21s
etcd-cluster-4z7s9dhx9j                                           1/1     Running   0          5m46s
etcd-cluster-bw25r8ph5p                                           1/1     Running   0          4m41s
etcd-cluster-mtk6fpm9bm                                           1/1     Running   0          4m9s
etcd-operator-558567f79d-g65zj                                    3/3     Running   0          7m29s
grc-535c7-grcui-698dc78d6f-8h57n                                  1/1     Running   0          2m2s
grc-535c7-grcuiapi-847f5df869-fjk55                               1/1     Running   0          2m2s
grc-535c7-policy-propogator-6f8684c78-x724n                       1/1     Running   0          2m2s
kui-web-terminal-66f6c5b89-mfd8p                                  1/1     Running   0          3m18s
management-ingress-80cda-55dcd89b87-nbkpc                         2/2     Running   0          3m21s
mcm-apiserver-7bc995d77-pl4qr                                     1/1     Running   0          5m46s
mcm-controller-8555975b78-m9nst                                   1/1     Running   0          5m46s
mcm-webhook-8475bb4fd6-8vhsb                                      1/1     Running   0          5m45s
multicluster-mongodb-0                                            1/1     Running   0          3m15s
multicluster-operators-application-5d68b77964-swfgp               4/4     Running   3          7m32s
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running   0          7m32s
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running   0          7m32s
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running   0          7m34s
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running   0          6m52s
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running   0          8m35s
rcm-controller-5cf46f6f6b-8c5nc                                   1/1     Running   3          5m28s
search-operator-544f7c6cf6-vwrrv                                  1/1     Running   0          3m11s
search-prod-d314c-redisgraph-5c8fc4d6dc-2xx6g                     1/1     Running   0          3m11s
search-prod-d314c-search-aggregator-7c9fd68949-jk7cz              1/1     Running   0          3m11s
search-prod-d314c-search-api-7df556b7d7-xkn2k                     1/1     Running   0          3m11s
search-prod-d314c-search-collector-9b44f9f5c-pzrkb                1/1     Running   0          3m11s
topology-592b5-topology-5d5c75c484-f2str                          1/1     Running   0          3m23s
topology-592b5-topologyapi-6fdcc8dc4c-c9v49                       1/1     Running   0          3m23s

Number of expected Pods : 35/35
Pods still NOT running  : 0
Detected ACM Console URL: https://multicloud-console.apps-crc.testing


We can further validate the deployment by looking at the pods in the following two namespaces: open-cluster-management and hive.

$ oc get pods -n open-cluster-management
NAME                                                              READY   STATUS    RESTARTS   AGE
application-chart-60648-applicationui-84f666fb-zhjq5              1/1     Running   0          18m
cert-manager-8fae3-6cd9985bd6-jtwrk                               1/1     Running   0          18m
cert-manager-webhook-0d3cc-cainjector-5c9846b48b-pt774            1/1     Running   0          18m
cert-manager-webhook-85dbd49676-hc9qr                             1/1     Running   0          18m
configmap-watcher-42800-b86cbf8cb-xpw7g                           1/1     Running   0          18m
console-chart-eef51-consoleapi-64ff64d5b6-sjncl                   1/1     Running   0          16m
console-chart-eef51-consoleui-57b5955d98-kfmd2                    1/1     Running   0          16m
console-header-85d8f49c7b-twt9v                                   1/1     Running   0          16m
etcd-cluster-4z7s9dhx9j                                           1/1     Running   0          18m
etcd-cluster-bw25r8ph5p                                           1/1     Running   0          17m
etcd-cluster-mtk6fpm9bm                                           1/1     Running   0          17m
etcd-operator-558567f79d-g65zj                                    3/3     Running   0          20m
grc-535c7-grcui-698dc78d6f-8h57n                                  1/1     Running   0          14m
grc-535c7-grcuiapi-847f5df869-fjk55                               1/1     Running   0          14m
grc-535c7-policy-propogator-6f8684c78-x724n                       1/1     Running   0          14m
kui-web-terminal-66f6c5b89-mfd8p                                  1/1     Running   0          16m
management-ingress-80cda-55dcd89b87-nbkpc                         2/2     Running   0          16m
mcm-apiserver-7bc995d77-pl4qr                                     1/1     Running   0          18m
mcm-controller-8555975b78-m9nst                                   1/1     Running   0          18m
mcm-webhook-8475bb4fd6-8vhsb                                      1/1     Running   0          18m
multicluster-mongodb-0                                            1/1     Running   0          16m
multicluster-operators-application-5d68b77964-swfgp               4/4     Running   3          20m
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running   0          20m
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running   0          20m
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running   0          20m
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running   0          19m
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running   0          21m
rcm-controller-5cf46f6f6b-8c5nc                                   1/1     Running   3          18m
search-operator-544f7c6cf6-vwrrv                                  1/1     Running   0          16m
search-prod-d314c-redisgraph-5c8fc4d6dc-2xx6g                     1/1     Running   0          16m
search-prod-d314c-search-aggregator-7c9fd68949-jk7cz              1/1     Running   0          16m
search-prod-d314c-search-api-7df556b7d7-xkn2k                     1/1     Running   0          16m
search-prod-d314c-search-collector-9b44f9f5c-pzrkb                1/1     Running   0          16m
topology-592b5-topology-5d5c75c484-f2str                          1/1     Running   0          16m
topology-592b5-topologyapi-6fdcc8dc4c-c9v49                       1/1     Running   0          16m

$ oc get pods -n hive
NAME                                READY   STATUS    RESTARTS   AGE
hive-controllers-74894574b5-m89xw   1/1     Running   0          15m
hive-operator-7cd7488667-jcb2m      1/1     Running   1          18m
hiveadmission-7965ffd69-dx9zg       1/1     Running   0          15m
hiveadmission-7965ffd69-mmn7t       1/1     Running   0          15m
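
As an optional extra check, oc wait can be used to block until every pod in the open-cluster-management namespace reports Ready.  This is only a sketch; the 10 minute timeout is an assumption and can be adjusted:

$ oc wait --for=condition=Ready pods --all -n open-cluster-management --timeout=600s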

Everything looks good from the command line, so let's validate the installation one final way by looking at the web UI.  Since the hypervisor host that the environment is running on does not have direct network connectivity, I am going to leverage a VNC server on the hypervisor host to access the web UI.  I am not going to go into those details because there are plenty of documented ways to use VNC on the web.  However, once a web browser is available via vncserver, the kubeadmin username and password will be required to log in.  Those credentials can be found in the CodeReady Containers kubeadmin-password file:

$ cat /home/bschmaus/.crc/cache/crc_libvirt_4.3.8/kubeadmin-password
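
If desired, the same credentials can also be verified from the command line before using them in the browser.  This is just a quick sketch, assuming the default CodeReady Containers API endpoint of api.crc.testing:6443:

$ oc login -u kubeadmin -p "$(cat /home/bschmaus/.crc/cache/crc_libvirt_4.3.8/kubeadmin-password)" https://api.crc.testing:6443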

Below is an example of the login screen displayed when accessing the URL from the Open Cluster Management installation log:


And once logged in we find the welcome screen:



At this point we are ready to configure and deploy an OpenShift cluster using Open Cluster Management; however, I will save that for another blog on another day!

Wednesday, March 25, 2020

Netapp Simulator on Red Hat KVM


Anytime one is doing integration work with OpenShift or OpenStack, it seems there are always storage partner use cases.  Netapp is often one of those use cases, but in a lot of circumstances the actual Netapp hardware is not available for testing features and functionality.  That is the beauty of the simulator image that Netapp provides: it allows anyone to test Netapp feature functionality, and in my case integration, without having the real hardware.  The drawback, however, is that I do not want to run VMware or VirtualBox just to use the simulator.

Fortunately I have come up with a method that allows me to get the simulator up and running on a generic RHEL 8 KVM hypervisor.  The following blog will walk through the setup process.

The first step is to get the Netapp simulator from Netapp's support site: https://mysupport.netapp.com

The Netapp simulator comes in an OVA format, which is nothing more than a glorified tar file.  So the next step is to untar the file to get access to the contents:

# ls -1
vsim-netapp-DOT9.6-cm_nodar.ova
# tar -xvf vsim-netapp-DOT9.6-cm_nodar.ova
vsim-netapp-DOT9.6-cm.ovf
vsim-netapp-DOT9.6-cm.mf
vsim-netapp-DOT9.6-cm-disk1.vmdk
vsim-netapp-DOT9.6-cm-disk2.vmdk
vsim-netapp-DOT9.6-cm-disk3.vmdk
vsim-netapp-DOT9.6-cm-disk4.vmdk

The extraction shows there are four VMDK disks, which we now need to convert to qcow2 using qemu-img:

# qemu-img convert -f vmdk -O qcow2 vsim-netapp-DOT9.6-cm-disk1.vmdk vsim-netapp-DOT9.6-cm-disk1.qcow2
# qemu-img convert -f vmdk -O qcow2 vsim-netapp-DOT9.6-cm-disk2.vmdk vsim-netapp-DOT9.6-cm-disk2.qcow2
# qemu-img convert -f vmdk -O qcow2 vsim-netapp-DOT9.6-cm-disk3.vmdk vsim-netapp-DOT9.6-cm-disk3.qcow2
# qemu-img convert -f vmdk -O qcow2 vsim-netapp-DOT9.6-cm-disk4.vmdk vsim-netapp-DOT9.6-cm-disk4.qcow2
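
If preferred, the four conversions can also be run in a single shell loop instead of typing the qemu-img command four times:

# for i in 1 2 3 4; do qemu-img convert -f vmdk -O qcow2 vsim-netapp-DOT9.6-cm-disk${i}.vmdk vsim-netapp-DOT9.6-cm-disk${i}.qcow2; done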

Next, copy the converted qcow2 images to a KVM hypervisor where they are accessible.  In my case I am just going to copy them over to the default storage pool location, /var/lib/libvirt/images:

# cp *qcow2 /var/lib/libvirt/images
# ls -1 /var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk[1-4].qcow2
/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk1.qcow2
/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk2.qcow2
/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk3.qcow2
/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk4.qcow2 
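
One note of caution: on an SELinux-enforcing hypervisor, the copied images may need their SELinux labels restored before libvirt can use them.  Restoring the default contexts on the storage pool directory is usually enough:

# restorecon -Rv /var/lib/libvirt/images/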

Now that we have the images in place we can use virt-manager to create our virtual machine.  I am using virt-manager because I want to ensure we can visually see the requirements needed for this environment.  The configuration must be exact or the underlying FreeBSD kernel will panic on boot.

First, let's look at the CPU configuration.  For the simulator to work it requires 2 CPU cores with host-passthrough, and I found that manually setting the CPU topology worked.  The configuration should look like the one below:


Next are the memory requirements.  Here one can configure anything more than 5GB of memory.  I used 10GB of memory in this example:


One of the most important aspects of the simulator is the networking configuration.  Here we need to configure four network interfaces, each using the e1000 driver.  The first two network interfaces should be configured with a host-only network source like the example below:


The third and fourth network interfaces should have their network source configured for an external network.  The third network interface actually ends up becoming e0c inside the simulator and is the management interface.  The fourth network interface will be e0d and could be used as an access point for a vserver.  Below is an example of what the third and fourth network interfaces look like in my environment:


Earlier we converted four VMDK images to qcow2 and placed them in a libvirt accessible location.  Now we need to add those disks in their numbered order to the virtual machine.  Each disk should be added using the IDE disk bus.  Below is an example of what the configuration should look like for each disk:


Finally, switch the console from Spice to VNC as shown below:
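
As an aside, the complete set of settings described above can also be expressed on the command line with virt-install instead of clicking through virt-manager.  The following is only a sketch under my own assumptions: the guest name netapp-sim, an isolated libvirt network named hostonly for the first two NICs, and an existing bridge br0 for the external third and fourth NICs are all placeholders that should be adjusted for the local environment:

# Define an isolated "hostonly" network for the first two interfaces
# (the 192.168.100.0/24 addressing is an arbitrary choice)
cat > hostonly-net.xml <<'EOF'
<network>
  <name>hostonly</name>
  <ip address="192.168.100.1" netmask="255.255.255.0"/>
</network>
EOF
virsh net-define hostonly-net.xml
virsh net-start hostonly
virsh net-autostart hostonly

# Create the simulator VM: 2 cores with host-passthrough, 10GB of memory,
# four e1000 NICs, the four qcow2 disks on the IDE bus and a VNC console.
# The i440fx "pc" machine type is requested because the q35 default on
# RHEL 8 does not provide an IDE bus.
virt-install \
  --name netapp-sim \
  --memory 10240 \
  --vcpus 2,sockets=1,cores=2,threads=1 \
  --cpu host-passthrough \
  --machine pc \
  --os-variant generic \
  --import \
  --disk path=/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk1.qcow2,format=qcow2,bus=ide \
  --disk path=/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk2.qcow2,format=qcow2,bus=ide \
  --disk path=/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk3.qcow2,format=qcow2,bus=ide \
  --disk path=/var/lib/libvirt/images/vsim-netapp-DOT9.6-cm-disk4.qcow2,format=qcow2,bus=ide \
  --network network=hostonly,model=e1000 \
  --network network=hostonly,model=e1000 \
  --network bridge=br0,model=e1000 \
  --network bridge=br0,model=e1000 \
  --graphics vnc,listen=0.0.0.0 \
  --noautoconsole

Keep in mind that virt-install starts the guest immediately, so have virt-viewer or the VNC console ready in order to catch the Control-C prompt mentioned in the next step.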


At this point the Netapp simulator virtual machine should be ready to boot.  So let's start the virtual machine and be ready to press Control-C when prompted to get into the boot menu:


Once in the boot menu, select option 4 to wipe the drives and configuration:


The Netapp simulator will ask for confirmation that it should really wipe the configuration and drives:


Once confirmed, the Netapp simulator will reboot the virtual machine and then go about wiping the configuration and drives.  Once complete, it will present the cluster creation wizard:


At this point the wizard will be used to configure the new Netapp filer, just like one would with a real filer.