Wouldn't it be great to control a set of OpenShift or Kubernetes clusters deployed across a variety of cloud and/or bare-metal platforms from a centralized hub? This is exactly what Open Cluster Management aims to achieve, but getting it up and running in production requires an already deployed OpenShift or Kubernetes cluster. In the following blog I will describe how to configure CodeReady Containers and deploy Open Cluster Management in that environment for non-production feature and functionality testing purposes.
CodeReady Containers brings a minimal, preconfigured OpenShift 4.1 or newer cluster to your local laptop or desktop computer for development and testing purposes. It is delivered as a Red Hat Enterprise Linux virtual machine that supports native hypervisors for Linux, macOS, and Windows 10. In the example below we will be using a Red Hat Enterprise Linux 8 host with virtualization services enabled to launch the CodeReady Containers environment that will eventually run Open Cluster Management.
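Before grabbing CodeReady Containers, it is worth confirming the RHEL 8 host actually exposes hardware virtualization and has the libvirt stack running. A quick sanity check along these lines works (package names can vary slightly between RHEL releases, so treat this as a sketch rather than the exact required set):

# Confirm the CPU exposes hardware virtualization extensions (vmx for Intel, svm for AMD);
# a non-zero count means virtualization is available
$ grep -E -c '(vmx|svm)' /proc/cpuinfo

# Install and enable the KVM/libvirt virtualization packages
$ sudo dnf install -y qemu-kvm libvirt virt-install
$ sudo systemctl enable --now libvirtd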
The first step is to obtain the CodeReady Containers distribution from Red Hat at the following location: https://developers.redhat.com/products/codeready-containers/overview Click the download button and you will be redirected to a page where the platform download of choice can be selected.
Once we have downloaded crc-linux-amd64.tar.xz we can extract it, change into the extracted directory, and run crc setup:
$ tar -xf crc-linux-amd64.tar.xz
$ cd crc-linux-1.8.0-amd64/
$ crc setup
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if CRC bundle is cached in '$HOME/.crc'
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking for obsolete crc-driver-libvirt
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
Setup is complete, you can now run 'crc start' to start the OpenShift cluster
Now that setup is complete, I want to modify a few configuration values before starting the virtual machine. Below we increase the number of CPUs and the amount of memory allocated to the virtual machine, since the defaults are not enough to run Open Cluster Management in CodeReady Containers:
$ crc config set cpus 6
Changes to configuration property 'cpus' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.

$ crc config set memory 24000
Changes to configuration property 'memory' are only applied when a new CRC instance is created.
If you already have a CRC instance, then for this configuration change to take effect, delete the CRC instance with 'crc delete' and start a new one with 'crc start'.
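The values can be confirmed with crc config view before the instance is created (the output shown below is illustrative; the exact formatting depends on the crc version):

$ crc config view
- cpus    : 6
- memory  : 24000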
Now that we have increased our CPU and memory resources, we can start the virtual machine that runs the CodeReady Containers single-node OpenShift environment on which Open Cluster Management will be installed. During startup the image pull secret will be requested; it can be obtained from the same download page where CodeReady Containers was retrieved.
$ crc start
INFO Checking if oc binary is cached
INFO Checking if podman remote binary is cached
INFO Checking if running as non-root
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if libvirt is enabled
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if /etc/NetworkManager/conf.d/crc-nm-dnsmasq.conf exists
INFO Checking if /etc/NetworkManager/dnsmasq.d/crc.conf exists
? Image pull secret [? for help] ********************************
INFO Loading bundle: crc_libvirt_4.3.8.crcbundle ...
INFO Checking size of the disk image /home/bschmaus/.crc/cache/crc_libvirt_4.3.8/crc.qcow2 ...
INFO Creating CodeReady Containers VM for OpenShift 4.3.8...
INFO Verifying validity of the cluster certificates ...
INFO Check internal and public DNS query ...
INFO Check DNS query from host ...
INFO Copying kubeconfig file to instance dir ...
INFO Adding user's pull secret ...
INFO Updating cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p kKdPx-pjmWe-b3kuu-jeZm3 https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
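If you want to check on the state of the VM and the embedded cluster at any later point, crc status gives a quick summary (the values below are illustrative placeholders):

$ crc status
CRC VM:          Running
OpenShift:       Running (v4.3.8)
Disk Usage:      ...
Cache Usage:     ...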
Before we continue, let's confirm we can access our CodeReady Containers OpenShift cluster:
$ crc oc-env
export PATH="/home/bschmaus/.crc/bin:$PATH"
# Run this command to configure your shell:
# eval $(crc oc-env)

$ eval $(crc oc-env)

$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.3.8     True        False         False      17d
cloud-credential                           4.3.8     True        False         False      17d
cluster-autoscaler                         4.3.8     True        False         False      17d
console                                    4.3.8     True        False         False      17d
dns                                        4.3.8     True        False         False      8m46s
image-registry                             4.3.8     True        False         False      17d
ingress                                    4.3.8     True        False         False      17d
insights                                   4.3.8     True        False         False      17d
kube-apiserver                             4.3.8     True        False         False      17d
kube-controller-manager                    4.3.8     True        False         False      17d
kube-scheduler                             4.3.8     True        False         False      17d
machine-api                                4.3.8     True        False         False      17d
machine-config                             4.3.8     True        False         False      17d
marketplace                                4.3.8     True        False         False      8m17s
monitoring                                 4.3.8     True        False         False      17d
network                                    4.3.8     True        False         False      17d
node-tuning                                4.3.8     True        False         False      8m42s
openshift-apiserver                        4.3.8     True        False         False      17d
openshift-controller-manager               4.3.8     True        False         False      17d
openshift-samples                          4.3.8     True        False         False      17d
operator-lifecycle-manager                 4.3.8     True        False         False      17d
operator-lifecycle-manager-catalog         4.3.8     True        False         False      17d
operator-lifecycle-manager-packageserver   4.3.8     True        False         False      8m21s
service-ca                                 4.3.8     True        False         False      17d
service-catalog-apiserver                  4.3.8     True        False         False      17d
service-catalog-controller-manager         4.3.8     True        False         False      17d
storage                                    4.3.8     True        False         False      17d
So far everything looks good, so let's continue.
Another requirement for Open Cluster Management is a default storage class. In a previous blog I discussed how to enable a Netapp Simulator in KVM. Now I can finally use it along with the Trident CSI driver, which can be obtained here: https://github.com/NetApp/trident/releases I am going with the current version as of this writing, which is v20.01.1. Begin by downloading the release to a host that has access to the CodeReady Containers cluster we just started. Then extract the release:
$ tar -xzf trident-installer-20.01.1.tar.gz
$ cd trident-installer/
$ ls
backend.json  extras  pvc.yaml  sample-input  sc.yaml  tridentctl
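The tridentctl commands later in this post assume the binary is somewhere on the PATH; a quick way to arrange that (the target directory here is just one common choice, adjust to taste) is:

# Copy the extracted tridentctl binary onto the PATH so it can be run from any directory
$ sudo cp tridentctl /usr/local/bin/
$ sudo chmod +x /usr/local/bin/tridentctl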
In my directory listing there are a few files I created: backend.json, which instructs the Trident driver how to talk to the Netapp appliance, and sc.yaml, which defines a storageclass in my CodeReady Containers OpenShift environment. Let's take a quick look at backend.json:
{ "debug":true, "managementLIF":"192.168.0.21", "dataLIF":"192.168.0.22", "svm":"test", "backendName": "nas_backend", "aggregate":"aggr0", "username":"admin", "password":"password", "storageDriverName":"ontap-nas", "storagePrefix":"schmaustech_", "version":1 }
Now let's look at the sc.yaml file:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nas
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-nas"
  snapshots: "True"
  provisioningType: "thin"
  encryption: "true"
At this point let's log in as kubeadmin, install the Trident CSI driver, and then show what pods were created:
$ oc login -u kubeadmin -p kKdPx-pjmWe-b3kuu-jeZm3 https://api.crc.testing:6443
Login successful.

You have access to 53 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

$ tridentctl install -n trident-ns
INFO Starting Trident installation.                      namespace=trident-ns
INFO Created namespace.                                  namespace=trident-ns
INFO Created service account.
INFO Created cluster role.
INFO Created cluster role binding.
INFO Added security context constraint user.             scc=privileged user=trident-csi
INFO Created custom resource definitions.                namespace=trident-ns
INFO Created Trident pod security policy.
INFO Added finalizers to custom resource definitions.
INFO Created Trident service.
INFO Created Trident secret.
INFO Created Trident deployment.
INFO Created Trident daemonset.
INFO Waiting for Trident pod to start.
INFO Trident pod started.                                namespace=trident-ns pod=trident-csi-d8667b7fd-sgxz2
INFO Waiting for Trident REST interface.
INFO Trident REST interface is up.                       version=20.01.1
INFO Trident installation succeeded.

$ oc get pods -n trident-ns -o wide
NAME                          READY   STATUS    RESTARTS   AGE     IP               NODE                 NOMINATED NODE   READINESS GATES
trident-csi-2n5lm             2/2     Running   0          2m32s   192.168.130.11   crc-45nsk-master-0
trident-csi-d8667b7fd-sgxz2   4/4     Running   0          2m33s   10.128.0.101     crc-45nsk-master-0
With Trident installed we can now use the backend.json we created to configure the driver to talk to the Netapp:
$ tridentctl create backend -f backend.json -n trident-ns
+-------------+----------------+--------------------------------------+--------+---------+
|    NAME     | STORAGE DRIVER |                 UUID                 | STATE  | VOLUMES |
+-------------+----------------+--------------------------------------+--------+---------+
| nas_backend | ontap-nas      | 6567da6d-23ca-4d09-9730-9a931fe21275 | online |       0 |
+-------------+----------------+--------------------------------------+--------+---------+
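If you ever need to recheck the backend later, tridentctl can list what it knows about; the command below should report the same nas_backend entry in an online state:

$ tridentctl get backend -n trident-ns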
With the nas backend defined and online, we can go ahead and create the storageclass. Note that we will also have to mark this storageclass as the default, because Open Cluster Management will be looking for a default storageclass.
$ oc create -f sc.yaml
storageclass.storage.k8s.io/nas created

$ oc get storageclass
NAME   PROVISIONER             AGE
nas    csi.trident.netapp.io   6s

$ oc patch storageclass nas -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
storageclass.storage.k8s.io/nas patched

$ oc get storageclass
NAME            PROVISIONER             AGE
nas (default)   csi.trident.netapp.io   30s
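Optionally, dynamic provisioning can be sanity checked before moving on by creating a small claim against the new default storageclass. The pvc.yaml shipped in the installer directory serves a similar purpose; a minimal claim might look like the sketch below (the name nas-test is just an example):

# Example claim against the nas storageclass created above
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nas-test
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nas

Creating the claim with oc create -f and then running oc get pvc nas-test should show it reach a Bound state within a few seconds, after which it can be deleted.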
At this point we have all the requirements necessary to install Open Cluster Management, so we can pivot to the installation itself. First let's clone the repository:
$ git clone https://github.com/open-cluster-management/deploy.git
Cloning into 'deploy'...
remote: Enumerating objects: 136, done.
remote: Counting objects: 100% (136/136), done.
remote: Compressing objects: 100% (78/78), done.
remote: Total 702 (delta 88), reused 86 (delta 55), pack-reused 566
Receiving objects: 100% (702/702), 466.26 KiB | 3.67 MiB/s, done.
Resolving deltas: 100% (371/371), done.
Next we will need to create a pull-secret.yaml that looks similar to the following; each user will need to obtain their own pull secret. Directions on how to obtain the pull secret can be found here: https://github.com/open-cluster-management/deploy The pull-secret.yaml should be created under deploy/prereqs:
$ vi deploy/prereqs/pull-secret.yaml

apiVersion: v1
kind: Secret
metadata:
  name: multiclusterhub-operator-pull-secret
data:
  .dockerconfigjson: PULL-SECRET-ENCRYPTED-PASSWORD-HERE
type: kubernetes.io/dockerconfigjson
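For reference, the .dockerconfigjson value is the base64-encoded content of the pull secret file downloaded from cloud.redhat.com. Assuming the file was saved as pull-secret.txt (the file name is just an example), it can be generated on a single line like so:

# Encode the pull secret on one line for pasting into pull-secret.yaml
$ base64 -w0 pull-secret.txt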
Next, export the kubeconfig for the CodeReady Containers OpenShift environment:
$ export KUBECONFIG=/home/bschmaus/.crc/cache/crc_libvirt_4.3.8/kubeconfig
Now let's run the start.sh script inside the deploy directory of the Open Cluster Management repository we cloned. Select the default snapshot or enter a known good version when prompted:
$ ./start.sh --watch
* Testing connection
* Using baseDomain: apps-crc.testing
* oc CLI Client Version: 4.3.10-202003280552-6a90d0a
OK: Default Storage Class defined
Find snapshot tags @ https://quay.io/repository/open-cluster-management/multiclusterhub-operator-index?tab=tags
Enter SNAPSHOT TAG: (Press ENTER for default: 1.0.0-SNAPSHOT-2020-03-31-02-16-43)
After accepting the default snapshot or specifying another one, the installer applies some prerequisites, pulls down the requirements for the multiclusterhub-operator, and brings them to a running state:
* Using: 1.0.0-SNAPSHOT-2020-03-31-02-16-43
* Applying SNAPSHOT to multiclusterhub-operator subscription
* Applying multicluster-hub-cr values
##### Applying prerequisites
namespace/hive created
namespace/open-cluster-management created
secret/multiclusterhub-operator-pull-secret created
Error from server (AlreadyExists): error when creating "prereqs/": serviceaccounts "default" already exists
##### Applying multicluster-hub-operator subscription #####
service/open-cluster-management-registry created
deployment.apps/open-cluster-management-registry created
operatorgroup.operators.coreos.com/default created
catalogsource.operators.coreos.com/open-cluster-management created
subscription.operators.coreos.com/multiclusterhub-operator-bundle created
##### Wait for multiclusterhub-operator to reach running state (4min).
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: Waiting
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   1s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   4s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   7s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   11s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   14s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   17s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   20s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   23s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   26s
* STATUS: multiclusterhub-operator-54d98758f5-xdhkn   0/1   ContainerCreating   0   29s
* multiclusterhub-operator is running
* Beginning deploy...
* Applying the multiclusterhub-operator to install Red Hat Advanced Cluster Management for Kubernetes
multiclusterhub.operators.open-cluster-management.io/multiclusterhub created
##### Wait for multicluster-operators-application to reach running state (4min).
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   31s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   34s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   37s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   41s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   44s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   47s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   ContainerCreating   0   50s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   0/4   Running             0   53s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   2/4   Running             0   56s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   2/4   Running             0   59s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   2/4   Running             0   62s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   2/4   Running             0   65s
* STATUS: multicluster-operators-application-5d68b77964-swfgp   3/4   Running             0   68s
* multicluster-operators-application is running
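If the wait loop sits on Waiting for longer than expected, it can help to check from a second terminal that OLM actually created the catalog source, subscription, and resulting cluster service version, and that the pods are progressing. A couple of generic checks (resource names will vary by snapshot) look like this:

$ oc get pods -n open-cluster-management
$ oc get catalogsource,subscriptions.operators.coreos.com,csv -n open-cluster-management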
Once the multiclusterhub-operator is up and running it will proceed to deploy the Open Cluster Management pods:
NAME                                                              READY   STATUS    RESTARTS   AGE
etcd-operator-558567f79d-g65zj                                    3/3     Running   0          79s
multicluster-operators-application-5d68b77964-swfgp               4/4     Running   0          82s
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running   0          82s
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running   0          82s
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running   0          84s
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running   0          42s
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running   0          2m25s

Number of expected Pods : 7/35
Pods still NOT running  : 0
Detected ACM Console URL: https://
The output then switches to a watch-like display that shows the progress of the pods being deployed:
NAME                                                              READY   STATUS              RESTARTS   AGE
application-chart-60648-applicationui-84f666fb-zhjq5              0/1     ContainerCreating   0          30s
cert-manager-8fae3-6cd9985bd6-jtwrk                               1/1     Running             0          64s
cert-manager-webhook-0d3cc-cainjector-5c9846b48b-pt774            1/1     Running             0          40s
cert-manager-webhook-85dbd49676-hc9qr                             1/1     Running             0          40s
configmap-watcher-42800-b86cbf8cb-xpw7g                           1/1     Running             0          41s
etcd-cluster-4z7s9dhx9j                                           0/1     PodInitializing     0          49s
etcd-operator-558567f79d-g65zj                                    3/3     Running             0          2m32s
grc-535c7-grcui-698dc78d6f-62bbm                                  0/1     ContainerCreating   0          31s
grc-535c7-grcuiapi-847f5df869-q62tb                               0/1     ContainerCreating   0          31s
grc-535c7-policy-postinstall-kglbr                                0/1     ContainerCreating   0          30s
grc-535c7-policy-propogator-6f8684c78-7mm8b                       0/1     ContainerCreating   0          31s
mcm-apiserver-6799bddcf5-645nd                                    0/1     ContainerCreating   0          38s
mcm-apiserver-7bc995d77-pl4qr                                     0/1     ContainerCreating   0          49s
mcm-controller-8555975b78-m9nst                                   0/1     ContainerCreating   0          49s
mcm-webhook-8475bb4fd6-8vhsb                                      0/1     ContainerCreating   0          48s
multicluster-operators-application-5d68b77964-swfgp               4/4     Running             0          2m35s
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running             0          2m35s
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running             0          2m35s
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running             0          2m37s
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running             0          115s
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running             0          3m38s
rcm-controller-5cf46f6f6b-8c5nc                                   0/1     ContainerCreating   0          31s

Number of expected Pods : 22/35
Pods still NOT running  : 11
Detected ACM Console URL: https://
Once the installation is complete, a summary of what was deployed should be displayed, similar to the following:
NAME                                                              READY   STATUS    RESTARTS   AGE
application-chart-60648-applicationui-84f666fb-zhjq5              1/1     Running   0          5m27s
cert-manager-8fae3-6cd9985bd6-jtwrk                               1/1     Running   0          6m1s
cert-manager-webhook-0d3cc-cainjector-5c9846b48b-pt774            1/1     Running   0          5m37s
cert-manager-webhook-85dbd49676-hc9qr                             1/1     Running   0          5m37s
configmap-watcher-42800-b86cbf8cb-xpw7g                           1/1     Running   0          5m38s
console-chart-eef51-consoleapi-64ff64d5b6-sjncl                   1/1     Running   0          3m21s
console-chart-eef51-consoleui-57b5955d98-kfmd2                    1/1     Running   0          3m21s
console-header-85d8f49c7b-twt9v                                   1/1     Running   0          3m21s
etcd-cluster-4z7s9dhx9j                                           1/1     Running   0          5m46s
etcd-cluster-bw25r8ph5p                                           1/1     Running   0          4m41s
etcd-cluster-mtk6fpm9bm                                           1/1     Running   0          4m9s
etcd-operator-558567f79d-g65zj                                    3/3     Running   0          7m29s
grc-535c7-grcui-698dc78d6f-8h57n                                  1/1     Running   0          2m2s
grc-535c7-grcuiapi-847f5df869-fjk55                               1/1     Running   0          2m2s
grc-535c7-policy-propogator-6f8684c78-x724n                       1/1     Running   0          2m2s
kui-web-terminal-66f6c5b89-mfd8p                                  1/1     Running   0          3m18s
management-ingress-80cda-55dcd89b87-nbkpc                         2/2     Running   0          3m21s
mcm-apiserver-7bc995d77-pl4qr                                     1/1     Running   0          5m46s
mcm-controller-8555975b78-m9nst                                   1/1     Running   0          5m46s
mcm-webhook-8475bb4fd6-8vhsb                                      1/1     Running   0          5m45s
multicluster-mongodb-0                                            1/1     Running   0          3m15s
multicluster-operators-application-5d68b77964-swfgp               4/4     Running   3          7m32s
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running   0          7m32s
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running   0          7m32s
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running   0          7m34s
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running   0          6m52s
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running   0          8m35s
rcm-controller-5cf46f6f6b-8c5nc                                   1/1     Running   3          5m28s
search-operator-544f7c6cf6-vwrrv                                  1/1     Running   0          3m11s
search-prod-d314c-redisgraph-5c8fc4d6dc-2xx6g                     1/1     Running   0          3m11s
search-prod-d314c-search-aggregator-7c9fd68949-jk7cz              1/1     Running   0          3m11s
search-prod-d314c-search-api-7df556b7d7-xkn2k                     1/1     Running   0          3m11s
search-prod-d314c-search-collector-9b44f9f5c-pzrkb                1/1     Running   0          3m11s
topology-592b5-topology-5d5c75c484-f2str                          1/1     Running   0          3m23s
topology-592b5-topologyapi-6fdcc8dc4c-c9v49                       1/1     Running   0          3m23s

Number of expected Pods : 35/35
Pods still NOT running  : 0
Detected ACM Console URL: https://multicloud-console.apps-crc.testing
We can further validate the deployment by looking at the following two namespaces: open-cluster-management and hive.
$ oc get pods -n open-cluster-management
NAME                                                              READY   STATUS    RESTARTS   AGE
application-chart-60648-applicationui-84f666fb-zhjq5              1/1     Running   0          18m
cert-manager-8fae3-6cd9985bd6-jtwrk                               1/1     Running   0          18m
cert-manager-webhook-0d3cc-cainjector-5c9846b48b-pt774            1/1     Running   0          18m
cert-manager-webhook-85dbd49676-hc9qr                             1/1     Running   0          18m
configmap-watcher-42800-b86cbf8cb-xpw7g                           1/1     Running   0          18m
console-chart-eef51-consoleapi-64ff64d5b6-sjncl                   1/1     Running   0          16m
console-chart-eef51-consoleui-57b5955d98-kfmd2                    1/1     Running   0          16m
console-header-85d8f49c7b-twt9v                                   1/1     Running   0          16m
etcd-cluster-4z7s9dhx9j                                           1/1     Running   0          18m
etcd-cluster-bw25r8ph5p                                           1/1     Running   0          17m
etcd-cluster-mtk6fpm9bm                                           1/1     Running   0          17m
etcd-operator-558567f79d-g65zj                                    3/3     Running   0          20m
grc-535c7-grcui-698dc78d6f-8h57n                                  1/1     Running   0          14m
grc-535c7-grcuiapi-847f5df869-fjk55                               1/1     Running   0          14m
grc-535c7-policy-propogator-6f8684c78-x724n                       1/1     Running   0          14m
kui-web-terminal-66f6c5b89-mfd8p                                  1/1     Running   0          16m
management-ingress-80cda-55dcd89b87-nbkpc                         2/2     Running   0          16m
mcm-apiserver-7bc995d77-pl4qr                                     1/1     Running   0          18m
mcm-controller-8555975b78-m9nst                                   1/1     Running   0          18m
mcm-webhook-8475bb4fd6-8vhsb                                      1/1     Running   0          18m
multicluster-mongodb-0                                            1/1     Running   0          16m
multicluster-operators-application-5d68b77964-swfgp               4/4     Running   3          20m
multicluster-operators-hub-subscription-85445d9d7-9qb28           1/1     Running   0          20m
multicluster-operators-standalone-subscription-845764c484-nqgps   1/1     Running   0          20m
multiclusterhub-operator-54d98758f5-xdhkn                         1/1     Running   0          20m
multiclusterhub-repo-54b6fd847c-s5md7                             1/1     Running   0          19m
open-cluster-management-registry-74657d9c7b-k5vfk                 1/1     Running   0          21m
rcm-controller-5cf46f6f6b-8c5nc                                   1/1     Running   3          18m
search-operator-544f7c6cf6-vwrrv                                  1/1     Running   0          16m
search-prod-d314c-redisgraph-5c8fc4d6dc-2xx6g                     1/1     Running   0          16m
search-prod-d314c-search-aggregator-7c9fd68949-jk7cz              1/1     Running   0          16m
search-prod-d314c-search-api-7df556b7d7-xkn2k                     1/1     Running   0          16m
search-prod-d314c-search-collector-9b44f9f5c-pzrkb                1/1     Running   0          16m
topology-592b5-topology-5d5c75c484-f2str                          1/1     Running   0          16m
topology-592b5-topologyapi-6fdcc8dc4c-c9v49                       1/1     Running   0          16m

$ oc get pods -n hive
NAME                                READY   STATUS    RESTARTS   AGE
hive-controllers-74894574b5-m89xw   1/1     Running   0          15m
hive-operator-7cd7488667-jcb2m      1/1     Running   1          18m
hiveadmission-7965ffd69-dx9zg       1/1     Running   0          15m
hiveadmission-7965ffd69-mmn7t       1/1     Running   0          15m
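We can also check the hub custom resource itself. The MultiClusterHub instance created by start.sh lives in the open-cluster-management namespace, and the console route should match the URL reported at the end of the install; the output below is abbreviated and only illustrative:

$ oc get multiclusterhub -n open-cluster-management
NAME              AGE
multiclusterhub   20m

$ oc get routes -n open-cluster-management | grep multicloud-console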
Everything looks good from the command line, so let's validate the installation one final way by looking at the web UI. Since the hypervisor host the environment is running on does not have direct network connectivity, I am going to leverage a VNC server on the hypervisor host to access the web UI. I will not go into those details because there are plenty of documented ways to use VNC on the web. Once a web browser is available via vncserver, the kubeadmin username and password will be required to log in. Those credentials can be found in the CodeReady Containers kubeadmin-password file:
$ cat /home/bschmaus/.crc/cache/crc_libvirt_4.3.8/kubeadmin-password
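Alternatively, crc can print the cluster credentials directly, which saves hunting for the file path:

$ crc console --credentials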
Below is an example of the login screen displayed when accessing the URL from the Open Cluster Management installation log:
And once logged in we find the welcome screen:
At this point we are ready to configure and deploy an OpenShift cluster using Open Cluster Management, but I will save that for another blog!