There are so many ways to install OpenShift: Assisted Installer, UPI, IPI, Red Hat Advanced Cluster Management and ZTP. However, I have always longed for a single ISO image I could boot on my physical hardware that would then form an OpenShift cluster. That dream is on course to become a reality with the Agent Installer, a tool that can generate an ephemeral OpenShift installation image. In the following blog I will demonstrate how to use this early incarnation of the tool.
As I stated, the Agent Installer generates a single ISO image that is used to boot all of the nodes that will be part of a newly deployed cluster. The workflow shown here may change as the code gets developed and merged into the mainstream OpenShift installer, but if one is interested in exploring this new method, the following can serve as a preview of what is to come.
The first step in trying out the Agent Installer is to grab the OpenShift installer source code from GitHub and check out the agent-installer branch:
$ git clone https://github.com/openshift/installer
Cloning into 'installer'...
remote: Enumerating objects: 204497, done.
remote: Counting objects: 100% (210/210), done.
remote: Compressing objects: 100% (130/130), done.
remote: Total 204497 (delta 99), reused 153 (delta 70), pack-reused 204287
Receiving objects: 100% (204497/204497), 873.44 MiB | 10.53 MiB/s, done.
Resolving deltas: 100% (132947/132947), done.
Updating files: 100% (86883/86883), done.

$ git checkout 88db7ef
Updating files: 100% (23993/23993), done.
Note: switching to '88db7ef'.

You are in 'detached HEAD' state. You can look around, make experimental
changes and commit them, and you can discard any commits you make in this
state without impacting any branches by switching back to a branch.

If you want to create a new branch to retain commits you create, you may
do so (now or later) by using -c with the switch command. Example:

  git switch -c <new-branch-name>

Or undo this operation with:

  git switch -

Turn off this advice by setting config variable advice.detachedHead to false

HEAD is now at 88db7eff2 Fix unnecessary delays in start-cluster-installation

$ git branch
* (HEAD detached at 88db7eff2)
  master
Once we have the source code checked out we need to go ahead and build the OpenShift install binary:
$ hack/build.sh
+ minimum_go_version=1.17
++ go version
++ cut -d ' ' -f 3
+ current_go_version=go1.17.7
++ version 1.17.7
++ IFS=.
++ printf '%03d%03d%03d\n' 1 17 7
++ unset IFS
++ version 1.17
++ IFS=.
++ printf '%03d%03d%03d\n' 1 17
++ unset IFS
+ '[' 001017007 -lt 001017000 ']'
+ make -C terraform all
make: Entering directory '/home/bschmaus/installer/terraform'
cd providers/alicloud; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-alicloud "$path"; \
zip -1j ../../bin/terraform-provider-alicloud.zip ../../bin/terraform-provider-alicloud;
  adding: terraform-provider-alicloud (deflated 81%)
cd providers/aws; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-aws "$path"; \
zip -1j ../../bin/terraform-provider-aws.zip ../../bin/terraform-provider-aws;
  adding: terraform-provider-aws (deflated 75%)
cd providers/azureprivatedns; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-azureprivatedns "$path"; \
zip -1j ../../bin/terraform-provider-azureprivatedns.zip ../../bin/terraform-provider-azureprivatedns;
  adding: terraform-provider-azureprivatedns (deflated 62%)
cd providers/azurerm; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-azurerm "$path"; \
zip -1j ../../bin/terraform-provider-azurerm.zip ../../bin/terraform-provider-azurerm;
  adding: terraform-provider-azurerm (deflated 77%)
cd providers/azurestack; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-azurestack "$path"; \
zip -1j ../../bin/terraform-provider-azurestack.zip ../../bin/terraform-provider-azurestack;
  adding: terraform-provider-azurestack (deflated 64%)
cd providers/google; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-google "$path"; \
zip -1j ../../bin/terraform-provider-google.zip ../../bin/terraform-provider-google;
  adding: terraform-provider-google (deflated 68%)
cd providers/ibm; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-ibm "$path"; \
zip -1j ../../bin/terraform-provider-ibm.zip ../../bin/terraform-provider-ibm;
  adding: terraform-provider-ibm (deflated 67%)
cd providers/ignition; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-ignition "$path"; \
zip -1j ../../bin/terraform-provider-ignition.zip ../../bin/terraform-provider-ignition;
  adding: terraform-provider-ignition (deflated 61%)
cd providers/ironic; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-ironic "$path"; \
zip -1j ../../bin/terraform-provider-ironic.zip ../../bin/terraform-provider-ironic;
  adding: terraform-provider-ironic (deflated 60%)
cd providers/libvirt; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-libvirt "$path"; \
zip -1j ../../bin/terraform-provider-libvirt.zip ../../bin/terraform-provider-libvirt;
  adding: terraform-provider-libvirt (deflated 61%)
cd providers/local; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-local "$path"; \
zip -1j ../../bin/terraform-provider-local.zip ../../bin/terraform-provider-local;
  adding: terraform-provider-local (deflated 59%)
cd providers/nutanix; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-nutanix "$path"; \
zip -1j ../../bin/terraform-provider-nutanix.zip ../../bin/terraform-provider-nutanix;
  adding: terraform-provider-nutanix (deflated 60%)
cd providers/openstack; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-openstack "$path"; \
zip -1j ../../bin/terraform-provider-openstack.zip ../../bin/terraform-provider-openstack;
  adding: terraform-provider-openstack (deflated 62%)
cd providers/ovirt; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-ovirt "$path"; \
zip -1j ../../bin/terraform-provider-ovirt.zip ../../bin/terraform-provider-ovirt;
  adding: terraform-provider-ovirt (deflated 66%)
cd providers/random; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-random "$path"; \
zip -1j ../../bin/terraform-provider-random.zip ../../bin/terraform-provider-random;
  adding: terraform-provider-random (deflated 59%)
cd providers/vsphere; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-vsphere "$path"; \
zip -1j ../../bin/terraform-provider-vsphere.zip ../../bin/terraform-provider-vsphere;
  adding: terraform-provider-vsphere (deflated 68%)
cd providers/vsphereprivate; \
if [ -f main.go ]; then path="."; else path=./vendor/`grep _ tools.go|awk '{ print $2 }'|sed 's|"||g'`; fi; \
go build -ldflags "-s -w" -o ../../bin/terraform-provider-vsphereprivate "$path"; \
zip -1j ../../bin/terraform-provider-vsphereprivate.zip ../../bin/terraform-provider-vsphereprivate;
  adding: terraform-provider-vsphereprivate (deflated 69%)
cd terraform; \
go build -ldflags "-s -w" -o ../bin/terraform ./vendor/github.com/hashicorp/terraform
make: Leaving directory '/home/bschmaus/installer/terraform'
+ copy_terraform_to_mirror
++ go env GOOS
++ go env GOARCH
+ TARGET_OS_ARCH=linux_amd64
+ rm -rf '/home/bschmaus/installer/pkg/terraform/providers/mirror/*/'
+ find /home/bschmaus/installer/terraform/bin/ -maxdepth 1 -name 'terraform-provider-*.zip' -exec bash -c '
providerName="$(basename "$1" | cut -d - -f 3 | cut -d . -f 1)"
targetOSArch="$2"
dstDir="${PWD}/pkg/terraform/providers/mirror/openshift/local/$providerName"
mkdir -p "$dstDir"
echo "Copying $providerName provider to mirror"
cp "$1" "$dstDir/terraform-provider-${providerName}_1.0.0_${targetOSArch}.zip"
' shell '{}' linux_amd64 ';'
Copying alicloud provider to mirror
Copying aws provider to mirror
Copying azureprivatedns provider to mirror
Copying azurerm provider to mirror
Copying azurestack provider to mirror
Copying google provider to mirror
Copying ibm provider to mirror
Copying ignition provider to mirror
Copying ironic provider to mirror
Copying libvirt provider to mirror
Copying local provider to mirror
Copying nutanix provider to mirror
Copying openstack provider to mirror
Copying ovirt provider to mirror
Copying random provider to mirror
Copying vsphere provider to mirror
Copying vsphereprivate provider to mirror
+ mkdir -p /home/bschmaus/installer/pkg/terraform/providers/mirror/terraform/
+ cp /home/bschmaus/installer/terraform/bin/terraform /home/bschmaus/installer/pkg/terraform/providers/mirror/terraform/
+ MODE=release
++ git rev-parse --verify 'HEAD^{commit}'
+ GIT_COMMIT=d74e210f30edf110764d87c8223a18b8a9952253
++ git describe --always --abbrev=40 --dirty
+ GIT_TAG=unreleased-master-6040-gd74e210f30edf110764d87c8223a18b8a9952253
+ DEFAULT_ARCH=amd64
+ GOFLAGS=-mod=vendor
+ LDFLAGS=' -X github.com/openshift/installer/pkg/version.Raw=unreleased-master-6040-gd74e210f30edf110764d87c8223a18b8a9952253 -X github.com/openshift/installer/pkg/version.Commit=d74e210f30edf110764d87c8223a18b8a9952253 -X github.com/openshift/installer/pkg/version.defaultArch=amd64'
+ TAGS=
+ OUTPUT=bin/openshift-install
+ export CGO_ENABLED=0
+ CGO_ENABLED=0
+ case "${MODE}" in
+ LDFLAGS=' -X github.com/openshift/installer/pkg/version.Raw=unreleased-master-6040-gd74e210f30edf110764d87c8223a18b8a9952253 -X github.com/openshift/installer/pkg/version.Commit=d74e210f30edf110764d87c8223a18b8a9952253 -X github.com/openshift/installer/pkg/version.defaultArch=amd64 -s -w'
+ TAGS=' release'
+ test '' '!=' y
+ go generate ./data
writing assets_vfsdata.go
+ echo ' release'
+ grep -q libvirt
+ go build -mod=vendor -ldflags ' -X github.com/openshift/installer/pkg/version.Raw=unreleased-master-6040-gd74e210f30edf110764d87c8223a18b8a9952253 -X github.com/openshift/installer/pkg/version.Commit=d74e210f30edf110764d87c8223a18b8a9952253 -X github.com/openshift/installer/pkg/version.defaultArch=amd64 -s -w' -tags ' release' -o bin/openshift-install ./cmd/openshift-install
Once the OpenShift install binary is built we next need to create a manifests directory under the installer directory. In this manifests directory we will be creating six files that give the Agent Installer the blueprint of what our cluster should look like. First let's create the directory:
$ pwd
/home/bschmaus/installer

$ mkdir manifests
With the directory created we can move on to creating the agent cluster install resource file. This file specifies the cluster's configuration such as the number of control plane and/or worker nodes, the API and ingress VIPs, and the cluster networking. In my example I will be deploying a 3 node compact cluster which references a cluster deployment named kni22:
$ cat << EOF > ./manifests/agent-cluster-install.yaml
apiVersion: extensions.hive.openshift.io/v1beta1
kind: AgentClusterInstall
metadata:
  name: kni22
  namespace: kni22
spec:
  apiVIP: 192.168.0.125
  ingressVIP: 192.168.0.126
  clusterDeploymentRef:
    name: kni22
  imageSetRef:
    name: openshift-v4.10.0
  networking:
    clusterNetwork:
    - cidr: 10.128.0.0/14
      hostPrefix: 23
    serviceNetwork:
    - 172.30.0.0/16
  provisionRequirements:
    controlPlaneAgents: 3
    workerAgents: 0
  sshPublicKey: 'INSERT PUBLIC SSH KEY HERE'
EOF
Next we will create the cluster deployment resource file which defines the cluster name, domain, and other details:
$ cat << EOF > ./manifests/cluster-deployment.yaml
apiVersion: hive.openshift.io/v1
kind: ClusterDeployment
metadata:
  name: kni22
  namespace: kni22
spec:
  baseDomain: schmaustech.com
  clusterInstallRef:
    group: extensions.hive.openshift.io
    kind: AgentClusterInstall
    name: kni22-agent-cluster-install
    version: v1beta1
  clusterName: kni22
  controlPlaneConfig:
    servingCertificates: {}
  platform:
    agentBareMetal:
      agentSelector:
        matchLabels:
          bla: aaa
  pullSecretRef:
    name: pull-secret
EOF
Moving on we now create the cluster image set resource file which contains OpenShift image information such as the repository and image name. This will be the version of the cluster that gets deployed in our 3 node compact cluster. In this example we are using 4.10.10:
$ cat << EOF > ./manifests/cluster-image-set.yaml
apiVersion: hive.openshift.io/v1
kind: ClusterImageSet
metadata:
  name: ocp-release-4.10.10-x86-64-for-4.10.0-0-to-4.11.0-0
spec:
  releaseImage: quay.io/openshift-release-dev/ocp-release:4.10.10-x86_64
EOF
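As an optional sanity check, and assuming the oc client is installed and the workstation can reach quay.io, we can confirm the release image referenced above actually exists before baking it into our manifests:

$ oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.10-x86_64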
Next we define the infrastructure environment file which contains information for pulling OpenShift onto the target host nodes we are deploying to:
$ cat << EOF > ./manifests/infraenv.yaml
apiVersion: agent-install.openshift.io/v1beta1
kind: InfraEnv
metadata:
  name: kni22
  namespace: kni22
spec:
  clusterRef:
    name: kni22
    namespace: kni22
  pullSecretRef:
    name: pull-secret
  sshAuthorizedKey: 'INSERT PUBLIC SSH KEY HERE'
  nmStateConfigLabelSelector:
    matchLabels:
      kni22-nmstate-label-name: kni22-nmstate-label-value
EOF
The next file is the nmstate configuration file, which provides the network details for each of the hosts that will be booted using the ISO image we are going to create. Since we have a 3 node compact cluster to deploy, the file below contains three nmstate configurations. Each configuration is for one node and assigns a static IP address to the node's enp2s0 interface that matches the MAC address defined. This allows the ISO to boot without necessarily requiring DHCP in the environment, which is what a lot of customers are looking for. Again my example has 3 configurations, but if we had worker nodes we would add those in too. Let's go ahead and create the file:
$ cat << EOF > ./manifests/nmstateconfig.yaml
---
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mynmstateconfig01
  namespace: openshift-machine-api
  labels:
    kni22-nmstate-label-name: kni22-nmstate-label-value
spec:
  config:
    interfaces:
      - name: enp2s0
        type: ethernet
        state: up
        mac-address: 52:54:00:e7:05:72
        ipv4:
          enabled: true
          address:
            - ip: 192.168.0.116
              prefix-length: 24
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.0.10
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.0.1
          next-hop-interface: enp2s0
          table-id: 254
  interfaces:
    - name: "enp2s0"
      macAddress: 52:54:00:e7:05:72
---
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mynmstateconfig02
  namespace: openshift-machine-api
  labels:
    kni22-nmstate-label-name: kni22-nmstate-label-value
spec:
  config:
    interfaces:
      - name: enp2s0
        type: ethernet
        state: up
        mac-address: 52:54:00:95:fd:f3
        ipv4:
          enabled: true
          address:
            - ip: 192.168.0.117
              prefix-length: 24
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.0.10
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.0.1
          next-hop-interface: enp2s0
          table-id: 254
  interfaces:
    - name: "enp2s0"
      macAddress: 52:54:00:95:fd:f3
---
apiVersion: agent-install.openshift.io/v1beta1
kind: NMStateConfig
metadata:
  name: mynmstateconfig03
  namespace: openshift-machine-api
  labels:
    kni22-nmstate-label-name: kni22-nmstate-label-value
spec:
  config:
    interfaces:
      - name: enp2s0
        type: ethernet
        state: up
        mac-address: 52:54:00:e8:b9:18
        ipv4:
          enabled: true
          address:
            - ip: 192.168.0.118
              prefix-length: 24
          dhcp: false
    dns-resolver:
      config:
        server:
          - 192.168.0.10
    routes:
      config:
        - destination: 0.0.0.0/0
          next-hop-address: 192.168.0.1
          next-hop-interface: enp2s0
          table-id: 254
  interfaces:
    - name: "enp2s0"
      macAddress: 52:54:00:e8:b9:18
EOF
The final file we need to create is the pull-secret resource file which contains the pull-secret values so that our cluster can pull in the required OpenShift images to instantiate the cluster:
$ cat << EOF > ./manifests/pull-secret.yaml
apiVersion: v1
kind: Secret
type: kubernetes.io/dockerconfigjson
metadata:
  name: pull-secret
  namespace: kni22
stringData:
  .dockerconfigjson: 'INSERT JSON FORMATTED PULL-SECRET'
EOF
At this point we should now have our six required files defined to build our Agent Installer ISO:
$ ls -1 ./manifests/
agent-cluster-install.yaml
cluster-deployment.yaml
cluster-image-set.yaml
infraenv.yaml
nmstateconfig.yaml
pull-secret.yaml
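Because an indentation mistake in any one of these manifests will only surface later when the image is generated or the cluster forms, a quick YAML syntax check can save a rebuild. This is just a sketch and assumes python3 with the PyYAML module is available on the workstation:

$ for f in ./manifests/*.yaml; do python3 -c 'import sys,yaml; list(yaml.safe_load_all(open(sys.argv[1])))' "$f" && echo "$f: OK"; done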
We are now ready to use the OpenShift install binary we compiled earlier with the Agent Installer code to generate our ephemeral OpenShift ISO. We do this by issuing the following command, which introduces the agent option. It reads in the manifest details we generated, downloads the corresponding RHCOS image, and then injects our details into the image, writing out a file called agent.iso:
$ bin/openshift-install agent create image
INFO adding MAC interface map to host static network config - Name: enp2s0 MacAddress: 52:54:00:e7:05:72
INFO adding MAC interface map to host static network config - Name: enp2s0 MacAddress: 52:54:00:95:fd:f3
INFO adding MAC interface map to host static network config - Name: enp2s0 MacAddress: 52:54:00:e8:b9:18
INFO[0000] Adding NMConnection file <enp2s0.nmconnection>  pkg=manifests
INFO[0000] Adding NMConnection file <enp2s0.nmconnection>  pkg=manifests
INFO[0001] Adding NMConnection file <enp2s0.nmconnection>  pkg=manifests
INFO[0001] Start configuring static network for 3 hosts  pkg=manifests
INFO[0001] Adding NMConnection file <enp2s0.nmconnection>  pkg=manifests
INFO[0001] Adding NMConnection file <enp2s0.nmconnection>  pkg=manifests
INFO[0001] Adding NMConnection file <enp2s0.nmconnection>  pkg=manifests
INFO Obtaining RHCOS image file from 'https://rhcos-redirector.apps.art.xq1c.p1.openshiftapps.com/art/storage/releases/rhcos-4.11/411.85.202203181601-0/x86_64/rhcos-411.85.202203181601-0-live.x86_64.iso'
INFO
Once the agent create image command completes we are left with an agent.iso image which is in fact our OpenShift install ISO:
$ ls -l ./output/
total 1073152
-rw-rw-r--. 1 bschmaus bschmaus 1098907648 May 20 08:55 agent.iso
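If the coreos-installer utility and jq happen to be installed, we can also peek at the Ignition configuration that was embedded into the image, assuming the agent image uses the standard live ISO embed area. This is purely an optional inspection step:

$ coreos-installer iso ignition show ./output/agent.iso | jq -r '.storage.files[].path' | head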
Since the nodes I will be using to demonstrate this 3 node compact cluster are virtual machines all on the same KVM hypervisor I will go ahead and copy the agent.iso image over to that host:
$ scp ./output/agent.iso root@192.168.0.22:/var/lib/libvirt/images/
root@192.168.0.22's password:
agent.iso
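Since the image is over a gigabyte it is worth making sure the copy arrived intact. A simple checksum comparison on both ends, sketched below, is enough:

$ sha256sum ./output/agent.iso
$ ssh root@192.168.0.22 sha256sum /var/lib/libvirt/images/agent.iso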
With the image moved over to the hypervisor host I went ahead and ensured each virtual machine we are using (asus3-vm[1-3]) has the image attached. Further, the hosts are configured to boot off the ISO if the disk is empty. We can confirm everything is ready with the following output:
# virsh list --all
 Id   Name        State
----------------------------
 -    asus3-vm1   shut off
 -    asus3-vm2   shut off
 -    asus3-vm3   shut off
 -    asus3-vm4   shut off
 -    asus3-vm5   shut off
 -    asus3-vm6   shut off

# virsh domblklist asus3-vm1
 Target   Source
---------------------------------------------------
 sda      /var/lib/libvirt/images/asus3-vm1.qcow2
 sdb      /var/lib/libvirt/images/agent.iso

# virsh domblklist asus3-vm2
 Target   Source
---------------------------------------------------
 sda      /var/lib/libvirt/images/asus3-vm2.qcow2
 sdb      /var/lib/libvirt/images/agent.iso

# virsh domblklist asus3-vm3
 Target   Source
---------------------------------------------------
 sda      /var/lib/libvirt/images/asus3-vm3.qcow2
 sdb      /var/lib/libvirt/images/agent.iso
Now let's start the first virtual machine:

# virsh start asus3-vm1
Domain asus3-vm1 started
Once the first virtual machine is started we can switch over to the console and watch it boot up:
During the boot process the system will come up to a standard login prompt on the console. Then, in the background, the host will start pulling in the required containers to run the familiar Assisted Installer UI. I gave this process about 5 minutes before I attempted to access the web UI. To access the web UI we can point our browser at the IP address of the node we just booted on port 8080:
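If we prefer the command line for this check, a quick curl against the first node's static address (192.168.0.116 from our nmstateconfig.yaml) should eventually return an HTTP 200 once the UI containers are up; keep in mind it can take a few minutes before anything answers on port 8080:

$ curl -s -o /dev/null -w '%{http_code}\n' http://192.168.0.116:8080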
We should see a kni22 cluster with a status of draft because no nodes have been associated with it yet. Next we will click on kni22 to bring us into the configuration:
We can see the familiar Assisted Installer discovery screen and that our first host is listed. At this point let's turn on the other two nodes that will make up our 3 node compact cluster and let them also boot from the agent ISO we created.
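On the hypervisor that is simply a matter of starting the remaining two virtual machines, for example:

# for vm in asus3-vm2 asus3-vm3; do virsh start $vm; done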
After the other two nodes have booted we should see them appear in the web UI. We can also see that the node names are all localhost. This is because I set static IP addresses in the nmstateconfig.yaml above; if we had gone with DHCP the names would have been set by DHCP. Nevertheless we can go ahead and edit each hostname, set it to the proper name, and click next to continue:
There is an additional configuration page where other settings could be changed if needed, but we will click next through that screen to bring us to the summary page:
At this point the cluster installation begins. I should point out however that we will not be able to watch the installation complete from the web UI. The reason is that the other two nodes will get their RHCOS images written to disk, reboot, and then instantiate part of the cluster. At that point the first node, the one running the web UI, will also get its RHCOS image written to disk and reboot, and after that the web UI is no longer available to watch. With that in mind I recommend grabbing the kubeconfig for the cluster by clicking on the download kubeconfig button.
Once the web UI is no longer accessible we can monitor the installation from the command line using the kubeconfig we downloaded. First let's see where the nodes are at:
$ export KUBECONFIG=/home/bschmaus/kubeconfig-kni22

$ oc get nodes
NAME        STATUS   ROLES           AGE   VERSION
asus3-vm1   Ready    master,worker   2m    v1.23.5+9ce5071
asus3-vm2   Ready    master,worker   29m   v1.23.5+9ce5071
asus3-vm3   Ready    master,worker   29m   v1.23.5+9ce5071
All the nodes are in a ready state and marked as both control plane and worker. Now let's see where the cluster operators are at:
$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.10.10   False       True          True       18m     WellKnownAvailable: The well-known endpoint is not yet available: need at least 3 kube-apiservers, got 2
baremetal                                  4.10.10   True        False         False      17m
cloud-controller-manager                   4.10.10   True        False         False      29m
cloud-credential                           4.10.10   True        False         False      34m
cluster-autoscaler                         4.10.10   True        False         False      16m
config-operator                            4.10.10   True        False         False      18m
console                                    4.10.10   True        False         False      4m33s
csi-snapshot-controller                    4.10.10   True        False         False      18m
dns                                        4.10.10   True        False         False      17m
etcd                                       4.10.10   True        True          False      16m     NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 4; 0 nodes have achieved new revision 5
image-registry                             4.10.10   True        False         False      9m44s
ingress                                    4.10.10   True        False         False      11m
insights                                   4.10.10   True        False         False      12m
kube-apiserver                             4.10.10   True        True          False      4m29s   NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 6
kube-controller-manager                    4.10.10   True        True          False      14m     NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 7
kube-scheduler                             4.10.10   True        True          False      14m     NodeInstallerProgressing: 1 nodes are at revision 0; 2 nodes are at revision 6
kube-storage-version-migrator              4.10.10   True        False         False      18m
machine-api                                4.10.10   True        False         False      7m53s
machine-approver                           4.10.10   True        False         False      17m
machine-config                             4.10.10   True        False         False      17m
marketplace                                4.10.10   True        False         False      16m
monitoring                                 4.10.10   True        False         False      5m52s
network                                    4.10.10   True        True          False      19m     DaemonSet "openshift-multus/network-metrics-daemon" is not available (awaiting 1 nodes)...
node-tuning                                4.10.10   True        False         False      15m
openshift-apiserver                        4.10.10   True        False         False      4m45s
openshift-controller-manager               4.10.10   True        False         False      15m
openshift-samples                          4.10.10   True        False         False      7m37s
operator-lifecycle-manager                 4.10.10   True        False         False      17m
operator-lifecycle-manager-catalog         4.10.10   True        False         False      17m
operator-lifecycle-manager-packageserver   4.10.10   True        False         False      11m
service-ca                                 4.10.10   True        False         False      19m
storage                                    4.10.10   True        False         False      19m
The cluster operators are still rolling out, so let's give it a few more minutes and check again:
$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE   MESSAGE
authentication                             4.10.10   True        False         False      13m
baremetal                                  4.10.10   True        False         False      31m
cloud-controller-manager                   4.10.10   True        False         False      43m
cloud-credential                           4.10.10   True        False         False      49m
cluster-autoscaler                         4.10.10   True        False         False      31m
config-operator                            4.10.10   True        False         False      33m
console                                    4.10.10   True        False         False      19m
csi-snapshot-controller                    4.10.10   True        False         False      33m
dns                                        4.10.10   True        False         False      32m
etcd                                       4.10.10   True        False         False      31m
image-registry                             4.10.10   True        False         False      24m
ingress                                    4.10.10   True        False         False      26m
insights                                   4.10.10   True        False         False      27m
kube-apiserver                             4.10.10   True        False         False      19m
kube-controller-manager                    4.10.10   True        False         False      29m
kube-scheduler                             4.10.10   True        False         False      28m
kube-storage-version-migrator              4.10.10   True        False         False      33m
machine-api                                4.10.10   True        False         False      22m
machine-approver                           4.10.10   True        False         False      32m
machine-config                             4.10.10   True        False         False      32m
marketplace                                4.10.10   True        False         False      31m
monitoring                                 4.10.10   True        False         False      20m
network                                    4.10.10   True        False         False      34m
node-tuning                                4.10.10   True        False         False      30m
openshift-apiserver                        4.10.10   True        False         False      19m
openshift-controller-manager               4.10.10   True        False         False      29m
openshift-samples                          4.10.10   True        False         False      22m
operator-lifecycle-manager                 4.10.10   True        False         False      32m
operator-lifecycle-manager-catalog         4.10.10   True        False         False      32m
operator-lifecycle-manager-packageserver   4.10.10   True        False         False      26m
service-ca                                 4.10.10   True        False         False      34m
storage                                    4.10.10   True        False         False      34m
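As a final confirmation we can also look at the clusterversion resource, which should report the 4.10.10 release as available once the rollout settles:

$ oc get clusterversion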
At this point our cluster installation is complete. However, I forgot to mention that while the web UI was still up we should have ssh'd to the bootstrap node and shelled into the running assisted installer container to retrieve our kubeadmin password under the /data directory. I purposely skipped that part so I could show how we can simply reset the kubeadmin password instead.
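For completeness, the skipped step would have looked roughly like the sketch below: ssh to the node running the web UI as the core user, list the running containers, and look inside the assisted service container's /data directory. The container name is a placeholder and the exact file layout may differ, so treat this only as an illustration of the idea:

$ ssh core@192.168.0.116
$ sudo podman ps --format '{{.Names}}'
$ sudo podman exec <assisted-service-container> ls /data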
First I want to thank Andrew Block for his write-up on how to do this here. Let's go ahead and create the kubeadmin-rotate.go file in the kuberotate directory we will create:
$ mkdir ~/kuberotate
$ cd ~/kuberotate
$ cat << EOF > ./kubeadmin-rotate.go
package main

import (
    "fmt"
    "crypto/rand"
    "golang.org/x/crypto/bcrypt"
    b64 "encoding/base64"
    "math/big"
)

// generateRandomPasswordHash generates a hash of a random ASCII password
// 5char-5char-5char-5char
func generateRandomPasswordHash(length int) (string, string, error) {
    const (
        lowerLetters = "abcdefghijkmnopqrstuvwxyz"
        upperLetters = "ABCDEFGHIJKLMNPQRSTUVWXYZ"
        digits       = "23456789"
        all          = lowerLetters + upperLetters + digits
    )
    var password string
    for i := 0; i < length; i++ {
        n, err := rand.Int(rand.Reader, big.NewInt(int64(len(all))))
        if err != nil {
            return "", "", err
        }
        newchar := string(all[n.Int64()])
        if password == "" {
            password = newchar
        }
        if i < length-1 {
            n, err = rand.Int(rand.Reader, big.NewInt(int64(len(password)+1)))
            if err != nil {
                return "", "", err
            }
            j := n.Int64()
            password = password[0:j] + newchar + password[j:]
        }
    }
    pw := []rune(password)
    for _, replace := range []int{5, 11, 17} {
        pw[replace] = '-'
    }
    bytes, err := bcrypt.GenerateFromPassword([]byte(string(pw)), bcrypt.DefaultCost)
    if err != nil {
        return "", "", err
    }
    return string(pw), string(bytes), nil
}

func main() {
    password, hash, err := generateRandomPasswordHash(23)
    if err != nil {
        fmt.Println(err.Error())
        return
    }
    fmt.Printf("Actual Password: %s\n", password)
    fmt.Printf("Hashed Password: %s\n", hash)
    fmt.Printf("Data to Change in Secret: %s\n", b64.StdEncoding.EncodeToString([]byte(hash)))
}
EOF
Next let's go ahead and initialize our Go project:
$ go mod init kuberotate
go: creating new go.mod: module kuberotate
With the project initialized let's pull in the module dependencies by executing go mod tidy, which will fetch the bcrypt module:
$ go mod tidy
go: finding module for package golang.org/x/crypto/bcrypt
go: found golang.org/x/crypto/bcrypt in golang.org/x/crypto v0.0.0-20220518034528-6f7dac969898
And finally, since I just want to run the program rather than compile it, I will run go run kubeadmin-rotate.go, which prints out the password, a hashed password, and a base64 encoded version of the hashed password:
$ go run kubeadmin-rotate.go
Actual Password: gWdYr-62GLh-QIynG-Boj7n
Hashed Password: $2a$10$DN48Jp4YkuEEVMWZNyOR2.LkLn1ZZOJOtzR8c9detf1lVAQ2iVQGK
Data to Change in Secret: JDJhJDEwJERONDhKcDRZa3VFRVZNV1pOeU9SMi5Ma0xuMVpaT0pPdHpSOGM5ZGV0ZjFsVkFRMmlWUUdL
The last step is to patch the kubeadmin secret with the hashed password that was base64 encoded:
$ oc patch secret -n kube-system kubeadmin --type json -p '[{"op": "replace", "path": "/data/kubeadmin", "value": "JDJhJDEwJERONDhKcDRZa3VFRVZNV1pOeU9SMi5Ma0xuMVpaT0pPdHpSOGM5ZGV0ZjFsVkFRMmlWUUdL"}]'
secret/kubeadmin patched
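To confirm the new password works from the command line as well, we can log in against the cluster's API endpoint, which by OpenShift convention should be api.kni22.schmaustech.com for this cluster name and base domain:

$ oc login https://api.kni22.schmaustech.com:6443 -u kubeadmin -p 'gWdYr-62GLh-QIynG-Boj7n'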
Now we can go over to the OpenShift console and see if we can log in. Sure enough, with the password from above we can, confirming our 3 node OpenShift cluster installed by the Agent Installer is ready to be used for workloads:
Hopefully this blog provided a useful preview of what the Agent Installer will look like. Keep in mind the code is under rapid development and so things could change, but change is always good!