The Assisted Installer is Red Hat's SaaS portal for deploying an OCP cluster on either baremetal or virtual machines.
In this article, however, I want to demonstrate a Single Node OpenShift deployment without using the web UI, relying instead on the underlying REST API that drives the Assisted Installer. This can be useful for automating the deployment of clusters without user intervention.
The first step to achieve this is to obtain an OpenShift Cluster Manager API token. This token provides the ability to authenticate against your Red Hat OpenShift Cluster Manager account without the need for a username or password.
Place this token into a file called ocm-token:
$ echo "Token String From OCM API Token Link Above" > ~/ocm-token
Next let's set some variables that we will refer to throughout this deployment process:
export OFFLINE_ACCESS_TOKEN=$(cat ~/ocm-token)                                  # Loading my token into a variable
export ASSISTED_SERVICE_API="api.openshift.com"                                 # Setting the Assisted Installer API endpoint
export CLUSTER_VERSION="4.8"                                                    # OpenShift version
export CLUSTER_IMAGE="quay.io/openshift-release-dev/ocp-release:4.8.2-x86_64"   # OpenShift Quay image version
export CLUSTER_NAME="kni1"                                                      # OpenShift cluster name
export CLUSTER_DOMAIN="schmaustech.com"                                         # Domain name where my cluster will be deployed
export CLUSTER_NET_TYPE="OVNKubernetes"                                         # Network type to deploy with OpenShift
export MACHINE_CIDR_NET="192.168.0.0/24"                                        # Machine CIDR network
export SNO_STATICIP_NODE_NAME="master-0"                                        # Node name of my SNO node
export PULL_SECRET=$(cat ~/pull-secret.json | jq -R .)                          # Loading my pull-secret into variable
export CLUSTER_SSHKEY=$(cat ~/.ssh/id_rsa.pub)                                  # Loading the public key into variable
With the primary variables set, let's go ahead and create a deployment.json file. This file references some of the variables we set previously and also has a few values that are statically set. The key one to notice in this deployment is high_availability_mode: setting it to None ensures we are doing a Single Node OpenShift (SNO) deployment:
cat << EOF > ~/deployment.json
{
  "kind": "Cluster",
  "name": "$CLUSTER_NAME",
  "openshift_version": "$CLUSTER_VERSION",
  "ocp_release_image": "$CLUSTER_IMAGE",
  "base_dns_domain": "$CLUSTER_DOMAIN",
  "hyperthreading": "all",
  "user_managed_networking": true,
  "vip_dhcp_allocation": false,
  "high_availability_mode": "None",
  "hosts": [],
  "ssh_public_key": "$CLUSTER_SSHKEY",
  "pull_secret": $PULL_SECRET,
  "network_type": "OVNKubernetes"
}
EOF
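Since the pull secret is injected as a JSON-escaped string, it is worth a quick sanity check that the rendered file parses as valid JSON before we POST it. jq's empty filter produces no output and simply returns a non-zero exit code if the file fails to parse:

$ jq empty ~/deployment.json && echo "deployment.json is valid JSON"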
Now that we have the deployment.json file created, let's refresh our bearer token:
$ export TOKEN=$(curl \
  --silent \
  --data-urlencode "grant_type=refresh_token" \
  --data-urlencode "client_id=cloud-services" \
  --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" \
  https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | \
  jq -r .access_token)
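Because this access token is short-lived and we will need to refresh it again later in the process, it can be convenient to wrap the refresh in a small shell function. This is just a convenience sketch using the exact same endpoint and parameters as above:

refresh_token() {
  # Exchange the long-lived OCM offline token for a fresh short-lived access token
  export TOKEN=$(curl \
    --silent \
    --data-urlencode "grant_type=refresh_token" \
    --data-urlencode "client_id=cloud-services" \
    --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" \
    https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | \
    jq -r .access_token)
}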
With the token refreshed, let's go ahead and create our deployment via the Assisted Installer REST API using curl and a POST request. When the command completes, the output will only be a cluster ID wrapped in quotes. I used sed to strip the quotes so we end up with just the UUID. Note that at this point the cluster configuration has only been created, not installed.
$ export CLUSTER_ID=$(curl -s -X POST "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters" \
  -d @./deployment.json \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  | jq '.id')
$ export CLUSTER_ID=$(sed -e 's/^"//' -e 's/"$//' <<<"$CLUSTER_ID")
$ echo $CLUSTER_ID e85fc7d5-f274-4359-acc5-48044fc67132
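As an aside, the sed cleanup can be skipped entirely by asking jq for raw output. The following variation of the same POST captures the bare UUID directly and is shown only as an alternative; the result is identical:

$ export CLUSTER_ID=$(curl -s -X POST "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters" \
  -d @./deployment.json \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  | jq -r '.id')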
At this point we need to generate a discovery ISO for the SNO node to boot from. Before we do that, however, I wanted to make sure that my SNO node uses a static IP address instead of the DHCP default. To do this we need to create a data file that describes how the static IP should be configured. NMState takes this information and applies it to the OCP node during the installation. Below we define some arguments that provide a MAC-to-interface map and an NMState YAML file. All of this information gets pushed into the DATA variable, which simply points to a temp file.
$ DATA=$(mktemp)
$ jq -n --arg SSH_KEY "$CLUSTER_SSHKEY" --arg NMSTATE_YAML1 "$(cat ~/sno-server.yaml)" \
  '{
    "ssh_public_key": $SSH_KEY,
    "image_type": "full-iso",
    "static_network_config": [
      {
        "network_yaml": $NMSTATE_YAML1,
        "mac_interface_map": [{"mac_address": "52:54:00:82:23:e2", "logical_nic_name": "ens9"}]
      }
    ]
  }' >> $DATA
The sno-server.yaml used in the NMState argument looks like the following. It contains the IP address, prefix length, interface, DNS server and route information.
$ cat ~/sno-server.yaml
dns-resolver:
  config:
    server:
    - 192.168.0.10
interfaces:
- ipv4:
    address:
    - ip: 192.168.0.204
      prefix-length: 24
    dhcp: false
    enabled: true
  name: ens9
  state: up
  type: ethernet
routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-address: 192.168.0.1
    next-hop-interface: ens9
    table-id: 254
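A quick way to catch indentation mistakes before embedding this file into the request is to confirm it parses as YAML. Assuming python3 with the PyYAML module is available on the workstation, something like this will do:

$ python3 -c 'import sys, yaml; yaml.safe_load(open(sys.argv[1]))' ~/sno-server.yaml && echo "sno-server.yaml parses cleanly"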
We can confirm that the data was set appropriately by looking at the DATA variable and then cat-ing the temp file it points to:
$ echo $DATA
/tmp/tmp.3Jqw7lU6Qf
$ cat /tmp/tmp.3Jqw7lU6Qf
{
  "ssh_public_key": "SSHKEY REDACTED",
  "image_type": "full-iso",
  "static_network_config": [
    {
      "network_yaml": "dns-resolver:\n  config:\n    server:\n    - 192.168.0.10\ninterfaces:\n- ipv4:\n    address:\n    - ip: 192.168.0.204\n      prefix-length: 24\n    dhcp: false\n    enabled: true\n  name: ens9\n  state: up\n  type: ethernet\nroutes:\n  config:\n  - destination: 0.0.0.0/0\n    next-hop-address: 192.168.0.1\n    next-hop-interface: ens9\n    table-id: 254",
      "mac_interface_map": [
        {
          "mac_address": "52:54:00:82:23:e2",
          "logical_nic_name": "ens9"
        }
      ]
    }
  ]
}
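Because the NMState YAML ends up embedded as an escaped string, it can also be handy to pull it back out of the temp file with jq to confirm nothing was mangled along the way:

$ jq -r '.static_network_config[0].network_yaml' $DATA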
With the static IP configuration set, we can go ahead and generate our discovery ISO with another curl POST request. The command generates quite a bit of output, but our main concern is seeing the section where the static network configuration gets defined:
$ curl -X POST \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/image" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d @$DATA
(...) "static_network_config":"dns-resolver:\n config:\n server:\n - 192.168.0.10\ninterfaces:\n- ipv4:\n address:\n - ip: 192.168.0.204\n prefix-length: 24\n dhcp: false\n enabled: true\n name: ens9\n state: up\n type: ethernet\nroutes:\n config:\n - destination: 0.0.0.0/0\n next-hop-address: 192.168.0.1\n next-hop-interface: ens9\n table-id: 254HHHHH52:54:00:82:23:e2=ens9","type":"full-iso"} (...)
Now that the discovery image has been created, let's go ahead and download it:
$ curl -L \
  "http://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/image" \
  -o ~/discovery-image-$CLUSTER_NAME.iso \
  -H "Authorization: Bearer $TOKEN"
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0     0    0     0    0     0      0      0 --:--:-- --:--:-- --:--:--     0
100  984M  100  984M    0     0  10.4M      0  0:01:34  0:01:34 --:--:-- 10.5M
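Before copying the image around, a quick sanity check does not hurt to confirm we received an actual ISO image rather than an error response; the standard file utility should identify it as an ISO 9660 filesystem image:

$ file ~/discovery-image-$CLUSTER_NAME.iso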
Now that the image is downloaded, we can move it to wherever we need it in order to boot the SNO node. This node could be baremetal or, in my case, a virtual machine. If it were baremetal, a Dell for example, we might use racadm to mount the image as virtual media and then ipmitool to power on the server. Since I am using a virtual machine, I need to do a couple of things. First I copy the image over to my KVM hypervisor host. Next I ensure the power is off on my virtual machine; I can use ipmitool here because I am leveraging a virtual BMC. Then I use the virsh command to change the media to the ISO I copied over. I also format the disk image of my virtual machine so I do not have to mess around with boot order: the primary disk will be skipped because it is empty, and the CDROM will boot. Finally I power on the host to initiate the discovery phase. At this point we have to wait for the node to boot up and report back what was discovered during introspection. I usually wait 5 minutes before proceeding, hence the sleep command.
$ scp ~/discovery-image-kni1.iso root@192.168.0.5:/slowdata/images/
$ /usr/bin/ipmitool -I lanplus -H192.168.0.10 -p6252 -Uadmin -Ppassword chassis power off
$ ssh root@192.168.0.5 "virsh change-media rhacm-master-0 hda /slowdata/images/discovery-image-kni1.iso"
$ ssh root@192.168.0.5 "virt-format --format=raw --partition=none -a /fastdata2/images/master-0.img"
$ /usr/bin/ipmitool -I lanplus -H192.168.0.10 -p6252 -Uadmin -Ppassword chassis power on
$ sleep 300
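As an alternative to the fixed sleep, the cluster's hosts endpoint can be polled until the node actually registers. This is only a rough sketch; it assumes the hosts endpoint returns an empty JSON array until discovery completes and does not inspect the host status beyond its presence:

until curl -s \
  -H "Authorization: Bearer $TOKEN" \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/hosts" \
  | jq -e 'length > 0' > /dev/null; do
  # No host has reported in yet, check again in 30 seconds
  echo "waiting for the node to register..."
  sleep 30
done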
After 5 minutes the node should have reported in to the Assisted Installer portal. An inventory of the machine's hardware and capabilities is provided in the portal. We can now proceed with the deployment.
First, though, we need to ensure the hostname is set correctly. With DHCP it was set automatically, but since we used a static IP I found I needed to set it manually. To do this we will patch the cluster and set the requested_hostname:
$ curl -X PATCH \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d "{ \"requested_hostname\": \"$SNO_STATICIP_NODE_NAME.$CLUSTER_NAME.$CLUSTER_DOMAIN\"}" | jq
(...) "requested_hostname": "master-0.kni1.schmaustech.com", (...)
We also need to patch the machine network to the appropriate network:
$ curl -X PATCH \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  -d "{ \"machine_network_cidr\": \"$MACHINE_CIDR_NET\"}" | jq
(...) "machine_network_cidr": "192.168.0.0/24", (...)
After all of the preparation we can finally run the curl POST command that actually starts the installation process:
$ curl -X POST \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/actions/install" \
  -H "accept: application/json" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" | jq
(...) "status": "preparing-for-installation", "status_info": "Preparing cluster for installation", "status_updated_at": "2021-08-06T20:56:17.565Z", (...)
The installation process takes about 60 minutes to complete, so go grab lunch or a cup of coffee.
After 60 minutes or so we can check whether the cluster is installed or still in progress. The first thing we should do, though, is refresh our token again:
$ export TOKEN=$(curl \
  --silent \
  --data-urlencode "grant_type=refresh_token" \
  --data-urlencode "client_id=cloud-services" \
  --data-urlencode "refresh_token=${OFFLINE_ACCESS_TOKEN}" \
  https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token | \
  jq -r .access_token)
After we have refreshed our token, let's confirm whether the cluster has indeed finished installing. We can do this with a curl GET against the cluster ID. There will be a lot of output, but we are specifically looking for the status and status_info lines:
$ curl -s -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID" | jq .
(...)
"status": "installed",
"status_info": "Cluster is installed",
"status_updated_at": "2021-08-06T21:45:04.375Z",
(...)
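Rather than checking by hand, the same GET can be wrapped in a simple polling loop that waits for the installed status. This is just a sketch; keep in mind the bearer token expires periodically, so a long-running loop may need to refresh it along the way:

while true; do
  # Query the cluster and pull out just the status field
  STATUS=$(curl -s \
    -H "Authorization: Bearer $TOKEN" \
    "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID" | jq -r .status)
  echo "$(date) cluster status: $STATUS"
  [ "$STATUS" = "installed" ] && break
  # The access token may expire during a long wait and need refreshing
  sleep 120
done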
From the output above my cluster has completed, so now I can pull my kubeconfig down and redirect it to a file:
$ curl -s -X GET \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/downloads/kubeconfig" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" > kubeconfig-kni1
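In addition to the kubeconfig, the Assisted Installer API also exposes a credentials endpoint that returns the kubeadmin username, password and console URL. If you need those, a similar GET should do the trick; consult the swagger.yaml referenced at the end of this article for the exact response fields:

$ curl -s -X GET \
  "https://$ASSISTED_SERVICE_API/api/assisted-install/v1/clusters/$CLUSTER_ID/credentials" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $TOKEN" | jq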
Now let's export the KUBECONFIG variable and look at the cluster with some oc commands:
$ export KUBECONFIG=~/kubeconfig-kni1
$ oc get nodes -o wide
NAME                            STATUS   ROLES           AGE    VERSION           INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                                                        KERNEL-VERSION                 CONTAINER-RUNTIME
master-0.kni1.schmaustech.com   Ready    master,worker   156m   v1.21.1+051ac4f   192.168.0.204   none          Red Hat Enterprise Linux CoreOS 48.84.202107202156-0 (Ootpa)   4.18.0-305.10.2.el8_4.x86_64   cri-o://1.21.2-5.rhaos4.8.gitb27d974.el8
$ oc get co
NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.2     True        False         False      134m
baremetal                                  4.8.2     True        False         False      144m
cloud-credential                           4.8.2     True        False         False      147m
cluster-autoscaler                         4.8.2     True        False         False      146m
config-operator                            4.8.2     True        False         False      151m
console                                    4.8.2     True        False         False      135m
csi-snapshot-controller                    4.8.2     True        False         False      147m
dns                                        4.8.2     True        False         False      144m
etcd                                       4.8.2     True        False         False      146m
image-registry                             4.8.2     True        False         False      141m
ingress                                    4.8.2     True        False         False      141m
insights                                   4.8.2     True        False         False      135m
kube-apiserver                             4.8.2     True        False         False      143m
kube-controller-manager                    4.8.2     True        False         False      143m
kube-scheduler                             4.8.2     True        False         False      143m
kube-storage-version-migrator              4.8.2     True        False         False      151m
machine-api                                4.8.2     True        False         False      146m
machine-approver                           4.8.2     True        False         False      147m
machine-config                             4.8.2     True        False         False      143m
marketplace                                4.8.2     True        False         False      144m
monitoring                                 4.8.2     True        False         False      138m
network                                    4.8.2     True        False         False      152m
node-tuning                                4.8.2     True        False         False      146m
openshift-apiserver                        4.8.2     True        False         False      143m
openshift-controller-manager               4.8.2     True        False         False      146m
openshift-samples                          4.8.2     True        False         False      142m
operator-lifecycle-manager                 4.8.2     True        False         False      144m
operator-lifecycle-manager-catalog         4.8.2     True        False         False      147m
operator-lifecycle-manager-packageserver   4.8.2     True        False         False      144m
service-ca                                 4.8.2     True        False         False      151m
storage                                    4.8.2     True        False         False      146m
Everything looks good with this example Single Node OpenShift installation! If you are interested in pursuing more complex examples, it is worth exploring what else the Assisted Installer REST API offers. To do that, take a look at this swagger.yaml file and use it with the online Swagger Editor.