In previous versions of OpenShift, if one wanted to switch from OpenShiftSDN to OVNKubernetes there was no migration path, and hence the cluster needed to be reinstalled. That burden becomes a thing of the past with OpenShift 4.8: starting with this version, the cluster can be migrated without re-installation. In the walkthrough below I will outline and show an example of how this process looks on a working OpenShift cluster.
First, let's cover some basics about the environment. I have a five-node OpenShift 4.8.0-fc.7 pre-release cluster deployed via baremetal IPI: three of the nodes are masters and the other two are workers. The current network type is configured as OpenShiftSDN.
Let's validate that the cluster is currently running without issues. We can start by confirming all of the nodes are in a Ready state:
$ oc get nodes
NAME                                 STATUS   ROLES    AGE     VERSION
master-0.n6s2d.dynamic.opentlc.com   Ready    master   31m     v1.21.0-rc.0+4b2b6ff
master-1.n6s2d.dynamic.opentlc.com   Ready    master   31m     v1.21.0-rc.0+4b2b6ff
master-2.n6s2d.dynamic.opentlc.com   Ready    master   31m     v1.21.0-rc.0+4b2b6ff
worker-0.n6s2d.dynamic.opentlc.com   Ready    worker   13m     v1.21.0-rc.0+4b2b6ff
worker-1.n6s2d.dynamic.opentlc.com   Ready    worker   8m53s   v1.21.0-rc.0+4b2b6ff
Next, confirm the cluster operators are all available and none are degraded:

$ oc get co
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.0-fc.7   True        False         False      51s
baremetal                                  4.8.0-fc.7   True        True          False      29m
cloud-credential                           4.8.0-fc.7   True        False         False      43m
cluster-autoscaler                         4.8.0-fc.7   True        False         False      29m
config-operator                            4.8.0-fc.7   True        False         False      30m
console                                    4.8.0-fc.7   True        False         False      7m30s
csi-snapshot-controller                    4.8.0-fc.7   True        False         False      15m
dns                                        4.8.0-fc.7   True        False         False      29m
etcd                                       4.8.0-fc.7   True        False         False      28m
image-registry                             4.8.0-fc.7   True        False         False      25m
ingress                                    4.8.0-fc.7   True        False         False      12m
insights                                   4.8.0-fc.7   True        False         False      23m
kube-apiserver                             4.8.0-fc.7   True        False         False      27m
kube-controller-manager                    4.8.0-fc.7   True        False         False      27m
kube-scheduler                             4.8.0-fc.7   True        False         False      28m
kube-storage-version-migrator              4.8.0-fc.7   True        False         False      17m
machine-api                                4.8.0-fc.7   True        False         False      24m
machine-approver                           4.8.0-fc.7   True        False         False      30m
machine-config                             4.8.0-fc.7   True        False         False      29m
marketplace                                4.8.0-fc.7   True        False         False      29m
monitoring                                 4.8.0-fc.7   True        False         False      8m9s
network                                    4.8.0-fc.7   True        False         False      30m
node-tuning                                4.8.0-fc.7   True        False         False      29m
openshift-apiserver                        4.8.0-fc.7   True        False         False      15m
openshift-controller-manager               4.8.0-fc.7   True        False         False      28m
openshift-samples                          4.8.0-fc.7   True        False         False      25m
operator-lifecycle-manager                 4.8.0-fc.7   True        False         False      29m
operator-lifecycle-manager-catalog         4.8.0-fc.7   True        False         False      29m
operator-lifecycle-manager-packageserver   4.8.0-fc.7   True        False         False      26m
service-ca                                 4.8.0-fc.7   True        False         False      30m
storage                                    4.8.0-fc.7   True        False         False      30m
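Eyeballing that table works on a small cluster, but the same check can be made repeatable. Below is a minimal sketch; check_cos is a hypothetical helper (not an oc subcommand) that reads `oc get co --no-headers` output on stdin and flags any operator that is not Available or is Degraded:

```shell
# Hypothetical helper: scan `oc get co --no-headers` output on stdin.
# Column 3 is AVAILABLE and column 5 is DEGRADED; print any operator
# that looks unhealthy and exit non-zero if one was found.
check_cos() {
  awk '$3 != "True" || $5 != "False" { print "unhealthy:", $1; bad = 1 }
       END { exit bad }'
}

# Against a live cluster this would be used as:
#   oc get co --no-headers | check_cos && echo "all operators healthy"
```

Running it before and after each migration step gives a quick pass/fail signal instead of a visual scan.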
Finally, verify the OpenShiftSDN pods in the openshift-sdn namespace are all running:

$ oc get pods -n openshift-sdn
NAME                   READY   STATUS    RESTARTS   AGE
sdn-controller-chcvr   1/1     Running   3          31m
sdn-controller-wzhdz   1/1     Running   0          31m
sdn-controller-zp4qr   1/1     Running   0          31m
sdn-czhf6              2/2     Running   0          9m14s
sdn-fvnt6              2/2     Running   0          31m
sdn-njcfn              2/2     Running   0          14m
sdn-nmpt2              2/2     Running   0          31m
sdn-zj2fz              2/2     Running   0          31m
Before making any changes, take a backup of the existing cluster network configuration in case it needs to be referenced later:

$ oc get Network.config.openshift.io cluster -o yaml > cluster-openshift-sdn.yaml
Now we can begin the migration by setting the migration field on the Cluster Network Operator configuration:

$ oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ "spec": { "migration": {"networkType": "OVNKubernetes" } } }'
network.operator.openshift.io/cluster patched
Applying that patch triggers the Machine Config Operator to roll out new rendered machine configs, so the machine config pools begin updating and the nodes cycle through the change one at a time:

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-1309c87d08601c58ddf8abd9bb08a7ea   False     True       False      3              0                   0                     0                      34m
worker   rendered-worker-67d75dc2ab02f295344703f157da14d7   False     True       False      2              0                   0                     0                      34m

$ oc describe node | egrep "hostname|machineconfig"
    kubernetes.io/hostname=master-0.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-master-1309c87d08601c58ddf8abd9bb08a7ea
    machineconfiguration.openshift.io/desiredConfig: rendered-master-1309c87d08601c58ddf8abd9bb08a7ea
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
    kubernetes.io/hostname=master-1.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-master-1309c87d08601c58ddf8abd9bb08a7ea
    machineconfiguration.openshift.io/desiredConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Working
    kubernetes.io/hostname=master-2.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-master-1309c87d08601c58ddf8abd9bb08a7ea
    machineconfiguration.openshift.io/desiredConfig: rendered-master-1309c87d08601c58ddf8abd9bb08a7ea
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
    kubernetes.io/hostname=worker-0.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-worker-67d75dc2ab02f295344703f157da14d7
    machineconfiguration.openshift.io/desiredConfig: rendered-worker-e17471318b8b3a61fc931cedeca303e1
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Working
    kubernetes.io/hostname=worker-1.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-worker-67d75dc2ab02f295344703f157da14d7
    machineconfiguration.openshift.io/desiredConfig: rendered-worker-67d75dc2ab02f295344703f157da14d7
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
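Rather than re-running oc get mcp by hand until the pools converge, the wait can be scripted. The sketch below defines wait_until, a hypothetical polling helper (not part of oc); the jsonpath expression in the usage comment is an assumption based on the MachineConfigPool Updated condition shown above:

```shell
# Hypothetical polling helper: re-run a check command every $2 seconds
# until it succeeds or $1 seconds have elapsed; returns 1 on timeout.
wait_until() {
  local timeout=$1 interval=$2 cmd=$3 elapsed=0
  until eval "$cmd"; do
    [ "$elapsed" -ge "$timeout" ] && return 1
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
}

# Against a live cluster, something like (assumed jsonpath, verify first):
#   wait_until 1800 10 'oc get mcp worker -o jsonpath="{.status.conditions[?(@.type==\"Updated\")].status}" | grep -q True'
```

Newer oc releases also support blocking directly on resource conditions with oc wait (for example, oc wait mcp --all --for=condition=Updated --timeout=30m), which may be simpler if it is available in your environment.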
After some time the rollout completes and every node reports the new rendered config with a state of Done:

$ oc get mcp
NAME     CONFIG                                             UPDATED   UPDATING   DEGRADED   MACHINECOUNT   READYMACHINECOUNT   UPDATEDMACHINECOUNT   DEGRADEDMACHINECOUNT   AGE
master   rendered-master-9b49007a4d027c9d7f40cfd3c485e31f   True      False      False      3              3                   3                     0                      46m
worker   rendered-worker-e17471318b8b3a61fc931cedeca303e1   True      False      False      2              2                   2                     0                      46m

$ oc describe node | egrep "hostname|machineconfig"
    kubernetes.io/hostname=master-0.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/desiredConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
    kubernetes.io/hostname=master-1.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/desiredConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
    kubernetes.io/hostname=master-2.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/desiredConfig: rendered-master-9b49007a4d027c9d7f40cfd3c485e31f
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
    kubernetes.io/hostname=worker-0.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-worker-e17471318b8b3a61fc931cedeca303e1
    machineconfiguration.openshift.io/desiredConfig: rendered-worker-e17471318b8b3a61fc931cedeca303e1
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
    kubernetes.io/hostname=worker-1.n6s2d.dynamic.opentlc.com
    machineconfiguration.openshift.io/controlPlaneTopology: HighlyAvailable
    machineconfiguration.openshift.io/currentConfig: rendered-worker-e17471318b8b3a61fc931cedeca303e1
    machineconfiguration.openshift.io/desiredConfig: rendered-worker-e17471318b8b3a61fc931cedeca303e1
    machineconfiguration.openshift.io/reason:
    machineconfiguration.openshift.io/state: Done
We can also confirm the new rendered machine configs reference OVNKubernetes by looking for the configure-ovs.sh ExecStart line:

$ oc get machineconfig rendered-master-9b49007a4d027c9d7f40cfd3c485e31f -o yaml | grep ExecStart | grep OVNKubernetes
            ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes

$ oc get machineconfig rendered-worker-e17471318b8b3a61fc931cedeca303e1 -o yaml | grep ExecStart | grep OVNKubernetes
            ExecStart=/usr/local/bin/configure-ovs.sh OVNKubernetes
With the machine configs in place, change the default cluster network type to OVNKubernetes:

$ oc patch Network.config.openshift.io cluster --type='merge' --patch '{ "spec": { "networkType": "OVNKubernetes" } }'
network.config.openshift.io/cluster patched
Next, wait for the multus daemonset rollout to finish:

$ oc -n openshift-multus rollout status daemonset/multus
Waiting for daemon set "multus" rollout to finish: 1 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 1 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 1 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 2 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 2 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 2 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 3 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 3 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 3 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 4 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 4 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 4 out of 5 new pods have been updated...
Waiting for daemon set "multus" rollout to finish: 4 of 5 updated pods are available...
daemon set "multus" successfully rolled out
To complete the migration, every node in the cluster must be rebooted. A small script handles this over SSH. Note the quoted heredoc delimiter ('EOF'), which keeps $(...) and $ip from being expanded while the script is being written, so they expand when the script actually runs:

$ cat << 'EOF' > ~/reboot-nodes.sh
#!/bin/bash
for ip in $(oc get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}')
do
  echo "reboot node $ip"
  ssh -o StrictHostKeyChecking=no core@$ip sudo shutdown -r -t 3
done
EOF
$ chmod +x ~/reboot-nodes.sh
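If direct SSH to the nodes is not an option, the same reboot can be driven through the API with oc debug, which starts a privileged debug pod on a node and can run commands against the host's root filesystem. This is a hedged sketch: reboot_nodes_via_debug is a hypothetical helper of my own naming, and it assumes your environment permits node debug pods:

```shell
# Hypothetical alternative to the SSH script: reboot each named node via
# `oc debug`, running systemctl in the host namespace through chroot /host.
reboot_nodes_via_debug() {
  local node
  for node in "$@"; do
    echo "reboot node ${node}"
    oc debug "node/${node}" -- chroot /host systemctl reboot
  done
}

# e.g. reboot_nodes_via_debug $(oc get nodes -o jsonpath='{.items[*].metadata.name}')
```

In a real run you would likely reboot the nodes one at a time and wait for each to return to Ready before continuing, rather than firing them all off at once.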
$ ~/reboot-nodes.sh
reboot node 10.20.0.100
Warning: Permanently added '10.20.0.100' (ECDSA) to the list of known hosts.
Shutdown scheduled for Fri 2021-06-04 13:44:07 UTC, use 'shutdown -c' to cancel.
reboot node 10.20.0.101
Warning: Permanently added '10.20.0.101' (ECDSA) to the list of known hosts.
Shutdown scheduled for Fri 2021-06-04 13:44:08 UTC, use 'shutdown -c' to cancel.
reboot node 10.20.0.102
Warning: Permanently added '10.20.0.102' (ECDSA) to the list of known hosts.
Shutdown scheduled for Fri 2021-06-04 13:44:08 UTC, use 'shutdown -c' to cancel.
reboot node 10.20.0.200
Warning: Permanently added '10.20.0.200' (ECDSA) to the list of known hosts.
Shutdown scheduled for Fri 2021-06-04 13:44:09 UTC, use 'shutdown -c' to cancel.
reboot node 10.20.0.201
Warning: Permanently added '10.20.0.201' (ECDSA) to the list of known hosts.
Shutdown scheduled for Fri 2021-06-04 13:44:10 UTC, use 'shutdown -c' to cancel.
Once the nodes come back up, confirm they are all Ready again:

$ oc get nodes
NAME                                 STATUS   ROLES    AGE   VERSION
master-0.n6s2d.dynamic.opentlc.com   Ready    master   64m   v1.21.0-rc.0+4b2b6ff
master-1.n6s2d.dynamic.opentlc.com   Ready    master   64m   v1.21.0-rc.0+4b2b6ff
master-2.n6s2d.dynamic.opentlc.com   Ready    master   64m   v1.21.0-rc.0+4b2b6ff
worker-0.n6s2d.dynamic.opentlc.com   Ready    worker   46m   v1.21.0-rc.0+4b2b6ff
worker-1.n6s2d.dynamic.opentlc.com   Ready    worker   41m   v1.21.0-rc.0+4b2b6ff
Now verify that the cluster network type has in fact changed to OVNKubernetes:

$ oc get network.config/cluster -o jsonpath='{.status.networkType}{"\n"}'
OVNKubernetes
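For automation, that spot check can be turned into a pass/fail step. A minimal sketch, where expect_network_type is a hypothetical helper that compares an expected network type against the value read from the cluster:

```shell
# Hypothetical check: $1 is the expected network type, $2 the value read
# from the cluster; report the result and return non-zero on a mismatch.
expect_network_type() {
  local want=$1 got=$2
  if [ "$got" = "$want" ]; then
    echo "network type is ${got}"
  else
    echo "network type is ${got}, expected ${want}" >&2
    return 1
  fi
}

# Against a live cluster:
#   expect_network_type OVNKubernetes \
#     "$(oc get network.config/cluster -o jsonpath='{.status.networkType}')"
```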
Also check that no pods are stuck in a bad state. Only the header row comes back, meaning every pod is either Running or Completed:

$ oc get pods --all-namespaces -o wide --sort-by='{.spec.nodeName}' | egrep -v "Running|Completed"
NAMESPACE   NAME   READY   STATUS   RESTARTS   AGE   IP   NODE   NOMINATED NODE   READINESS GATES
And the cluster operators remain healthy:

$ oc get co
NAME                                       VERSION      AVAILABLE   PROGRESSING   DEGRADED   SINCE
authentication                             4.8.0-fc.7   True        False         False      4m36s
baremetal                                  4.8.0-fc.7   True        False         False      63m
cloud-credential                           4.8.0-fc.7   True        False         False      76m
cluster-autoscaler                         4.8.0-fc.7   True        False         False      63m
config-operator                            4.8.0-fc.7   True        False         False      64m
console                                    4.8.0-fc.7   True        False         False      4m21s
csi-snapshot-controller                    4.8.0-fc.7   True        False         False      21m
dns                                        4.8.0-fc.7   True        False         False      63m
etcd                                       4.8.0-fc.7   True        False         False      62m
image-registry                             4.8.0-fc.7   True        False         False      59m
ingress                                    4.8.0-fc.7   True        False         False      4m5s
insights                                   4.8.0-fc.7   True        False         False      57m
kube-apiserver                             4.8.0-fc.7   True        False         False      61m
kube-controller-manager                    4.8.0-fc.7   True        False         False      60m
kube-scheduler                             4.8.0-fc.7   True        False         False      61m
kube-storage-version-migrator              4.8.0-fc.7   True        False         False      27m
machine-api                                4.8.0-fc.7   True        False         False      58m
machine-approver                           4.8.0-fc.7   True        False         False      63m
machine-config                             4.8.0-fc.7   True        False         False      63m
marketplace                                4.8.0-fc.7   True        False         False      63m
monitoring                                 4.8.0-fc.7   True        False         False      4m6s
network                                    4.8.0-fc.7   True        False         False      64m
node-tuning                                4.8.0-fc.7   True        False         False      63m
openshift-apiserver                        4.8.0-fc.7   True        False         False      4m40s
openshift-controller-manager               4.8.0-fc.7   True        False         False      62m
openshift-samples                          4.8.0-fc.7   True        False         False      59m
operator-lifecycle-manager                 4.8.0-fc.7   True        False         False      63m
operator-lifecycle-manager-catalog         4.8.0-fc.7   True        False         False      63m
operator-lifecycle-manager-packageserver   4.8.0-fc.7   True        False         False      4m30s
service-ca                                 4.8.0-fc.7   True        False         False      64m
storage                                    4.8.0-fc.7   True        False         False      64m
With the migration confirmed, we can clean up. First remove the migration field from the Cluster Network Operator configuration:

$ oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ "spec": { "migration": null } }'
network.operator.openshift.io/cluster patched
Then remove the now unused OpenShiftSDN configuration from defaultNetwork (in this cluster it was never customized, so the patch reports no change):

$ oc patch Network.operator.openshift.io cluster --type='merge' --patch '{ "spec": { "defaultNetwork": { "openshiftSDNConfig": null } } }'
network.operator.openshift.io/cluster patched (no change)
Finally, delete the openshift-sdn namespace:

$ oc delete namespace openshift-sdn
namespace "openshift-sdn" deleted
All that remains is the OVNKubernetes deployment running in the openshift-ovn-kubernetes namespace:

$ oc get pods -n openshift-ovn-kubernetes
NAME                   READY   STATUS    RESTARTS   AGE
ovnkube-master-5v8ch   6/6     Running   14         50m
ovnkube-master-kmwkp   6/6     Running   6          50m
ovnkube-master-zlgnv   6/6     Running   14         50m
ovnkube-node-7vrmq     4/4     Running   4          50m
ovnkube-node-kz4l9     4/4     Running   4          50m
ovnkube-node-nhbdz     4/4     Running   5          50m
ovnkube-node-nwnnk     4/4     Running   5          50m
ovnkube-node-t27gb     4/4     Running   5          50m

At this point the migration from OpenShiftSDN to OVNKubernetes is complete, without re-installing the cluster.