Tuesday, January 04, 2022

Using VMware for OpenShift BM IPI Provisioning


Anyone who has looked at the installation requirements for an OpenShift Baremetal IPI installation knows that a provisioning node is required for the deployment process.   This node can be either another physical server or a virtual machine, but either way it needs to be running Red Hat Enterprise Linux 8.   The most common approach is for a customer to take one of the cluster's physical nodes, install RHEL 8 on it, deploy OpenShift and then reincorporate that node into the newly built cluster as a worker.   I have personally used a provisioning node virtualized with KVM/libvirt on a RHEL 8 host; in that setup the deployment process, specifically the bootstrap virtual machine, is nested.   With that said, I am seeing a lot of requests from customers who want to leverage a virtual machine in VMware to handle the provisioning duties, especially since there is really no need to keep that node around after the provisioning process completes.

While it is entirely possible to use a VMware virtual machine as the provisioning node, there are some specific things that need to be configured to ensure that the nested bootstrap virtual machine can launch properly and obtain the networking it needs to deploy the OpenShift cluster.  The following highlights those requirements without providing a step-by-step installation guide, since I have written about the OpenShift BM IPI process many times before.

First let's quickly take a look at the architecture of the provisioning virtual machine on VMware.  The following figure shows a simple ESXi 7.x host (Intel NUC) with a single interface that carries multiple trunked vlans from a Cisco 3750.

From the Cisco 3750 we can see the switch port is configured to trunk the two vlans we will need to be present on the provisioning virtual machine running on the ESXi hypervisor host.   The first is vlan 40, the provisioning network used for PXE booting the cluster nodes.  Note that this vlan also needs to be our native vlan because PXE does not know about vlan tags.   The second is vlan 10, which provides access to the baremetal network, and this one can be tagged as such.  Other vlans are trunked to this port but they are not needed for this particular configuration; they are only there for flexibility when I create virtual machines for other lab testing.

!
interface GigabitEthernet1/0/6
 switchport trunk encapsulation dot1q
 switchport trunk native vlan 40
 switchport trunk allowed vlan 10,20,30,40,50
 switchport mode trunk
 spanning-tree portfast trunk
!
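If you want to confirm the switch side before moving on, a quick Cisco IOS verification (output varies by platform and is omitted here) is:

```text
switch# show interfaces gigabitethernet1/0/6 trunk
switch# show interfaces gigabitethernet1/0/6 switchport
```

The first command should list vlan 40 as native and vlans 10,20,30,40,50 as allowed on the trunk.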

Now let's log in to the VMware console and look at our networks from the ESXi point of view.   Below we can see that I have three networks: VM Network, baremetal and Management Network.   The VM Network is my provisioning network, the native vlan 40 in the diagram above, and provides the PXE boot network required for a BM IPI deployment when using PXE.  It's also the network that gives me access to this ESXi host.   The baremetal network is the vlan 10 network and will provide baremetal access for the bootstrap VM when it runs nested in my provisioning node.
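The same information is available from the ESXi shell if you prefer the command line. As a sketch, assuming SSH or the ESXi Shell is enabled on the host, the standard vSwitch port groups and their VLAN IDs can be listed with:

```shell
# List standard vSwitch port groups along with their vSwitch and VLAN ID
# (run from the ESXi shell; requires the ESXi Shell or SSH to be enabled)
esxcli network vswitch standard portgroup list
```

In my lab this shows VM Network with VLAN 0 (untagged, so it rides the native vlan 40 trunk) and baremetal with VLAN 10.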


If we look at the baremetal network, for example, we can see that the security policies for promiscuous mode, forged transmits and MAC changes are all set to yes.   By default VMware sets these to no, but they need to be enabled as I have here in order for the bootstrap VM, which runs nested on our virtual provisioning node, to get a baremetal IP address from DHCP.


To change these settings I just needed to edit the port group, select the accept radio buttons for those three options and then save it:
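The same change can also be scripted from the ESXi shell instead of clicking through the UI. A sketch, assuming the port group is named baremetal as in my lab:

```shell
# Allow promiscuous mode, MAC address changes and forged transmits
# on the "baremetal" port group so the nested bootstrap VM can use DHCP
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name=baremetal \
    --allow-promiscuous=true \
    --allow-mac-change=true \
    --allow-forged-transmits=true
```

Repeat the command with --portgroup-name="VM Network" to cover the provisioning network as well.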


After the baremetal network has been configured correctly I went ahead and made the same changes to the VM Network which again is my provisioning network:


Now that I have made the required network configurations I can go ahead and create my provisioning node virtual machine in VMware.   However we need to make sure that the VM is created to pass the hardware virtualization through to the VM.  Doing so ensure we will be able to launch a bootstrap VM nested inside the provisioning node when we go to do the baremetal IPI deployment.   Below is a screenshot where that configuration setting needs to be made.  The fields for Hardware Virtualization and IOMMU need to be checked:
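If you prefer to make this change outside the UI, the two checkboxes map to a pair of keys in the virtual machine's .vmx file (my understanding of the mapping; verify against your ESXi version). The first exposes hardware-assisted virtualization to the guest and the second exposes the IOMMU:

```text
vhv.enable = "TRUE"
vvtd.enable = "TRUE"
```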


With the hardware virtualization enabled we can go ahead and install Red Hat Enterprise Linux 8 on the virtual machine just like we would for the baremetal IPI deployment requirements.
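Once RHEL 8 is up, the nested virtualization tooling comes from the libvirt and qemu-kvm packages, which the baremetal IPI prerequisites call for anyway. A sketch of the usual preparation (package list trimmed to the virtualization pieces):

```shell
# Install the virtualization stack and start the libvirt daemon
sudo dnf install -y libvirt qemu-kvm virt-install
sudo systemctl enable --now libvirtd
```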

Once we have RHEL 8 installed we can further validate that the virtual machine in VMware is configured appropriately for us to run a nested VM inside by executing the following command:

$ virt-host-validate 
  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking if device /dev/kvm exists                                   : PASS
  QEMU: Checking if device /dev/kvm is accessible                            : PASS
  QEMU: Checking if device /dev/vhost-net exists                             : PASS
  QEMU: Checking if device /dev/net/tun exists                               : PASS
  QEMU: Checking for cgroup 'cpu' controller support                         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support                     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support                      : PASS
  QEMU: Checking for cgroup 'memory' controller support                      : PASS
  QEMU: Checking for cgroup 'devices' controller support                     : PASS
  QEMU: Checking for cgroup 'blkio' controller support                       : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI DMAR table found, IOMMU either disabled in BIOS or not supported by this hardware platform)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)

If everything passes (the last two warnings are okay) then one is ready to continue with a baremetal IPI deployment using the virtual machine in VMware as the provisioning node.
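When scripting the deployment end to end, the same check can gate the automation: treat WARN results like the two above as acceptable, but stop on any FAIL. A minimal sketch, with an embedded sample report standing in for live virt-host-validate output (in practice you would capture it with report="$(virt-host-validate)"):

```shell
# Sample report text; replace with: report="$(virt-host-validate)"
report='  QEMU: Checking for hardware virtualization                                 : PASS
  QEMU: Checking for device assignment IOMMU support                         : WARN (No ACPI DMAR table found)
  QEMU: Checking for secure guest support                                    : WARN (Unknown if this platform has Secure Guest support)'

# WARN results are tolerated; any FAIL means the host cannot run the
# nested bootstrap VM and the deployment should not proceed.
if printf '%s\n' "$report" | grep -q ': FAIL'; then
    status="not-ready"
else
    status="ready"
fi
echo "$status"
```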
