In my lab I use KVM virtual machines as my "baremetal" machines for testing OpenStack and OpenShift. In both cases I need something that provides power management to power the virtual machines off and on during deployment phases. This is where Virtual BMC (vBMC) comes in as a handy tool to provide that functionality. However, I really don't want to install vBMC on every physical host that provides my virtual machines. Thankfully, as this blog will explain, there is a way to run vBMC so that you can centrally manage all of the virtual machines.
First, let's pick a host that will be our centralized vBMC controller. This host could be a physical box or a virtual machine; it does not matter. It does, however, need SSH key authentication to any of the KVM hypervisor hosts that contain virtual machines we wish to control with vBMC.
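If you have not already set that up, the sketch below shows one way to do it, assuming you connect as root to the hypervisors at 192.168.0.4 and 192.168.0.5 (the addresses used later in this post) and that the libvirt client is available on the vBMC host for the connectivity check:

# ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# ssh-copy-id root@192.168.0.4
# ssh-copy-id root@192.168.0.5
# virsh -c qemu+ssh://root@192.168.0.4/system list --all

If the virsh command returns the list of domains on the remote hypervisor without prompting for a password, vBMC will be able to reach that hypervisor the same way.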
Once I have my vBMC host I will install the required package via rpm, since I did not have a repo that contained the package. If you have a repo that does contain the package, I would suggest using yum install instead (a sample command follows the rpm output below):
# rpm -ivh python2-virtualbmc-1.4.0-1.el7.noarch.rpm
Preparing...                          ################################# [100%]
Updating / installing...
   1:python2-virtualbmc-1.4.0-1.el7   ################################# [100%]
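For the repo-based route, something like the following should work, assuming your repo publishes the package under the same name as the rpm above:

# yum install -y python2-virtualbmc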
Once the package is installed we should be able to run the following command to see the command-line usage for vbmc when adding a host. If you get errors about cliff.app and zmq, please install these packages (python2-cliff.noarch & python2-zmq.x86_64); an example install command follows the help output below:
# vbmc add --help
usage: vbmc add [-h] [--username USERNAME] [--password PASSWORD]
                [--port PORT] [--address ADDRESS]
                [--libvirt-uri LIBVIRT_URI]
                [--libvirt-sasl-username LIBVIRT_SASL_USERNAME]
                [--libvirt-sasl-password LIBVIRT_SASL_PASSWORD]
                domain_name

Create a new BMC for a virtual machine instance

positional arguments:
  domain_name           The name of the virtual machine

optional arguments:
  -h, --help            show this help message and exit
  --username USERNAME   The BMC username; defaults to "admin"
  --password PASSWORD   The BMC password; defaults to "password"
  --port PORT           Port to listen on; defaults to 623
  --address ADDRESS     The address to bind to (IPv4 and IPv6 are supported);
                        defaults to ::
  --libvirt-uri LIBVIRT_URI
                        The libvirt URI; defaults to "qemu:///system"
  --libvirt-sasl-username LIBVIRT_SASL_USERNAME
                        The libvirt SASL username; defaults to None
  --libvirt-sasl-password LIBVIRT_SASL_PASSWORD
                        The libvirt SASL password; defaults to None
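If you do hit the cliff.app or zmq import errors mentioned above, installing the two packages named there should clear them, assuming your repos carry them under those names:

# yum install -y python2-cliff python2-zmq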
Now let's try adding a virtual machine called kube-master located on a remote hypervisor host:
# vbmc add --username admin --password password --port 6230 --address 192.168.0.10 --libvirt-uri qemu+ssh://root@192.168.0.4/system kube-master
Now let's add a second virtual machine on a different hypervisor. Notice that I increment the port number; this unique port is what gets targeted over IPMI to connect to the specific host we wish to power on/off or get a power status from:
# vbmc add --username admin --password password --port 6231 --address 192.168.0.10 --libvirt-uri qemu+ssh://root@192.168.0.5/system cube-vm1
Now let's start the vBMC process for each of them and confirm they are up and running:
# vbmc start kube-master
2019-06-17 08:48:05,649.649 6915 INFO VirtualBMC [-] Started vBMC instance for domain kube-master

# vbmc start cube-vm1
2019-06-17 14:49:39,491.491 6915 INFO VirtualBMC [-] Started vBMC instance for domain cube-vm1
# vbmc list
+-------------+---------+--------------+------+
| Domain name | Status  | Address      | Port |
+-------------+---------+--------------+------+
| cube-vm1    | running | 192.168.0.10 | 6231 |
| kube-master | running | 192.168.0.10 | 6230 |
+-------------+---------+--------------+------+
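You can also drill into a single entry with vbmc show, which prints the full set of properties (address, port, credentials, libvirt URI) for one domain. A quick example, assuming the kube-master entry added above:

# vbmc show kube-master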
Now that we have added a few virtual machines, let's validate that things are working by trying to power the hosts up and get a status. In this example we will check the power status of kube-master and power it on if it is off:
# ipmitool -I lanplus -H192.168.0.10 -p6230 -Uadmin -Ppassword chassis status
System Power         : off
Power Overload       : false
Power Interlock      : inactive
Main Power Fault     : false
Power Control Fault  : false
Power Restore Policy : always-off
Last Power Event     :
Chassis Intrusion    : inactive
Front-Panel Lockout  : inactive
Drive Fault          : false
Cooling/Fan Fault    : false

# ipmitool -I lanplus -H192.168.0.10 -p6230 -Uadmin -Ppassword chassis power on
Chassis Power Control: Up/On

# ipmitool -I lanplus -H192.168.0.10 -p6230 -Uadmin -Ppassword chassis status
System Power         : on
Power Overload       : false
Power Interlock      : inactive
Main Power Fault     : false
Power Control Fault  : false
Power Restore Policy : always-off
Last Power Event     :
Chassis Intrusion    : inactive
Front-Panel Lockout  : inactive
Drive Fault          : false
Cooling/Fan Fault    : false
In the next example we will see that cube-vm1 is powered on and then power it off:
# ipmitool -I lanplus -H192.168.0.10 -p6231 -Uadmin -Ppassword chassis status
System Power         : on
Power Overload       : false
Power Interlock      : inactive
Main Power Fault     : false
Power Control Fault  : false
Power Restore Policy : always-off
Last Power Event     :
Chassis Intrusion    : inactive
Front-Panel Lockout  : inactive
Drive Fault          : false
Cooling/Fan Fault    : false

# ipmitool -I lanplus -H192.168.0.10 -p6231 -Uadmin -Ppassword chassis power off
Chassis Power Control: Down/Off

# ipmitool -I lanplus -H192.168.0.10 -p6231 -Uadmin -Ppassword chassis status
System Power         : off
Power Overload       : false
Power Interlock      : inactive
Main Power Fault     : false
Power Control Fault  : false
Power Restore Policy : always-off
Last Power Event     :
Chassis Intrusion    : inactive
Front-Panel Lockout  : inactive
Drive Fault          : false
Cooling/Fan Fault    : false
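During a real deployment the provisioning tooling will often issue a reset rather than separate off/on calls, and the same per-VM port addressing applies. A quick example against kube-master, which we powered on above, assuming your vBMC version implements the reset command:

# ipmitool -I lanplus -H192.168.0.10 -p6230 -Uadmin -Ppassword chassis power reset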
Let's summarize what we just did. We had a vBMC host at IP address 192.168.0.10 where we installed vBMC and configured two different virtual machines, kube-master and cube-vm1, which lived on two completely different hypervisor hosts, IP addresses 192.168.0.4 and 192.168.0.5 respectively. This allowed us to remotely power manage those virtual machines without the need to install any additional software on those hypervisor hosts.
Given this flexibility, one could foresee a future with a centralized vBMC container that could in turn access any KubeVirt virtual machines deployed within that Kubernetes cluster. I guess it's only a matter of time.