Sunday, October 30, 2022

Walk Open Ports in OpenShift Pods

I was recently working with a customer who needed to see which ports were in use inside a few of their OpenShift containers. This led me to write a small script that can walk the ports in use for a single pod, for all pods in a given namespace, or across the entire cluster. Let's take a quick look at some examples of its usage in the rest of this short blog.
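I won't reproduce the full script here, but the general idea is straightforward: for each target pod, read /proc/net/tcp with oc exec and decode the kernel's hex-encoded addresses, ports, states, and inodes into the columns shown in the examples below. The following is only a rough sketch of that approach under those assumptions (the real oc-ports.sh also adds the Namespace and Pod columns and handles option parsing), not the actual script:

#!/bin/bash
# Rough sketch (assumed approach, not the actual oc-ports.sh): dump /proc/net/tcp
# from inside a pod and decode the hex-encoded connection entries.
NS=$1    # namespace
POD=$2   # pod name

state_name() {
  # /proc/net/tcp encodes the socket state as hex; translate the common ones
  case "$1" in
    01) echo TCP_ESTABLISHED ;;
    06) echo TCP_TIME_WAIT ;;
    0A) echo TCP_LISTEN ;;
    *)  echo "TCP_STATE_$1" ;;
  esac
}

hex_to_ip() {
  # IPv4 addresses are stored little-endian, e.g. 1F01800A -> 10.128.1.31
  printf '%d.%d.%d.%d' "0x${1:6:2}" "0x${1:4:2}" "0x${1:2:2}" "0x${1:0:2}"
}

printf '%-15s %-9s %-15s %-10s %-10s %s\n' LocalAddr LocalPort RemoteAddr RemotePort Inode PortState
oc exec -n "$NS" "$POD" -- cat /proc/net/tcp | tail -n +2 | \
while read -r _ laddr raddr st _ _ _ _ _ inode _; do
  printf '%-15s %-9s %-15s %-10s %-10s %s\n' \
    "$(hex_to_ip "${laddr%%:*}")" "$((16#${laddr##*:}))" \
    "$(hex_to_ip "${raddr%%:*}")" "$((16#${raddr##*:}))" \
    "$inode" "$(state_name "$st")"
done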

First, let's demonstrate how to walk just a single namespace and pod. In this example I will use the openshift-storage namespace and the rook-ceph-operator pod.

$ ./oc-ports.sh -n openshift-storage -p rook-ceph-operator-85d47cf975-l69r4
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.128.1.31 52558 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47746 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47768 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 41142 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47742 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 41174 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47796 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47150 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47800 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 59690 172.30.51.97 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47750 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 41158 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 52586 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 59816 172.30.244.234 443 786832572 TCP_ESTABLISHED openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 37108 172.30.164.108 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47138 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 41096 172.30.244.234 443 816946818 TCP_ESTABLISHED openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47752 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 41154 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 52568 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 55324 172.30.0.1 443 789789504 TCP_ESTABLISHED openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 47782 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4
10.128.1.31 52576 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4

For our next test, let's provide only a namespace and let the script enumerate all of the pods within it. The output from this is quite lengthy, so I will truncate most of it.
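Conceptually, namespace mode is just the single-pod walk repeated for every pod the script finds; a hypothetical equivalent using the single-pod form from the previous example would look something like this:

for pod in $(oc get pods -n openshift-machine-api -o jsonpath='{.items[*].metadata.name}'); do
  ./oc-ports.sh -n openshift-machine-api -p "$pod"
done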

$ ./oc-ports.sh -n openshift-machine-api
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
127.0.0.1 9191 0.0.0.0 0 205334 TCP_LISTEN openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs
127.0.0.1 9191 127.0.0.1 59090 253479495 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs
10.129.0.51 41020 172.30.0.1 443 653600899 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs
127.0.0.1 59090 127.0.0.1 9191 253497374 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs
10.129.0.51 59338 172.30.0.1 443 669220775 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.128.0.31 54666 172.30.0.1 443 789856883 TCP_ESTABLISHED openshift-machine-api cluster-baremetal-operator-64f9997468-sj5xh
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.130.0.43 51300 172.30.0.1 443 548661167 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69
10.130.0.43 46504 172.30.0.1 443 535870258 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69
10.130.0.43 35602 172.30.0.1 443 548638559 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69
10.130.0.43 46526 172.30.0.1 443 535879071 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69
10.130.0.43 46552 172.30.0.1 443 535882997 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69
10.130.0.43 46520 172.30.0.1 443 535868211 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
127.0.0.1 8080 0.0.0.0 0 308875 TCP_LISTEN openshift-machine-api machine-api-operator-8595794ccc-lvdd5
127.0.0.1 41508 127.0.0.1 8080 308168 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5
127.0.0.1 8080 127.0.0.1 41508 329823 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5
10.128.0.41 39462 172.30.0.1 443 789799809 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5
10.128.0.41 54512 172.30.0.1 443 816812059 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
0.0.0.0 31815 0.0.0.0 0 632366 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2
127.0.0.1 10248 0.0.0.0 0 15033 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2
0.0.0.0 31625 0.0.0.0 0 612309 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2
192.168.0.111 10250 0.0.0.0 0 29659 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2
127.0.0.1 6060 0.0.0.0 0 55200 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2
192.168.0.111 9100 0.0.0.0 0 69903 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2
(...)
127.0.0.1 80 127.0.0.1 35180 0 TCP_TIME_WAIT openshift-machine-api metal3-image-cache-tqrdq
192.168.0.110 51874 192.168.0.112 2379 653534755 TCP_ESTABLISHED openshift-machine-api metal3-image-cache-tqrdq
10.129.0.1 54378 172.30.0.1 443 653518566 TCP_ESTABLISHED openshift-machine-api metal3-image-cache-tqrdq
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.129.0.58 36866 172.30.0.1 443 653597918 TCP_ESTABLISHED openshift-machine-api metal3-image-customization-5c85d5f5f8-lbslg

Finally, let's run the command with the all option, which produces even more output than the previous commands. For troubleshooting, one could redirect the output to a file if needed. I broke out of the run after a bit, but the output gives an idea of what one might see.
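For example, to capture a full cluster-wide run for later review (cluster-ports.txt here is just an arbitrary filename, and 2>&1 also captures anything the underlying oc commands print to stderr):

$ ./oc-ports.sh -a > cluster-ports.txt 2>&1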

$ ./oc-ports.sh -a
No resources found in default namespace.
No resources found in kni22 namespace.
No resources found in kube-node-lease namespace.
No resources found in kube-public namespace.
No resources found in kube-system namespace.
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.130.0.31 34770 172.30.0.1 443 535870264 TCP_ESTABLISHED open-cluster-management-agent klusterlet-5bb4b4f75c-7t9pr
10.130.0.31 56326 192.168.0.220 6443 548763858 TCP_ESTABLISHED open-cluster-management-agent klusterlet-5bb4b4f75c-7t9pr
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.129.0.2 35954 172.30.0.1 443 653602954 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-7phlw
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.128.0.5 37114 172.30.0.1 443 789857695 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-n8nd9
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.130.0.32 55454 192.168.0.220 6443 543474785 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-rdsgw
10.130.0.32 59700 172.30.0.1 443 535874302 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-rdsgw
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.129.0.33 34538 172.30.0.1 443 653582074 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-2hpgx
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.130.0.33 33190 172.30.0.1 443 535807747 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-f8gwf
10.130.0.33 54940 192.168.0.220 6443 543910317 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-f8gwf
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.128.0.7 37226 172.30.0.1 443 789858781 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-wmcnf
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.130.0.36 35850 172.30.0.1 443 543588502 TCP_ESTABLISHED open-cluster-management-agent-addon application-manager-8f8589977-jhzd4
LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod
--------- --------- ---------- ---------- --------- --------- --------- -----------
10.130.0.35 45864 172.30.0.1 443 538396535 TCP_ESTABLISHED open-cluster-management-agent-addon cert-policy-controller-fd4fd8d5d-vcxjh
^C

Hopefully this tool proves useful to anyone interested in examining connectivity among the pods of an OpenShift or Kubernetes cluster.

Saturday, October 22, 2022

Deploy Microshift on RHEL8 with Zigbee2MQTT Workload

Edge devices deployed in the field, whether in manufacturing, transportation, communications, or space, pose very different operational and business challenges from those of the data center and cloud computing. These challenges motivate different engineering trade-offs for Kubernetes at the far edge than for cloud or near-edge scenarios. Enter Microshift, whose goals are to address the following use cases:

  • Parsimonious use of system resources (CPU, memory, network, storage)
  • Toleration of bandwidth and latency networking constraints
  • Non-disruptive upgrades with rollback capabilities
  • Integration with edge OSes like Fedora IoT and RHEL for Edge
  • A consistent development and management experience on par with OpenShift

Let's explore this potential edge savior as we dive into a real-world example of deploying Microshift with a functional workload in this blog.

Lab Environment

Before we start, let's quickly review the lab environment we will be using for this demonstration. The lab device itself is a virtual machine with the following properties:

  • KVM Virtual Machine
  • RHEL 8.6 Installed
  • 4 vCPUs
  • 8GB of Memory
  • 2 x 120GB disk
  • Zigbee 3.0 USB Dongle Plus E

When the deployment of Microshift and our Zigbee2MQTT workload is complete, we will end up with an environment that looks like the following diagram:

Dependencies, Building & Installing Microshift

Now that we have reviewed the lab environment and a diagram of what the finished demonstration will look like, let's go ahead and start building out the environment. To begin, we need to install some dependency packages on the Red Hat Enterprise Linux 8.6 host.

$ sudo dnf install -y git cockpit make golang selinux-policy-devel rpm-build bash-completion

Once the dependencies have been installed, we can clone the Microshift repository from GitHub.

$ git clone https://github.com/openshift/microshift.git ~/microshift
Cloning into '/home/bschmaus/microshift'...
remote: Enumerating objects: 47728, done.
remote: Counting objects: 100% (766/766), done.
remote: Compressing objects: 100% (264/264), done.
remote: Total 47728 (delta 595), reused 538 (delta 502), pack-reused 46962
Receiving objects: 100% (47728/47728), 46.05 MiB | 10.85 MiB/s, done.
Resolving deltas: 100% (24411/24411), done.
Updating files: 100% (13866/13866), done.

Next, let's change into the microshift directory. From there we can issue a make rpm command to build the necessary Microshift RPMs.

$ cd microshift/
$ make rpm
fatal: No names found, cannot describe anything.
BUILD=rpm \
    SOURCE_GIT_COMMIT=60b605d2 \
    SOURCE_GIT_TREE_STATE=clean RELEASE_BASE=4.12.0 \
    RELEASE_PRE=4.12.0-0.microshift ./packaging/rpm/make-rpm.sh local
# Creating local tarball
tar: Removing leading `/home/bschmaus/microshift/packaging/rpm/../..' from member names
tar: Removing leading `/home/bschmaus/microshift/packaging/rpm/../../' from member names
# Building RPM packages
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
warning: Missing build-id in /home/bschmaus/microshift/_output/rpmbuild/BUILDROOT/microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64/usr/bin/microshift
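Before moving on, it's worth confirming the RPMs landed where the install step below expects to find them:

$ ls ~/microshift/_output/rpmbuild/RPMS/*/*.rpm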

Once the packages finish building, we can move on to enabling the rhocp-4.11 and fast-datapath repositories using subscription-manager. We need them because installing Microshift will pull in some additional dependencies from those repositories.

$ sudo subscription-manager repos --enable rhocp-4.11-for-rhel-8-$(uname -i)-rpms --enable fast-datapath-for-rhel-8-$(uname -i)-rpms
Repository 'rhocp-4.11-for-rhel-8-x86_64-rpms' is enabled for this system.
Repository 'fast-datapath-for-rhel-8-x86_64-rpms' is enabled for this system.

Now we can initiate a local install pointing to the directory where the Microshift rpms were built.

$ sudo dnf localinstall -y ~/microshift/_output/rpmbuild/RPMS/*/*.rpm Updating Subscription Management repositories. Red Hat OpenShift Container Platform 4.11 for RHEL 8 x86_64 (RPMs) 183 kB/s | 122 kB 00:00 Fast Datapath for RHEL 8 x86_64 (RPMs) 938 kB/s | 486 kB 00:00 Dependencies resolved. =================================================================================================================================================================================================================== Package Architecture Version Repository Size =================================================================================================================================================================================================================== Installing: microshift x86_64 4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8 @commandline 25 M microshift-networking x86_64 4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8 @commandline 20 k microshift-selinux noarch 4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8 @commandline 20 k Installing dependencies: NetworkManager-ovs x86_64 1:1.36.0-7.el8_6 rhel-8-for-x86_64-baseos-rpms 171 k conntrack-tools x86_64 1.4.4-10.el8 rhel-8-for-x86_64-baseos-rpms 204 k cri-o x86_64 1.24.2-9.rhaos4.11.gitac6f687.el8 rhocp-4.11-for-rhel-8-x86_64-rpms 24 M cri-tools x86_64 1.24.2-6.el8 rhocp-4.11-for-rhel-8-x86_64-rpms 6.3 M libnetfilter_cthelper x86_64 1.0.0-15.el8 rhel-8-for-x86_64-baseos-rpms 24 k libnetfilter_cttimeout x86_64 1.0.0-11.el8 rhel-8-for-x86_64-baseos-rpms 24 k libnetfilter_queue x86_64 1.0.4-3.el8 rhel-8-for-x86_64-baseos-rpms 31 k openvswitch-selinux-extra-policy noarch 1.0-29.el8fdp fast-datapath-for-rhel-8-x86_64-rpms 16 k openvswitch2.17 x86_64 2.17.0-50.el8fdp fast-datapath-for-rhel-8-x86_64-rpms 17 M Transaction Summary =================================================================================================================================================================================================================== Install 12 Packages Total size: 72 M Total download size: 47 M Installed size: 300 M Downloading Packages: (1/9): openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch.rpm 59 kB/s | 16 kB 00:00 (2/9): openvswitch2.17-2.17.0-50.el8fdp.x86_64.rpm 6.4 MB/s | 17 MB 00:02 (3/9): cri-tools-1.24.2-6.el8.x86_64.rpm 2.2 MB/s | 6.3 MB 00:02 (4/9): libnetfilter_cttimeout-1.0.0-11.el8.x86_64.rpm 155 kB/s | 24 kB 00:00 (5/9): conntrack-tools-1.4.4-10.el8.x86_64.rpm 1.3 MB/s | 204 kB 00:00 (6/9): libnetfilter_cthelper-1.0.0-15.el8.x86_64.rpm 238 kB/s | 24 kB 00:00 (7/9): libnetfilter_queue-1.0.4-3.el8.x86_64.rpm 197 kB/s | 31 kB 00:00 (8/9): NetworkManager-ovs-1.36.0-7.el8_6.x86_64.rpm 1.3 MB/s | 171 kB 00:00 (9/9): cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64.rpm 4.5 MB/s | 24 MB 00:05 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 8.9 MB/s | 47 MB 00:05 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Installing : microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 1/12 Running scriptlet: microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 1/12 Installing : NetworkManager-ovs-1:1.36.0-7.el8_6.x86_64 2/12 Installing : libnetfilter_queue-1.0.4-3.el8.x86_64 3/12 Running scriptlet: libnetfilter_queue-1.0.4-3.el8.x86_64 3/12 Installing : libnetfilter_cthelper-1.0.0-15.el8.x86_64 4/12 Running scriptlet: libnetfilter_cthelper-1.0.0-15.el8.x86_64 4/12 Installing : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 5/12 Running scriptlet: libnetfilter_cttimeout-1.0.0-11.el8.x86_64 5/12 Installing : conntrack-tools-1.4.4-10.el8.x86_64 6/12 Running scriptlet: conntrack-tools-1.4.4-10.el8.x86_64 6/12 Running scriptlet: openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 7/12 Installing : openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 7/12 Running scriptlet: openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 7/12 Running scriptlet: openvswitch2.17-2.17.0-50.el8fdp.x86_64 8/12 Installing : openvswitch2.17-2.17.0-50.el8fdp.x86_64 8/12 Running scriptlet: openvswitch2.17-2.17.0-50.el8fdp.x86_64 8/12 Installing : microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 9/12 Running scriptlet: microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 9/12 Warning: The unit file, source configuration file or drop-ins of NetworkManager.service changed on disk. Run 'systemctl daemon-reload' to reload units. Installing : cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 10/12 Running scriptlet: cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 10/12 Installing : cri-tools-1.24.2-6.el8.x86_64 11/12 Installing : microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Running scriptlet: microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Running scriptlet: microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 12/12 Running scriptlet: openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 12/12 Running scriptlet: microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Verifying : cri-tools-1.24.2-6.el8.x86_64 1/12 Verifying : cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 2/12 Verifying : openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 3/12 Verifying : openvswitch2.17-2.17.0-50.el8fdp.x86_64 4/12 Verifying : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 5/12 Verifying : conntrack-tools-1.4.4-10.el8.x86_64 6/12 Verifying : libnetfilter_cthelper-1.0.0-15.el8.x86_64 7/12 Verifying : libnetfilter_queue-1.0.4-3.el8.x86_64 8/12 Verifying : NetworkManager-ovs-1:1.36.0-7.el8_6.x86_64 9/12 Verifying : microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 10/12 Verifying : microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 11/12 Verifying : microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Installed products updated. 
Installed: NetworkManager-ovs-1:1.36.0-7.el8_6.x86_64 conntrack-tools-1.4.4-10.el8.x86_64 cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 cri-tools-1.24.2-6.el8.x86_64 libnetfilter_cthelper-1.0.0-15.el8.x86_64 libnetfilter_cttimeout-1.0.0-11.el8.x86_64 libnetfilter_queue-1.0.4-3.el8.x86_64 microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch openvswitch2.17-2.17.0-50.el8fdp.x86_64 Complete!

Before we can start Microshift we need to configure a few more items. The first of these is the firewall rules.

$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
success
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
success
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=5353/udp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/udp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
success
$ sudo firewall-cmd --reload
success

After the firewall rules, we need to ensure a rhel volume group exists, because TopoLVM, which ships as part of Microshift, uses a volume group named rhel by default. On our system there is a second disk, sdb, that I will use to create that volume group.

$ sudo lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0   120G  0 disk
├─sda1                 8:1    0   600M  0 part /boot/efi
├─sda2                 8:2    0     1G  0 part /boot
└─sda3                 8:3    0 118.4G  0 part
  ├─rhel_sno3-root   253:0    0    70G  0 lvm  /
  ├─rhel_sno3-swap   253:1    0   9.6G  0 lvm  [SWAP]
  └─rhel_sno3-home   253:2    0  38.8G  0 lvm  /home
sdb                    8:16   0   120G  0 disk
sr0                   11:0    1  10.7G  0 rom

We can create the volume group using the vgcreate command.

$ sudo vgcreate rhel /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "rhel" successfully created

We can then validate the volume group was created with the vgs command.

$ sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  rhel        1   0   0 wz--n- <120.00g <120.00g
  rhel_sno3   1   3   0 wz--n- <118.43g        0

Another configuration item we need is to place our pull secret in the file openshift-pull-secret under the /etc/crio directory.

$ sudo vi /etc/crio/openshift-pull-secret
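The contents come from the pull secret associated with your Red Hat account (downloadable from the Red Hat Hybrid Cloud Console). Whether you paste it in with vi as above or write it out with tee as in this sketch, it is a standard registry auth JSON along these lines, with the placeholders replaced by your real credentials:

$ sudo tee /etc/crio/openshift-pull-secret > /dev/null << 'EOF'
{
  "auths": {
    "cloud.openshift.com": { "auth": "<base64-credentials>", "email": "<your-email>" },
    "registry.redhat.io": { "auth": "<base64-credentials>", "email": "<your-email>" }
  }
}
EOF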

And finally, since we will be running oc and kubectl commands against Microshift, we need to install the openshift-clients package.

$ sudo dnf install -y openshift-clients

We have now reached the point where we can enable and start both crio and Microshift.

$ sudo systemctl enable crio --now
Created symlink /etc/systemd/system/cri-o.service → /usr/lib/systemd/system/crio.service.
Created symlink /etc/systemd/system/multi-user.target.wants/crio.service → /usr/lib/systemd/system/crio.service.
$ sudo systemctl enable microshift --now
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
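If either service fails to come up cleanly, or the pods in the next step never appear, the service status and journal are the first places I would look:

$ sudo systemctl status microshift --no-pager
$ sudo journalctl -u microshift -f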

Once the services have started, let's create a hidden directory called .kube and copy the kubeconfig into it.

$ mkdir ~/.kube
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

After a few minutes we can issue an oc get pods -A and hopefully see the following pods running.

$ oc get pods -A
NAMESPACE                  NAME                                  READY   STATUS    RESTARTS   AGE
openshift-dns              dns-default-h58l2                     2/2     Running   0          34m
openshift-dns              node-resolver-nj296                   1/1     Running   0          34m
openshift-ingress          router-default-559ff9d676-vvzjs       1/1     Running   0          34m
openshift-ovn-kubernetes   ovnkube-master-s74f5                  4/4     Running   0          34m
openshift-ovn-kubernetes   ovnkube-node-dtbbv                    1/1     Running   0          34m
openshift-service-ca       service-ca-5f9bc879d8-dnxcj           1/1     Running   0          34m
openshift-storage          topolvm-controller-5cbd9d9684-zs7v6   4/4     Running   0          34m
openshift-storage          topolvm-node-779zz                    4/4     Running   0          34m

If everything looks correct in the last step we can proceed to deploying our workloads.

Deploying the Workloads

For my workload I am choosing to run Zigbee2MQTT, which allows one to control smart home devices via the Zigbee protocol. The Zigbee protocol is used in such applications as:

  • Home automation
  • Wireless sensor networks
  • Industrial control systems
  • Embedded sensing
  • Medical data collection
  • Smoke and intruder warning
  • Building automation
  • Remote wireless microphone configuration

Zigbee2MQTT also has a few dependencies we need to install: Ser2sock and Mosquitto. Let's get started by creating a namespace called zigbee.

$ oc create namespace zigbee
namespace/zigbee created

Once the namespace is created, we can create a service and a deployment file for the Ser2sock application. Ser2sock exposes a serial device over TCP/IP and will let us communicate with the Sonoff Zigbee dongle.

$ cat << EOF > ser2sock-service.yaml apiVersion: v1 kind: Service metadata: name: ser2sock namespace: zigbee spec: ports: - name: server port: 10000 protocol: TCP targetPort: server selector: app.kubernetes.io/instance: ser2sock app.kubernetes.io/name: ser2sock sessionAffinity: None type: ClusterIP EOF $ cat << EOF > ser2sock-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: ser2sock namespace: zigbee spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 3 selector: matchLabels: app.kubernetes.io/instance: ser2sock app.kubernetes.io/name: ser2sock strategy: type: Recreate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: ser2sock app.kubernetes.io/name: ser2sock spec: automountServiceAccountToken: true containers: - env: - name: BAUD_RATE value: "115200" - name: LISTENER_PORT value: "10000" - name: SERIAL_DEVICE value: /dev/ttyACM0 - name: TZ value: UTC image: tenstartups/ser2sock:latest imagePullPolicy: Always livenessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 10000 timeoutSeconds: 1 name: ser2sock ports: - containerPort: 10000 name: server protocol: TCP readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 10000 timeoutSeconds: 1 resources: {} securityContext: privileged: true startupProbe: failureThreshold: 30 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 10000 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 EOF

With the resource files generated, let's create them against the Microshift environment.

$ oc create -f ser2sock-service.yaml
service/ser2sock created
$ oc create -f ser2sock-deployment.yaml
deployment.apps/ser2sock created

We can validate that the service was created and confirm the pod is running correctly by checking its initial logs.

$ oc get pods -n zigbee
NAME                        READY   STATUS    RESTARTS   AGE
ser2sock-79db88f44c-lfg74   1/1     Running   0          21s
$ oc get svc -n zigbee
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
ser2sock   ClusterIP   10.43.226.155   <none>        10000/TCP   41s
$ oc logs ser2sock-79db88f44c-lfg74 -n zigbee
[✔] Serial 2 Socket Relay version V1.5.5 starting
[✔] Listening socket created on port 10000
[✔] Start wait loop using ser2sock communication mode
[✔] Opened com port at /dev/ttyACM0
[✔] Setting speed 115200
[✔] Set speed successful

With Ser2sock up and running, we can proceed to configure and install Mosquitto. Mosquitto is a message broker that implements the MQTT protocol and provides a lightweight way to carry out messaging using a publish/subscribe model. We need to create three resource files here: a configmap, a service, and a deployment.

$ cat << EOF > mosquitto-configmap.yaml apiVersion: v1 data: mosquitto.conf: | per_listener_settings false listener 1883 allow_anonymous true kind: ConfigMap metadata: name: mosquitto-config namespace: zigbee EOF $ cat << EOF > mosquitto-service.yaml apiVersion: v1 kind: Service metadata: name: mosquitto namespace: zigbee spec: ports: - name: mqtt port: 1883 protocol: TCP targetPort: mqtt selector: app.kubernetes.io/instance: mosquitto app.kubernetes.io/name: mosquitto sessionAffinity: None type: ClusterIP EOF $ cat << EOF > mosquitto-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: mosquitto namespace: zigbee spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 3 selector: matchLabels: app.kubernetes.io/instance: mosquitto app.kubernetes.io/name: mosquitto strategy: type: Recreate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: mosquitto app.kubernetes.io/name: mosquitto spec: automountServiceAccountToken: true containers: - image: eclipse-mosquitto:2.0.14 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 1883 timeoutSeconds: 1 name: mosquitto ports: - containerPort: 1883 name: mqtt protocol: TCP readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 1883 timeoutSeconds: 1 resources: {} securityContext: privileged: true startupProbe: failureThreshold: 30 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 1883 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mosquitto/config/mosquitto.conf name: mosquitto-config subPath: mosquitto.conf dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: mosquitto-config name: mosquitto-config EOF

With our Mosquitto resource files created, we can now apply them to the Microshift environment.

$ oc create -f mosquitto-configmap.yaml
configmap/mosquitto-config created
$ oc create -f mosquitto-service.yaml
service/mosquitto created
$ oc create -f mosquitto-deployment.yaml
deployment.apps/mosquitto created

Again, we can validate that everything is running correctly by confirming the pod is running, that the service was created, and by examining the pod's log files.

$ oc get pods -n zigbee
NAME                         READY   STATUS    RESTARTS   AGE
mosquitto-84c6bfcd44-6qb74   1/1     Running   0          32s
ser2sock-79db88f44c-lfg74    1/1     Running   0          1m50s
$ oc get svc -n zigbee
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
mosquitto   ClusterIP   10.43.72.122    <none>        1883/TCP    52s
ser2sock    ClusterIP   10.43.226.155   <none>        10000/TCP   2m3s
$ oc logs mosquitto-84c6bfcd44-6qb74 -n zigbee
chown: /mosquitto/config/mosquitto.conf: Read-only file system
1666223617: mosquitto version 2.0.14 starting
1666223617: Config loaded from /mosquitto/config/mosquitto.conf.
1666223617: Opening ipv4 listen socket on port 1883.
1666223617: Opening ipv6 listen socket on port 1883.
1666223617: mosquitto version 2.0.14 running
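Beyond the startup logs, we can also exercise the broker directly. The upstream eclipse-mosquitto image ships the command-line clients, so a quick publish/subscribe round trip (my own sanity check here, not part of the original walkthrough) looks roughly like this:

# In one terminal, wait for a single message on a test topic
$ oc exec -n zigbee deploy/mosquitto -- mosquitto_sub -h localhost -t test/topic -C 1
# In a second terminal, publish to that topic and watch it arrive above
$ oc exec -n zigbee deploy/mosquitto -- mosquitto_pub -h localhost -t test/topic -m "hello from microshift"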

At this point we are ready to install Zigbee2MQTT. The deployment requires four resource files: a configmap, a PVC, a service, and a deployment.

cat << EOF > zigbee2mqtt-configmap.yaml apiVersion: v1 data: configuration.yaml: | advanced: homeassistant_discovery_topic: homeassistant homeassistant_status_topic: homeassistant/status last_seen: ISO_8601 log_level: info log_output: - console network_key: GENERATE experimental: new_api: true frontend: port: 8080 homeassistant: true mqtt: base_topic: zigbee2mqtt include_device_information: true server: mqtt://mosquitto permit_join: true serial: adapter: ezsp port: tcp://ser2sock:10000 kind: ConfigMap metadata: name: zigbee2mqtt-settings namespace: zigbee EOF cat << EOF > zigbee2mqtt-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: zigbee2mqtt-data namespace: zigbee spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: topolvm-provisioner volumeMode: Filesystem EOF cat << EOF > zigbee2mqtt-service.yaml apiVersion: v1 kind: Service metadata: name: zigbee2mqtt namespace: zigbee spec: ports: - name: http port: 8080 protocol: TCP targetPort: http selector: app.kubernetes.io/instance: zigbee2mqtt app.kubernetes.io/name: zigbee2mqtt sessionAffinity: None type: ClusterIP EOF cat << EOF > zigbee2mqtt-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: zigbee2mqtt namespace: zigbee spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 3 selector: matchLabels: app.kubernetes.io/instance: zigbee2mqtt app.kubernetes.io/name: zigbee2mqtt strategy: type: Recreate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: zigbee2mqtt app.kubernetes.io/name: zigbee2mqtt spec: automountServiceAccountToken: true containers: - env: - name: ZIGBEE2MQTT_DATA value: /data image: koenkk/zigbee2mqtt:1.19.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 1 name: zigbee2mqtt ports: - containerPort: 8080 name: http protocol: TCP readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 1 resources: {} securityContext: privileged: true startupProbe: failureThreshold: 30 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data name: data - mountPath: /app/configuration.yaml name: zigbee2mqtt-settings subPath: configuration.yaml dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - name: data persistentVolumeClaim: claimName: zigbee2mqtt-data - configMap: defaultMode: 420 name: zigbee2mqtt-settings name: zigbee2mqtt-settings EOF

Once the resource files have been generated, we can go ahead and create them on the Microshift environment.

$ oc create -f zigbee2mqtt-configmap.yaml
configmap/zigbee2mqtt-settings created
$ oc create -f zigbee2mqtt-pvc.yaml
persistentvolumeclaim/zigbee2mqtt-data created
$ oc create -f zigbee2mqtt-service.yaml
service/zigbee2mqtt created
$ oc create -f zigbee2mqtt-deployment.yaml
deployment.apps/zigbee2mqtt created

We can also validate Zigbee2MQTT is working by checking the PVC, the service, the pod state, and the pod's log files.

$ oc get pvc -n zigbee
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
zigbee2mqtt-data   Bound    pvc-0171efa4-49c2-42ce-9c6e-a3a730f61020   1Gi        RWO            topolvm-provisioner   22s
$ oc get svc -n zigbee
NAME          TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
mosquitto     ClusterIP   10.43.72.122    <none>        1883/TCP    3m17s
ser2sock      ClusterIP   10.43.226.155   <none>        10000/TCP   4m28s
zigbee2mqtt   ClusterIP   10.43.164.117   <none>        8080/TCP    14s
$ oc get pods -n zigbee
NAME                           READY   STATUS    RESTARTS   AGE
mosquitto-84c6bfcd44-4lc88     1/1     Running   0          3m33s
ser2sock-79db88f44c-9rvc2      1/1     Running   0          4m41s
zigbee2mqtt-5c6bdcc6ff-flk54   1/1     Running   0          27s
$ oc logs zigbee2mqtt-5c6bdcc6ff-flk54 -n zigbee
Using '/data' as data directory
Creating configuration file...
Zigbee2MQTT:info  2022-10-20 19:46:54: Logging to console only'
Zigbee2MQTT:info  2022-10-20 19:46:54: Starting Zigbee2MQTT version 1.19.1 (commit #9bd4693)
Zigbee2MQTT:info  2022-10-20 19:46:54: Starting zigbee-herdsman (0.13.111)
Assertion failed
Assertion failed
Assertion failed
Zigbee2MQTT:info  2022-10-20 19:46:57: zigbee-herdsman started
Zigbee2MQTT:info  2022-10-20 19:46:57: Coordinator firmware version: '{"meta":{"maintrel":"3 ","majorrel":"6","minorrel":"10","product":8,"revision":"6.10.3.0 build 297"},"type":"EZSP v8"}'
Zigbee2MQTT:info  2022-10-20 19:46:57: Currently 0 devices are joined:
Zigbee2MQTT:warn  2022-10-20 19:46:57: `permit_join` set to `true` in configuration.yaml.
Zigbee2MQTT:warn  2022-10-20 19:46:57: Allowing new devices to join.
Zigbee2MQTT:warn  2022-10-20 19:46:57: Set `permit_join` to `false` once you joined all devices.
Zigbee2MQTT:info  2022-10-20 19:46:57: Zigbee: allowing new devices to join.
Zigbee2MQTT:info  2022-10-20 19:46:57: Started frontend on port 0.0.0.0:8080
Zigbee2MQTT:info  2022-10-20 19:46:57: Connecting to MQTT server at mqtt://mosquitto
Zigbee2MQTT:info  2022-10-20 19:46:57: Connected to MQTT server
Zigbee2MQTT:info  2022-10-20 19:46:57: MQTT publish: topic 'zigbee2mqtt/bridge/state', payload 'online'
Zigbee2MQTT:info  2022-10-20 19:46:57: MQTT publish: topic 'zigbee2mqtt/bridge/config', payload '{"commit":"9bd4693","coordinator":{"meta":{"maintrel":"3 ","majorrel":"6","minorrel":"10","product":8,"revision":"6.10.3.0 build 297"},"type":"EZSP v8"},"log_level":"info","network":{"channel":11,"extendedPanID":221,"panID":6754},"permit_join":true,"version":"1.19.1"}'

And if everything above validated appropriately, we should be able to reach the web interface of Zigbee2MQTT.
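The zigbee2mqtt service is of type ClusterIP, so it is not directly reachable from outside the host. One simple way to get to the frontend from a workstation with cluster access (my suggestion, not something the original setup configures) is a port-forward:

$ oc port-forward -n zigbee svc/zigbee2mqtt 8080:8080

Then browse to http://localhost:8080 to see the Zigbee2MQTT frontend.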




Hopefully this gives anyone interested in running Microshift an idea of how it can be used. For anyone already familiar with OpenShift, Microshift's API compatibility makes it rather easy to roll out applications on this lightweight platform!