Wednesday, November 09, 2022

Monitoring Sensors and Taking Action

I recently wrote a blog about using Microshift to run my Zigbee2MQTT workload. That blog covered all the details on how to deploy Microshift and then deploy the components inside of Microshift to enable some home automation. Of course, with Zigbee2MQTT there is an intuitive web interface to interact with the smart devices. However, I wanted to take another approach that felt more realistic when it comes to edge use cases. I felt that in an industrial scenario there would most likely be some code subscribed to and monitoring the MQTT queue. An action would be performed when a certain event was observed, and the action itself might publish something back into the MQTT queue. The rest of this blog will cover a simple scenario like the one I just described.

First, we continue to use the same lab environment from my previous blog. The only difference in the diagram below is that we have now added a smart power outlet and a temperature/humidity sensor, both of which can be controlled remotely via the Zigbee protocol like all of my other devices.

The Script

With my lab in place I decided I wanted to write something in Perl. Some might ask why use such an antiquated language as Perl, and part of the answer is that I am old school. For my scenario I envisioned using the humidity sensor to detect when the humidity levels got too high. Crossing the threshold would then trigger an action to turn on/off a dehumidifier plugged into the smart outlet. The basic process flow looks like the following diagram:

The script itself can take four different parameters:

  • --hostname: hostname or IP address of MQTT host (required)
  • --port: port for MQTT (optional but will default to 1883 if not provided)
  • --threshold: humidity value that determines when action should be taken
  • --help: prints the usage of script

The script itself is located here

When one runs the script without any flags, the usage and an example are displayed.

./mqtt-humidity.pl
Usage: --hostname,-h   Hostname or IP address of MQTT host
       --port,-p       Port for MQTT (defaults to default 1883)
       --threshold,-t  Threshold for humidity (defaults to 60)
       --help,-h       Print this help
Example: mqtt-humidity.pl -ho 10.43.26.170 -p 1883 -t 65
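For readers who would rather not dig through Perl, the same loop can be sketched with the stock mosquitto command-line clients and jq. This is only a rough illustration of the logic and not the actual script: the topic names, the humidity field in the sensor's JSON payload, and the presence of mosquitto_sub, mosquitto_pub, jq and bc on the host are all assumptions on my part.

#!/bin/bash
# Minimal sketch: watch a Zigbee2MQTT humidity topic and flip a smart outlet
# when the reading crosses a threshold. Topic names below are placeholders.
HOST=10.43.26.170; PORT=1883; THRESHOLD=60

mosquitto_sub -h "$HOST" -p "$PORT" -t 'zigbee2mqtt/humidity-sensor' |
while read -r msg; do
  humidity=$(echo "$msg" | jq -r '.humidity')
  echo "Humidity = $humidity"
  if (( $(echo "$humidity > $THRESHOLD" | bc -l) )); then
    mosquitto_pub -h "$HOST" -p "$PORT" -t 'zigbee2mqtt/smart-outlet/set' -m '{"state": "ON"}'
  else
    mosquitto_pub -h "$HOST" -p "$PORT" -t 'zigbee2mqtt/smart-outlet/set' -m '{"state": "OFF"}'
  fi
done

Unlike the Perl script, this naive version publishes a command for every reading rather than only when the threshold is crossed, but it shows the subscribe, compare and publish flow end to end.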

Demonstrating the Script

To demonstrate this script I plugged a light into my smart outlet, which was set to off, and launched the script in a terminal window. Then I took the temperature/humidity sensor, cupped it in my hands and blew into my hands; the moisture in my breath is enough to temporarily raise the humidity reading. The script provides output so we can see the values changing, and sure enough, when I breathed onto the sensor the value jumped to 81.37%, which triggered the action event and turned on the light. I then set the sensor back on my desk, and over the course of 5 minutes the value slowly receded. Once it dropped below the threshold the light turned back off. The output of my script run is below:

$ perl mqtt-humidity.pl -ho 10.43.26.170 -p 1883 -t 60
Temp C = 23.43 : Temp F = 74.174 : Humidity = 51.22
Temp C = 23.43 : Temp F = 74.174 : Humidity = 81.37   <-- Smart outlet turned on
Temp C = 23.43 : Temp F = 74.174 : Humidity = 84.37
Temp C = 23.43 : Temp F = 74.174 : Humidity = 82.37
Temp C = 23.43 : Temp F = 74.174 : Humidity = 80.28
Temp C = 23.43 : Temp F = 74.174 : Humidity = 78.79
Temp C = 23.63 : Temp F = 74.534 : Humidity = 78.79
Temp C = 23.63 : Temp F = 74.534 : Humidity = 73.21
Temp C = 23.63 : Temp F = 74.534 : Humidity = 74.34
Temp C = 23.63 : Temp F = 74.534 : Humidity = 72.91
Temp C = 23.63 : Temp F = 74.534 : Humidity = 71.65
Temp C = 23.63 : Temp F = 74.534 : Humidity = 70.55
Temp C = 23.63 : Temp F = 74.534 : Humidity = 69.15
Temp C = 23.63 : Temp F = 74.534 : Humidity = 67.93
Temp C = 23.63 : Temp F = 74.534 : Humidity = 66.57
Temp C = 23.63 : Temp F = 74.534 : Humidity = 64.87
Temp C = 23.63 : Temp F = 74.534 : Humidity = 63.28
Temp C = 23.63 : Temp F = 74.534 : Humidity = 62.08
Temp C = 23.63 : Temp F = 74.534 : Humidity = 60.71
Temp C = 23.63 : Temp F = 74.534 : Humidity = 59.12   <-- Smart outlet turned off
Temp C = 23.63 : Temp F = 74.534 : Humidity = 57.72
Temp C = 23.63 : Temp F = 74.534 : Humidity = 56.7
Temp C = 23.63 : Temp F = 74.534 : Humidity = 55.6
Temp C = 23.63 : Temp F = 74.534 : Humidity = 54.23
Temp C = 23.63 : Temp F = 74.534 : Humidity = 53.21
Temp C = 23.63 : Temp F = 74.534 : Humidity = 52.14
Temp C = 23.63 : Temp F = 74.534 : Humidity = 51.05
Temp C = 23.63 : Temp F = 74.534 : Humidity = 50.04
Temp C = 23.22 : Temp F = 73.796 : Humidity = 50.04
Temp C = 23.22 : Temp F = 73.796 : Humidity = 50.04
^C

Now this was a very simple example, but imagine the possibilities. For example, what if this were a greenhouse that needed to keep the humidity and/or temperature within a certain range? If the device that reduces the humidity or temperature (a dehumidifier or an exhaust fan) could take Zigbee commands directly, we might be able to not only turn it on and off but also increase or decrease its speed of operation. All of this ensures that whatever is growing in the greenhouse is not damaged and that devices draw power only when they need to be powered. The bottom line is that operating efficiently saves a business like this greenhouse on operational costs.

Sunday, November 06, 2022

Microshift, RHEL9 & Apple M1 Virtual Machine


I previously wrote a blog about Microshift running in a Fedora 35 virtual machine on a MacBook Pro with an M1 processor. That blog, however, used an older version of Microshift based on the 4.8 release of OpenShift. In this blog I want to demonstrate running the virtual machine with Red Hat Enterprise Linux 9 and Microshift based on the upcoming 4.12 OpenShift release.

Lab Environment

The following lab environment was created to provide this demonstration:

  • MacBook Pro
    • M1 Max Processor
    • 32GB of memory
    • 1.8TB SSD
  • MacOS Ventura
  • UTM Virtualization UI 4.1.0
  • 1 Virtual Machine using Apple Virtualization Framework
    • 4 cores of vCPU
    • 8GB memory
    • 256GB disk
    • Red Hat Enterprise Linux 9
    • Static IP address configured

I have already documented how to install Red Hat Enterprise Linux 9 on an M1 virtual machine and the video for it can be found here.

Microshift Enhancements:

  • OVN replaces Flannel as CNI
  • TopoLVM replaces HostPathProvisioning as CSI storage backend

Build->Deploy->Run Microshift

To get started let's ensure we have the right repositories enabled on our Red Hat Enterprise Linux 9 virtual machine. We will go ahead and register the system, disable all repositories and then enable only the repositories we will need.

$ sudo subscription-manager register
$ sudo subscription-manager repos --disable=*
$ sudo subscription-manager repos --enable=rhel-9-for-aarch64-baseos-rpms --enable=rhel-9-for-aarch64-appstream-rpms --enable=rhel-9-for-aarch64-supplementary-rpms --enable=fast-datapath-for-rhel-9-aarch64-rpms
Repository 'rhel-9-for-aarch64-baseos-rpms' is enabled for this system.
Repository 'rhel-9-for-aarch64-appstream-rpms' is enabled for this system.
Repository 'rhel-9-for-aarch64-supplementary-rpms' is enabled for this system.
Repository 'fast-datapath-for-rhel-9-aarch64-rpms' is enabled for this system.

Now let's install some of the prerequisite packages we will need. Notice we are not installing Golang here, and that is because we need a more recent version than what ships with Red Hat Enterprise Linux 9.

$ sudo dnf install -y git cockpit make selinux-policy-devel rpm-build bash-completion jq gcc

Now let's fetch Golang with wget and then extract it into /usr/local. We can also make a soft link from /usr/bin/go to the actual binary for convenience.

$ cd ~/
$ wget https://go.dev/dl/go1.19.3.linux-arm64.tar.gz
$ sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.19.3.linux-arm64.tar.gz
$ sudo ln -s /usr/local/go/bin/go /usr/bin/go
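If the symlink is in place, a quick sanity check confirms that the toolchain we just extracted is the one being picked up (the output shown is what the 1.19.3 arm64 tarball should report):

$ go version
go version go1.19.3 linux/arm64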

Next we can open up some firewall rules that are required for access when running Microshift.

$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=5353/udp
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/udp
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
$ sudo firewall-cmd --reload

We also need to manually grab the following packages from https://access.redhat.com as we could not find the Red Hat Enterprise Linux 9 equivalents in our repositories.

$ ls -1 *.el8.aarch64*
cri-o-1.24.3-5.rhaos4.11.gitc4567c0.el8.aarch64.rpm
cri-tools-1.24.2-6.el8.aarch64.rpm
openshift-clients-4.11.0-202209201358.p0.g262ac9c.assembly.stream.el8.aarch64.rpm

Once the packages finish downloading we can install them.

$ sudo dnf localinstall cri-tools-1.24.2-6.el8.aarch64.rpm cri-o-1.24.3-5.rhaos4.11.gitc4567c0.el8.aarch64.rpm openshift-clients-4.11.0-202209201358.p0.g262ac9c.assembly.stream.el8.aarch64.rpm

Next we can go ahead and clone the GitHub repository for Microshift.

$ git clone https://github.com/openshift/microshift.git ~/microshift

Update the release_arm64.go file to the following, based on this GitHub issue. Note these image locations are not publicly accessible until the Arm versions of the Microshift images become readily available.

$ cp ~/microshift/pkg/release/release_arm64.go ~/microshift/pkg/release/release_arm64.go.bak $ cat << EOF > ~/microshift/pkg/release/release_arm64.go /* Copyright © 2021 MicroShift Contributors Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. */ package release // For the amd64 architecture we use the existing and tested and // published OCP or other component upstream images func init() { Image = map[string]string{ "cli": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe65a036a65af078f6f61017ae96e141dbb203f3602ecaca7f63ec8f58a1f6c6", "coredns": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5b3d024b2586bd0bf7b1315b2866f36a9b8b0acd23f0a9c6459371234dc8429", "haproxy_router": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:349e73813f432203920ae9ed04fc33a4026507e26ecc23ff2ab609d5b95b4206", "kube_rbac_proxy": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c19226019fe605b5ab10496fb0b7cb4712cb694a7ee1e26642d63d515ca6b7cc", "openssl": "registry.access.redhat.com/ubi8/openssl@sha256:3f781a07e59d164eba065dba7d8e7661ab2494b21199c379b65b0ff514a1b8d0", "ovn_kubernetes_microshift": "quay.io/microshift/ovn-kubernetes-singlenode@sha256:012e743363b5f15f442c238099d35a0c70343fd1d4dc15b0a57a7340a338ffdb", "pause": "k8s.gcr.io/pause:3.6", "service_ca_operator": "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe468f25881e7b5ae8118c7d54b41a7fbb132a186f0156bbe46df0fd6a2f1f8", "odf_topolvm": "quay.io/rhceph-dev/odf4-odf-topolvm-rhel8@sha256:2855918d1849c99a835eb03c53ce07170c238111fd15d2fe50cd45611fcd1ceb", "ose_csi_ext_provisioner": "quay.io/rhceph-dev/openshift-ose-csi-external-provisioner@sha256:c3b2417f8fcb8883275f0e613037f83133ccc3f91311a30688e4be520544ea4a", "ose_csi_ext_resizer": "quay.io/rhceph-dev/openshift-ose-csi-external-resizer@sha256:213f43d61b3a214a4a433c7132537be082a108d55005f2ba0777c2ea97489799", "topolvm-csi-snapshotter": "quay.io/rhceph-dev/openshift-ose-csi-external-snapshotter@sha256:734c095670d21b77f18c84670d6c9a7742be1d9151dca0da20f41858ede65ed8", "ose_csi_livenessprobe": "quay.io/rhceph-dev/openshift-ose-csi-livenessprobe@sha256:b05559aa038708ab448cfdfed2ca880726aed6cc30371fea4d6a42c972c0c728", "ose_csi_node_registrar": "quay.io/rhceph-dev/openshift-ose-csi-node-driver-registrar@sha256:fb0f5e531847db94dcadc61446b9a892f6f92ddf282e192abf2fdef6c6af78f2", } } EOF

Also, since we are not using a package-installed Golang, we will comment out the Golang build requirements in the specification file.

$ sed -e '/golang/ s/^#*/#/' -i ~/microshift/packaging//rpm/microshift.spec

With the release file updated we can proceed to make the packages of Microshift.

$ cd ~/microshift
$ make rpm
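Once the build finishes, the resulting packages land under the _output directory; listing them (the path here is simply taken from the install command that follows) is a quick way to confirm the build actually produced rpms before installing:

$ ls -1 ~/microshift/_output/rpmbuild/RPMS/*/*.rpm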

After the rpm packages have been created, proceed to install them.

$ sudo dnf localinstall -y ~/microshift/_output/rpmbuild/RPMS/*/*.rpm

Note the above installation will pull in the following dependencies:

NetworkManager-ovs                aarch64  1:1.36.0-5.el9_0   rhel-9-for-aarch64-appstream-rpms
conntrack-tools                   aarch64  1.4.5-10.el9_0.1   rhel-9-for-aarch64-appstream-rpms
libnetfilter_cthelper             aarch64  1.0.0-22.el9       rhel-9-for-aarch64-appstream-rpms
libnetfilter_cttimeout            aarch64  1.0.0-19.el9       rhel-9-for-aarch64-appstream-rpms
libnetfilter_queue                aarch64  1.0.5-1.el9        rhel-9-for-aarch64-appstream-rpms
openvswitch-selinux-extra-policy  noarch   1.0-31.el9fdp      fast-datapath-for-rhel-9-aarch64-rpms
openvswitch2.17                   aarch64  2.17.0-49.el9fdp   fast-datapath-for-rhel-9-aarch64-rpms
unbound-libs                      aarch64  1.13.1-13.el9_0    rhel-9-for-aarch64-appstream-rpms

Set the pull-secret for the crio environment.

$ sudo vi /etc/crio/openshift-pull-secret

Now let's enable and start the crio service.

$ sudo systemctl enable crio --now

Manually pull the Arm topolvm images we defined in the release_arm64.go file above. Again, note these images are not publicly available and require access to the repository.

$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/odf4-odf-topolvm-rhel8@sha256:2855918d1849c99a835eb03c53ce07170c238111fd15d2fe50cd45611fcd1ceb
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-external-provisioner@sha256:c3b2417f8fcb8883275f0e613037f83133ccc3f91311a30688e4be520544ea4a
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-external-resizer@sha256:213f43d61b3a214a4a433c7132537be082a108d55005f2ba0777c2ea97489799
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-external-snapshotter@sha256:734c095670d21b77f18c84670d6c9a7742be1d9151dca0da20f41858ede65ed8
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-livenessprobe@sha256:b05559aa038708ab448cfdfed2ca880726aed6cc30371fea4d6a42c972c0c728
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-node-driver-registrar@sha256:fb0f5e531847db94dcadc61446b9a892f6f92ddf282e192abf2fdef6c6af78f2

At this point we are ready to start Microshift up.

$ sudo systemctl enable microshift --now
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.

Once the services have been started let's go ahead and create a hidden directory called .kube and copy the kubeconfig in there.

$ mkdir ~/.kube
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

In a few minutes we can then issue an oc get pods -A and hopefully we see the following pods running.

$ oc get pods -A
NAMESPACE                  NAME                                  READY   STATUS    RESTARTS   AGE
openshift-dns              dns-default-ph555                     1/2     Running   0          6m57s
openshift-dns              node-resolver-brnj6                   1/1     Running   0          6m57s
openshift-ingress          router-default-54bc9ff944-clr4r       1/1     Running   0          6m57s
openshift-ovn-kubernetes   ovnkube-master-t9q4w                  4/4     Running   0          6m57s
openshift-ovn-kubernetes   ovnkube-node-f6z66                    1/1     Running   0          6m57s
openshift-service-ca       service-ca-5bb4c5d7f7-zs2gg           1/1     Running   0          6m57s
openshift-storage          topolvm-controller-5d4f58ff8c-kl7v4   4/4     Running   0          6m57s
openshift-storage          topolvm-node-7wsh5                    4/4     Running   0          6m57s

Hopefully this provides a glimpse of what one can do on a Red Hat Enterprise Linux 9 virtual machine running on an Apple M1 processor once the Arm packages and images for Microshift become readily available. It could be a great way to do development and testing work before actually transferring it to the real Arm-based edge device hardware.

Sunday, October 30, 2022

Walk Open Ports in OpenShift Pods

I was recently working with a customer who had the requirement to see what ports were in use inside a few of their OpenShift containers. This led me to produce a little script that allows me to walk all the ports in use across a single namespace/pod, all pods in a given namespace or across the entire cluster. Let's take a quick look at some examples of its usage in the rest of this short blog.
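The script itself is not reproduced in this blog, but the core idea is simple enough to sketch: exec into each pod and read /proc/net/tcp, which lists every TCP socket as hex-encoded local and remote address:port pairs plus a state code. The outline below is my own rough approximation, not the actual oc-ports.sh, which additionally decodes the hex addresses and state codes into the readable columns shown in the examples that follow.

#!/bin/bash
# Walk every pod in a namespace and dump its raw socket table.
# State codes in /proc/net/tcp: 0A=LISTEN, 01=ESTABLISHED, 06=TIME_WAIT.
NS=${1:-openshift-storage}
for POD in $(oc get pods -n "$NS" -o name); do
  echo "=== $NS / ${POD#pod/}"
  oc exec -n "$NS" "$POD" -- cat /proc/net/tcp 2>/dev/null | \
    awk 'NR>1 {printf "local=%s remote=%s state=%s\n", $2, $3, $4}'
done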

First let's demonstrate how to walk just a single namespace and pod. In this example I will use the openshift-storage namespace and the rook-ceph-operator container.

$ ./oc-ports.sh -n openshift-storage -p rook-ceph-operator-85d47cf975-l69r4 LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.128.1.31 52558 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47746 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47768 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 41142 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47742 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 41174 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47796 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47150 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47800 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 59690 172.30.51.97 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47750 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 41158 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 52586 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 59816 172.30.244.234 443 786832572 TCP_ESTABLISHED openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 37108 172.30.164.108 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47138 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 41096 172.30.244.234 443 816946818 TCP_ESTABLISHED openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47752 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 41154 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 52568 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 55324 172.30.0.1 443 789789504 TCP_ESTABLISHED openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 47782 172.30.132.190 3300 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4 10.128.1.31 52576 10.130.0.71 6800 0 TCP_TIME_WAIT openshift-storage rook-ceph-operator-85d47cf975-l69r4

For our next test let's just provide a namespace and let the script enumerate through all the pods. The output from this is quite lengthy so I will truncate most of it.

$ ./oc-ports.sh -n openshift-machine-api LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 127.0.0.1 9191 0.0.0.0 0 205334 TCP_LISTEN openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs 127.0.0.1 9191 127.0.0.1 59090 253479495 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs 10.129.0.51 41020 172.30.0.1 443 653600899 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs 127.0.0.1 59090 127.0.0.1 9191 253497374 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs 10.129.0.51 59338 172.30.0.1 443 669220775 TCP_ESTABLISHED openshift-machine-api cluster-autoscaler-operator-5786c7584c-kvzfs LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.128.0.31 54666 172.30.0.1 443 789856883 TCP_ESTABLISHED openshift-machine-api cluster-baremetal-operator-64f9997468-sj5xh LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.130.0.43 51300 172.30.0.1 443 548661167 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69 10.130.0.43 46504 172.30.0.1 443 535870258 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69 10.130.0.43 35602 172.30.0.1 443 548638559 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69 10.130.0.43 46526 172.30.0.1 443 535879071 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69 10.130.0.43 46552 172.30.0.1 443 535882997 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69 10.130.0.43 46520 172.30.0.1 443 535868211 TCP_ESTABLISHED openshift-machine-api machine-api-controllers-bf756c8f6-tsm69 LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 127.0.0.1 8080 0.0.0.0 0 308875 TCP_LISTEN openshift-machine-api machine-api-operator-8595794ccc-lvdd5 127.0.0.1 41508 127.0.0.1 8080 308168 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5 127.0.0.1 8080 127.0.0.1 41508 329823 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5 10.128.0.41 39462 172.30.0.1 443 789799809 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5 10.128.0.41 54512 172.30.0.1 443 816812059 TCP_ESTABLISHED openshift-machine-api machine-api-operator-8595794ccc-lvdd5 LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 0.0.0.0 31815 0.0.0.0 0 632366 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2 127.0.0.1 10248 0.0.0.0 0 15033 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2 0.0.0.0 31625 0.0.0.0 0 612309 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2 192.168.0.111 10250 0.0.0.0 0 29659 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2 127.0.0.1 6060 0.0.0.0 0 55200 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2 192.168.0.111 9100 0.0.0.0 0 69903 TCP_LISTEN openshift-machine-api metal3-6654b9c44c-4dgb2 (...) 
127.0.0.1 80 127.0.0.1 35180 0 TCP_TIME_WAIT openshift-machine-api metal3-image-cache-tqrdq 192.168.0.110 51874 192.168.0.112 2379 653534755 TCP_ESTABLISHED openshift-machine-api metal3-image-cache-tqrdq 10.129.0.1 54378 172.30.0.1 443 653518566 TCP_ESTABLISHED openshift-machine-api metal3-image-cache-tqrdq LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.129.0.58 36866 172.30.0.1 443 653597918 TCP_ESTABLISHED openshift-machine-api metal3-image-customization-5c85d5f5f8-lbslg

Finally let's just run the command with the all option, which will produce even more output than our previous commands. For troubleshooting one could redirect the output to a file if needed. I went ahead and broke out of the run after a bit, but the output gives one an idea of what they might see.

$ ./oc-ports.sh -a No resources found in default namespace. No resources found in kni22 namespace. No resources found in kube-node-lease namespace. No resources found in kube-public namespace. No resources found in kube-system namespace. LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.130.0.31 34770 172.30.0.1 443 535870264 TCP_ESTABLISHED open-cluster-management-agent klusterlet-5bb4b4f75c-7t9pr 10.130.0.31 56326 192.168.0.220 6443 548763858 TCP_ESTABLISHED open-cluster-management-agent klusterlet-5bb4b4f75c-7t9pr LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.129.0.2 35954 172.30.0.1 443 653602954 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-7phlw LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.128.0.5 37114 172.30.0.1 443 789857695 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-n8nd9 LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.130.0.32 55454 192.168.0.220 6443 543474785 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-rdsgw 10.130.0.32 59700 172.30.0.1 443 535874302 TCP_ESTABLISHED open-cluster-management-agent klusterlet-registration-agent-7bb74955c9-rdsgw LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.129.0.33 34538 172.30.0.1 443 653582074 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-2hpgx LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.130.0.33 33190 172.30.0.1 443 535807747 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-f8gwf 10.130.0.33 54940 192.168.0.220 6443 543910317 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-f8gwf LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.128.0.7 37226 172.30.0.1 443 789858781 TCP_ESTABLISHED open-cluster-management-agent klusterlet-work-agent-cc96bc45c-wmcnf LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.130.0.36 35850 172.30.0.1 443 543588502 TCP_ESTABLISHED open-cluster-management-agent-addon application-manager-8f8589977-jhzd4 LocalAddr LocalPort RemoteAddr RemotePort Inode PortState Namespace Pod --------- --------- ---------- ---------- --------- --------- --------- ----------- 10.130.0.35 45864 172.30.0.1 443 538396535 TCP_ESTABLISHED open-cluster-management-agent-addon cert-policy-controller-fd4fd8d5d-vcxjh ^C

Hopefully this tool is useful in the future to anyone interested in connectivity among their pods in an OpenShift or Kubernetes cluster.

Saturday, October 22, 2022

Deploy Microshift on RHEL8 with Zigbee2MQTT Workload

Edge devices deployed in the field, whether it's manufacturing, transportation, communication or space, pose very different operational and business challenges from those of the data center and cloud computing. These motivate different engineering trade-offs for Kubernetes at the far edge than for cloud or near-edge scenarios. Enter Microshift, whose goals are to address the following use cases:

  • Parsimonious use of system resources (CPU, memory, network, storage)
  • Toleration of bandwidth and latency networking constraints
  • Non-disruptive upgrades with rollback capabilities
  • Build and integrate with edge OSes like Fedora IoT and RHEL for Edge
  • Implement a consistent development and management experience on par with OpenShift

Let's explore this potential edge use case savior as we dive into a real world example of deploying Microshift and a functional workload in this blog.

Lab Environment

Before we start let's quickly review the lab environment we will be using for this demonstration. The lab device itself is a virtual machine with the following properties:

  • KVM Virtual Machine
  • RHEL 8.6 Installed
  • 4 vCPU
  • 8GB of Memory
  • 2 x 120GB disk
  • Zigbee 3.0 USB Dongle Plus E

When completed with the deployment of Microshift and our Zigbee2MQTT workload we will end up with an environment that looks like the following diagram:

Dependencies, Build Microshift & Installation

Now that we have had a review of the lab environment and a diagram of what the finished demonstration will look like let's go ahead and start building the environment out. To begin we need to install some dependency packages onto the Red Hat Enterprise Linux 8.6 host.

$ sudo dnf install -y git cockpit make golang selinux-policy-devel rpm-build bash-completion

Once the dependencies have been installed we can then clone down the Microshift repository from git.

$ git clone https://github.com/openshift/microshift.git ~/microshift
Cloning into '/home/bschmaus/microshift'...
remote: Enumerating objects: 47728, done.
remote: Counting objects: 100% (766/766), done.
remote: Compressing objects: 100% (264/264), done.
remote: Total 47728 (delta 595), reused 538 (delta 502), pack-reused 46962
Receiving objects: 100% (47728/47728), 46.05 MiB | 10.85 MiB/s, done.
Resolving deltas: 100% (24411/24411), done.
Updating files: 100% (13866/13866), done.

Next let's change into the microshift directory.  From here we can issue a make rpm command to build the necessary Microshift rpms.

$ cd microshift/
$ make rpm
fatal: No names found, cannot describe anything.
BUILD=rpm \
SOURCE_GIT_COMMIT=60b605d2 \
SOURCE_GIT_TREE_STATE=clean RELEASE_BASE=4.12.0 \
RELEASE_PRE=4.12.0-0.microshift ./packaging/rpm/make-rpm.sh local
# Creating local tarball
tar: Removing leading `/home/bschmaus/microshift/packaging/rpm/../..' from member names
tar: Removing leading `/home/bschmaus/microshift/packaging/rpm/../../' from member names
# Building RPM packages
fatal: No names found, cannot describe anything.
fatal: No names found, cannot describe anything.
warning: Missing build-id in /home/bschmaus/microshift/_output/rpmbuild/BUILDROOT/microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64/usr/bin/microshift

Once the packages finish building we can move on to adding the rhocp-4.11 & fast-datapath repositories using subscription-manager. We need them because when we go to install Microshift it will pull in some additional dependencies from those repositories.

$ sudo subscription-manager repos --enable rhocp-4.11-for-rhel-8-$(uname -i)-rpms --enable fast-datapath-for-rhel-8-$(uname -i)-rpms
Repository 'rhocp-4.11-for-rhel-8-x86_64-rpms' is enabled for this system.
Repository 'fast-datapath-for-rhel-8-x86_64-rpms' is enabled for this system.

Now we can initiate a local install pointing to the directory where the Microshift rpms were built.

$ sudo dnf localinstall -y ~/microshift/_output/rpmbuild/RPMS/*/*.rpm Updating Subscription Management repositories. Red Hat OpenShift Container Platform 4.11 for RHEL 8 x86_64 (RPMs) 183 kB/s | 122 kB 00:00 Fast Datapath for RHEL 8 x86_64 (RPMs) 938 kB/s | 486 kB 00:00 Dependencies resolved. =================================================================================================================================================================================================================== Package Architecture Version Repository Size =================================================================================================================================================================================================================== Installing: microshift x86_64 4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8 @commandline 25 M microshift-networking x86_64 4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8 @commandline 20 k microshift-selinux noarch 4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8 @commandline 20 k Installing dependencies: NetworkManager-ovs x86_64 1:1.36.0-7.el8_6 rhel-8-for-x86_64-baseos-rpms 171 k conntrack-tools x86_64 1.4.4-10.el8 rhel-8-for-x86_64-baseos-rpms 204 k cri-o x86_64 1.24.2-9.rhaos4.11.gitac6f687.el8 rhocp-4.11-for-rhel-8-x86_64-rpms 24 M cri-tools x86_64 1.24.2-6.el8 rhocp-4.11-for-rhel-8-x86_64-rpms 6.3 M libnetfilter_cthelper x86_64 1.0.0-15.el8 rhel-8-for-x86_64-baseos-rpms 24 k libnetfilter_cttimeout x86_64 1.0.0-11.el8 rhel-8-for-x86_64-baseos-rpms 24 k libnetfilter_queue x86_64 1.0.4-3.el8 rhel-8-for-x86_64-baseos-rpms 31 k openvswitch-selinux-extra-policy noarch 1.0-29.el8fdp fast-datapath-for-rhel-8-x86_64-rpms 16 k openvswitch2.17 x86_64 2.17.0-50.el8fdp fast-datapath-for-rhel-8-x86_64-rpms 17 M Transaction Summary =================================================================================================================================================================================================================== Install 12 Packages Total size: 72 M Total download size: 47 M Installed size: 300 M Downloading Packages: (1/9): openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch.rpm 59 kB/s | 16 kB 00:00 (2/9): openvswitch2.17-2.17.0-50.el8fdp.x86_64.rpm 6.4 MB/s | 17 MB 00:02 (3/9): cri-tools-1.24.2-6.el8.x86_64.rpm 2.2 MB/s | 6.3 MB 00:02 (4/9): libnetfilter_cttimeout-1.0.0-11.el8.x86_64.rpm 155 kB/s | 24 kB 00:00 (5/9): conntrack-tools-1.4.4-10.el8.x86_64.rpm 1.3 MB/s | 204 kB 00:00 (6/9): libnetfilter_cthelper-1.0.0-15.el8.x86_64.rpm 238 kB/s | 24 kB 00:00 (7/9): libnetfilter_queue-1.0.4-3.el8.x86_64.rpm 197 kB/s | 31 kB 00:00 (8/9): NetworkManager-ovs-1.36.0-7.el8_6.x86_64.rpm 1.3 MB/s | 171 kB 00:00 (9/9): cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64.rpm 4.5 MB/s | 24 MB 00:05 ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- Total 8.9 MB/s | 47 MB 00:05 Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. 
Running transaction Preparing : 1/1 Installing : microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 1/12 Running scriptlet: microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 1/12 Installing : NetworkManager-ovs-1:1.36.0-7.el8_6.x86_64 2/12 Installing : libnetfilter_queue-1.0.4-3.el8.x86_64 3/12 Running scriptlet: libnetfilter_queue-1.0.4-3.el8.x86_64 3/12 Installing : libnetfilter_cthelper-1.0.0-15.el8.x86_64 4/12 Running scriptlet: libnetfilter_cthelper-1.0.0-15.el8.x86_64 4/12 Installing : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 5/12 Running scriptlet: libnetfilter_cttimeout-1.0.0-11.el8.x86_64 5/12 Installing : conntrack-tools-1.4.4-10.el8.x86_64 6/12 Running scriptlet: conntrack-tools-1.4.4-10.el8.x86_64 6/12 Running scriptlet: openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 7/12 Installing : openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 7/12 Running scriptlet: openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 7/12 Running scriptlet: openvswitch2.17-2.17.0-50.el8fdp.x86_64 8/12 Installing : openvswitch2.17-2.17.0-50.el8fdp.x86_64 8/12 Running scriptlet: openvswitch2.17-2.17.0-50.el8fdp.x86_64 8/12 Installing : microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 9/12 Running scriptlet: microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 9/12 Warning: The unit file, source configuration file or drop-ins of NetworkManager.service changed on disk. Run 'systemctl daemon-reload' to reload units. Installing : cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 10/12 Running scriptlet: cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 10/12 Installing : cri-tools-1.24.2-6.el8.x86_64 11/12 Installing : microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Running scriptlet: microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Running scriptlet: microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 12/12 Running scriptlet: openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 12/12 Running scriptlet: microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Verifying : cri-tools-1.24.2-6.el8.x86_64 1/12 Verifying : cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 2/12 Verifying : openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch 3/12 Verifying : openvswitch2.17-2.17.0-50.el8fdp.x86_64 4/12 Verifying : libnetfilter_cttimeout-1.0.0-11.el8.x86_64 5/12 Verifying : conntrack-tools-1.4.4-10.el8.x86_64 6/12 Verifying : libnetfilter_cthelper-1.0.0-15.el8.x86_64 7/12 Verifying : libnetfilter_queue-1.0.4-3.el8.x86_64 8/12 Verifying : NetworkManager-ovs-1:1.36.0-7.el8_6.x86_64 9/12 Verifying : microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch 10/12 Verifying : microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 11/12 Verifying : microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 12/12 Installed products updated. 
Installed: NetworkManager-ovs-1:1.36.0-7.el8_6.x86_64 conntrack-tools-1.4.4-10.el8.x86_64 cri-o-1.24.2-9.rhaos4.11.gitac6f687.el8.x86_64 cri-tools-1.24.2-6.el8.x86_64 libnetfilter_cthelper-1.0.0-15.el8.x86_64 libnetfilter_cttimeout-1.0.0-11.el8.x86_64 libnetfilter_queue-1.0.4-3.el8.x86_64 microshift-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 microshift-networking-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.x86_64 microshift-selinux-4.12.0-4.10.0_0.microshift_2022_09_08_132255_196_g60b605d2.el8.noarch openvswitch-selinux-extra-policy-1.0-29.el8fdp.noarch openvswitch2.17-2.17.0-50.el8fdp.x86_64 Complete!

Before we can start Microshift we need to configure a few more items. One of those are the firewall rules.

$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
success
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
success
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=5353/udp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/udp
success
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
success
$ sudo firewall-cmd --reload
success

After the firewall rules we need to ensure we have a rhel volume group created, because TopoLVM, which comes as part of Microshift, will by default use a volume group called rhel. On our system I have a second disk, sdb, that I will use to create that volume group.

$ sudo lsblk
NAME                 MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                    8:0    0   120G  0 disk
├─sda1                 8:1    0   600M  0 part /boot/efi
├─sda2                 8:2    0     1G  0 part /boot
└─sda3                 8:3    0 118.4G  0 part
  ├─rhel_sno3-root   253:0    0    70G  0 lvm  /
  ├─rhel_sno3-swap   253:1    0   9.6G  0 lvm  [SWAP]
  └─rhel_sno3-home   253:2    0  38.8G  0 lvm  /home
sdb                    8:16   0   120G  0 disk
sr0                   11:0    1  10.7G  0 rom

We can create the volume group by using the vgcreate command.

$ sudo vgcreate rhel /dev/sdb
  Physical volume "/dev/sdb" successfully created.
  Volume group "rhel" successfully created

We can then validate the volume group was created with the vgs command.

$ sudo vgs
  VG        #PV #LV #SN Attr   VSize    VFree
  rhel        1   0   0 wz--n- <120.00g <120.00g
  rhel_sno3   1   3   0 wz--n- <118.43g        0

Another configuration item we need is to set our pull-secret keys in the file openshift-pull-secret under the /etc/crio directory.

$ sudo vi /etc/crio/openshift-pull-secret
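For anyone wondering what belongs in that file, it is simply the pull secret JSON downloaded from console.redhat.com, in the standard container-auth format. The snippet below is only a placeholder to show the shape of the file; the actual registries and tokens come from your own pull secret.

{
  "auths": {
    "cloud.openshift.com": { "auth": "<base64-token>", "email": "you@example.com" },
    "quay.io":             { "auth": "<base64-token>", "email": "you@example.com" },
    "registry.redhat.io":  { "auth": "<base64-token>", "email": "you@example.com" }
  }
}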

And finally since we will be running oc & kubectl commands against Microshift we need to install the openshift-clients.

$ sudo dnf install -y openshift-clients

We have now reached the point where we can enable and start both crio and Microshift.

$ sudo systemctl enable crio --now
Created symlink /etc/systemd/system/cri-o.service → /usr/lib/systemd/system/crio.service.
Created symlink /etc/systemd/system/multi-user.target.wants/crio.service → /usr/lib/systemd/system/crio.service.
$ sudo systemctl enable microshift --now
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.

Once the services have been started let's go ahead and create a hidden directory called .kube and copy the kubeconfig in there.

$ mkdir ~/.kube
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

In a few minutes we can then issue an oc get pods -A and hopefully we see the following pods running.

$ oc get pods -A
NAMESPACE                  NAME                                  READY   STATUS    RESTARTS   AGE
openshift-dns              dns-default-h58l2                     2/2     Running   0          34m
openshift-dns              node-resolver-nj296                   1/1     Running   0          34m
openshift-ingress          router-default-559ff9d676-vvzjs       1/1     Running   0          34m
openshift-ovn-kubernetes   ovnkube-master-s74f5                  4/4     Running   0          34m
openshift-ovn-kubernetes   ovnkube-node-dtbbv                    1/1     Running   0          34m
openshift-service-ca       service-ca-5f9bc879d8-dnxcj           1/1     Running   0          34m
openshift-storage          topolvm-controller-5cbd9d9684-zs7v6   4/4     Running   0          34m
openshift-storage          topolvm-node-779zz                    4/4     Running   0          34m
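If any of the pods above fail to reach a Running state, the first places I would look are the Microshift journal and CRI-O itself. These are just the standard systemd and crictl commands rather than anything specific to this setup:

$ systemctl status microshift --no-pager
$ sudo journalctl -u microshift -f
$ sudo crictl ps -a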

If everything looks correct in the last step we can proceed to deploying our workloads.

Deploying the Workloads

For my workload I am choosing to run Zigbee2MQTT, which allows one to control smart home devices via the Zigbee protocol. The Zigbee protocol is used in such applications as:

  • Home automation
  • Wireless sensor networks
  • Industrial control systems
  • Embedded sensing
  • Medical data collection
  • Smoke and intruder warning
  • Building automation
  • Remote wireless microphone configuration

Zigbee2MQTT also has a couple of dependencies we need to install: Ser2sock and Mosquitto. Let's go ahead and get started by creating a namespace called zigbee.

$ oc create namespace zigbee
namespace/zigbee created

Once the namespace is created we can go ahead and create a service and deployment file for the Ser2sock application. Ser2sock allows a serial device to communicate over TCP/IP and will enable us to communicate with the Sonoff Zigbee dongle.

$ cat << EOF > ser2sock-service.yaml apiVersion: v1 kind: Service metadata: name: ser2sock namespace: zigbee spec: ports: - name: server port: 10000 protocol: TCP targetPort: server selector: app.kubernetes.io/instance: ser2sock app.kubernetes.io/name: ser2sock sessionAffinity: None type: ClusterIP EOF $ cat << EOF > ser2sock-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: ser2sock namespace: zigbee spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 3 selector: matchLabels: app.kubernetes.io/instance: ser2sock app.kubernetes.io/name: ser2sock strategy: type: Recreate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: ser2sock app.kubernetes.io/name: ser2sock spec: automountServiceAccountToken: true containers: - env: - name: BAUD_RATE value: "115200" - name: LISTENER_PORT value: "10000" - name: SERIAL_DEVICE value: /dev/ttyACM0 - name: TZ value: UTC image: tenstartups/ser2sock:latest imagePullPolicy: Always livenessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 10000 timeoutSeconds: 1 name: ser2sock ports: - containerPort: 10000 name: server protocol: TCP readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 10000 timeoutSeconds: 1 resources: {} securityContext: privileged: true startupProbe: failureThreshold: 30 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 10000 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 EOF

With the custom resource files generated let's create them against the Microshift environment.

$ oc create -f ser2sock-service.yaml
service/ser2sock created
$ oc create -f ser2sock-deployment.yaml
deployment.apps/ser2sock created

We can validate the service was created and also validate the pod is running correctly by checking the initial logs of the pod.

$ oc get pods -n zigbee
NAME                        READY   STATUS    RESTARTS   AGE
ser2sock-79db88f44c-lfg74   1/1     Running   0          21s
$ oc get svc -n zigbee
NAME       TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
ser2sock   ClusterIP   10.43.226.155   <none>        10000/TCP   41s
$ oc logs ser2sock-79db88f44c-lfg74 -n zigbee
[✔] Serial 2 Socket Relay version V1.5.5 starting
[✔] Listening socket created on port 10000
[✔] Start wait loop using ser2sock communication mode
[✔] Opened com port at /dev/ttyACM0
[✔] Setting speed 115200
[✔] Set speed successful

With Ser2sock up and running we can proceed to configure and install Mosquitto. Mosquitto is a message broker that implements the MQTT protocol and enables a lightweight method of carrying out messaging using a publish/subscribe model. We need to create three custom resource files here: a configmap, a service and a deployment.

$ cat << EOF > mosquitto-configmap.yaml apiVersion: v1 data: mosquitto.conf: | per_listener_settings false listener 1883 allow_anonymous true kind: ConfigMap metadata: name: mosquitto-config namespace: zigbee EOF $ cat << EOF > mosquitto-service.yaml apiVersion: v1 kind: Service metadata: name: mosquitto namespace: zigbee spec: ports: - name: mqtt port: 1883 protocol: TCP targetPort: mqtt selector: app.kubernetes.io/instance: mosquitto app.kubernetes.io/name: mosquitto sessionAffinity: None type: ClusterIP EOF $ cat << EOF > mosquitto-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: mosquitto namespace: zigbee spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 3 selector: matchLabels: app.kubernetes.io/instance: mosquitto app.kubernetes.io/name: mosquitto strategy: type: Recreate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: mosquitto app.kubernetes.io/name: mosquitto spec: automountServiceAccountToken: true containers: - image: eclipse-mosquitto:2.0.14 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 1883 timeoutSeconds: 1 name: mosquitto ports: - containerPort: 1883 name: mqtt protocol: TCP readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 1883 timeoutSeconds: 1 resources: {} securityContext: privileged: true startupProbe: failureThreshold: 30 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 1883 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /mosquitto/config/mosquitto.conf name: mosquitto-config subPath: mosquitto.conf dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - configMap: defaultMode: 420 name: mosquitto-config name: mosquitto-config EOF

With our Mosquitto custom resource files created we can now apply them to the Microshift environment.

$ oc create -f mosquitto-configmap.yaml
configmap/mosquitto-config created
$ oc create -f mosquitto-service.yaml
service/mosquitto created
$ oc create -f mosquitto-deployment.yaml
deployment.apps/mosquitto created

Again we can validate that everything is running correctly by confirming the pod is running, confirming the service was created, and examining the log files of the pod.

$ oc get pods -n zigbee
NAME                         READY   STATUS    RESTARTS   AGE
mosquitto-84c6bfcd44-6qb74   1/1     Running   0          32s
ser2sock-79db88f44c-lfg74    1/1     Running   0          1m50s
$ oc get svc -n zigbee
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)     AGE
mosquitto   ClusterIP   10.43.72.122    <none>        1883/TCP    52s
ser2sock    ClusterIP   10.43.226.155   <none>        10000/TCP   2m3s
$ oc logs mosquitto-84c6bfcd44-6qb74 -n zigbee
chown: /mosquitto/config/mosquitto.conf: Read-only file system
1666223617: mosquitto version 2.0.14 starting
1666223617: Config loaded from /mosquitto/config/mosquitto.conf.
1666223617: Opening ipv4 listen socket on port 1883.
1666223617: Opening ipv6 listen socket on port 1883.
1666223617: mosquitto version 2.0.14 running
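Since the eclipse-mosquitto image ships the mosquitto command-line clients, we can also run a quick publish/subscribe smoke test through the broker itself. The topic name here is arbitrary and just for testing; run the subscriber in one terminal and the publisher in another:

$ oc exec -n zigbee deploy/mosquitto -- mosquitto_sub -h localhost -t 'test/#' -v -W 30
$ oc exec -n zigbee deploy/mosquitto -- mosquitto_pub -h localhost -t test/hello -m 'hello from microshift'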

At this point we are now ready to install Zigbee2MQTT. The deployment will require four custom resource files: a configmap, a pvc, a service and a deployment file.

cat << EOF > zigbee2mqtt-configmap.yaml apiVersion: v1 data: configuration.yaml: | advanced: homeassistant_discovery_topic: homeassistant homeassistant_status_topic: homeassistant/status last_seen: ISO_8601 log_level: info log_output: - console network_key: GENERATE experimental: new_api: true frontend: port: 8080 homeassistant: true mqtt: base_topic: zigbee2mqtt include_device_information: true server: mqtt://mosquitto permit_join: true serial: adapter: ezsp port: tcp://ser2sock:10000 kind: ConfigMap metadata: name: zigbee2mqtt-settings namespace: zigbee EOF cat << EOF > zigbee2mqtt-pvc.yaml apiVersion: v1 kind: PersistentVolumeClaim metadata: name: zigbee2mqtt-data namespace: zigbee spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi storageClassName: topolvm-provisioner volumeMode: Filesystem EOF cat << EOF > zigbee2mqtt-service.yaml apiVersion: v1 kind: Service metadata: name: zigbee2mqtt namespace: zigbee spec: ports: - name: http port: 8080 protocol: TCP targetPort: http selector: app.kubernetes.io/instance: zigbee2mqtt app.kubernetes.io/name: zigbee2mqtt sessionAffinity: None type: ClusterIP EOF cat << EOF > zigbee2mqtt-deployment.yaml apiVersion: apps/v1 kind: Deployment metadata: name: zigbee2mqtt namespace: zigbee spec: progressDeadlineSeconds: 600 replicas: 1 revisionHistoryLimit: 3 selector: matchLabels: app.kubernetes.io/instance: zigbee2mqtt app.kubernetes.io/name: zigbee2mqtt strategy: type: Recreate template: metadata: creationTimestamp: null labels: app.kubernetes.io/instance: zigbee2mqtt app.kubernetes.io/name: zigbee2mqtt spec: automountServiceAccountToken: true containers: - env: - name: ZIGBEE2MQTT_DATA value: /data image: koenkk/zigbee2mqtt:1.19.1 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 1 name: zigbee2mqtt ports: - containerPort: 8080 name: http protocol: TCP readinessProbe: failureThreshold: 3 periodSeconds: 10 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 1 resources: {} securityContext: privileged: true startupProbe: failureThreshold: 30 periodSeconds: 5 successThreshold: 1 tcpSocket: port: 8080 timeoutSeconds: 1 terminationMessagePath: /dev/termination-log terminationMessagePolicy: File volumeMounts: - mountPath: /data name: data - mountPath: /app/configuration.yaml name: zigbee2mqtt-settings subPath: configuration.yaml dnsPolicy: ClusterFirst enableServiceLinks: true restartPolicy: Always schedulerName: default-scheduler securityContext: {} serviceAccount: default serviceAccountName: default terminationGracePeriodSeconds: 30 volumes: - name: data persistentVolumeClaim: claimName: zigbee2mqtt-data - configMap: defaultMode: 420 name: zigbee2mqtt-settings name: zigbee2mqtt-settings EOF

Once the custom resource files have been generated we can go ahead and create them on the Microshift environment.

$ oc create -f zigbee2mqtt-configmap.yaml
configmap/zigbee2mqtt-settings created
$ oc create -f zigbee2mqtt-pvc.yaml
persistentvolumeclaim/zigbee2mqtt-data created
$ oc create -f zigbee2mqtt-service.yaml
service/zigbee2mqtt created
$ oc create -f zigbee2mqtt-deployment.yaml
deployment.apps/zigbee2mqtt created

We can also confirm Zigbee2MQTT is working by checking the PVC, service, pod state and log files of the pod.

$ oc get pvc -n zigbee NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE zigbee2mqtt-data Bound pvc-0171efa4-49c2-42ce-9c6e-a3a730f61020 1Gi RWO topolvm-provisioner 22s $ oc get svc -n zigbee NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE mosquitto ClusterIP 10.43.72.122 <none> 1883/TCP 3m17s ser2sock ClusterIP 10.43.226.155 <none> 10000/TCP 4m28s zigbee2mqtt ClusterIP 10.43.164.117 <none> 8080/TCP 14s $ oc get pods -n zigbee NAME READY STATUS RESTARTS AGE mosquitto-84c6bfcd44-4lc88 1/1 Running 0 3m33s ser2sock-79db88f44c-9rvc2 1/1 Running 0 4m41s zigbee2mqtt-5c6bdcc6ff-flk54 1/1 Running 0 27s $ oc logs zigbee2mqtt-5c6bdcc6ff-flk54 -n zigbee Using '/data' as data directory Creating configuration file... Zigbee2MQTT:info 2022-10-20 19:46:54: Logging to console only' Zigbee2MQTT:info 2022-10-20 19:46:54: Starting Zigbee2MQTT version 1.19.1 (commit #9bd4693) Zigbee2MQTT:info 2022-10-20 19:46:54: Starting zigbee-herdsman (0.13.111) Assertion failed Assertion failed Assertion failed Zigbee2MQTT:info 2022-10-20 19:46:57: zigbee-herdsman started Zigbee2MQTT:info 2022-10-20 19:46:57: Coordinator firmware version: '{"meta":{"maintrel":"3 ","majorrel":"6","minorrel":"10","product":8,"revision":"6.10.3.0 build 297"},"type":"EZSP v8"}' Zigbee2MQTT:info 2022-10-20 19:46:57: Currently 0 devices are joined: Zigbee2MQTT:warn 2022-10-20 19:46:57: `permit_join` set to `true` in configuration.yaml. Zigbee2MQTT:warn 2022-10-20 19:46:57: Allowing new devices to join. Zigbee2MQTT:warn 2022-10-20 19:46:57: Set `permit_join` to `false` once you joined all devices. Zigbee2MQTT:info 2022-10-20 19:46:57: Zigbee: allowing new devices to join. Zigbee2MQTT:info 2022-10-20 19:46:57: Started frontend on port 0.0.0.0:8080 Zigbee2MQTT:info 2022-10-20 19:46:57: Connecting to MQTT server at mqtt://mosquitto Zigbee2MQTT:info 2022-10-20 19:46:57: Connected to MQTT server Zigbee2MQTT:info 2022-10-20 19:46:57: MQTT publish: topic 'zigbee2mqtt/bridge/state', payload 'online' Zigbee2MQTT:info 2022-10-20 19:46:57: MQTT publish: topic 'zigbee2mqtt/bridge/config', payload '{"commit":"9bd4693","coordinator":{"meta":{"maintrel":"3 ","majorrel":"6","minorrel":"10","product":8,"revision":"6.10.3.0 build 297"},"type":"EZSP v8"},"log_level":"info","network":{"channel":11,"extendedPanID":221,"panID":6754},"permit_join":true,"version":"1.19.1"}'

And if everything validated appropriately above we should be able to hit the web interface of Zigbee2MQTT.
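The screenshots of the web interface are not reproduced here, but to actually reach the frontend from outside the cluster one option (an assumption on my part, not necessarily how it was accessed for this blog) is to expose the service through the default router, or simply port-forward it for a quick look:

$ oc expose service zigbee2mqtt -n zigbee
$ oc get route -n zigbee
$ oc port-forward -n zigbee svc/zigbee2mqtt 8080:8080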




Hopefully this gives anyone interested in running Microshift an idea of how it can be used. If one is familiar with OpenShift, the API compatibility of Microshift makes it rather easy to roll out applications on this lightweight platform!