
Wednesday, January 01, 2025

Practical Example of Red Hat CoreOS Layering

In Red Hat OpenShift 4.14 a new concept called image layering was introduced, which allows one to build a container layer that can then be applied on top of the Red Hat CoreOS layer. More details about it can be found here. We will leverage this technology to apply an image layer containing irqbalance, since this package is not part of the base Red Hat CoreOS aarch64 image nor is it available as an extension. Irqbalance will become part of Red Hat CoreOS for aarch64 in the future based on this merge request. The steps below describe how to create, build, and apply the image layer containing the irqbalance package and enable it for aarch64.

The first step is to get the current rhel-coreos image from the cluster where we will be applying the image layer.  We can use the oc adm release info command to obtain this information from our OpenShift 4.15.23 cluster.

$ oc adm release info --image-for rhel-coreos
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2f1b7530956b765f1b0337b824fde28d6987b519eec0aaadc9d261e9fd1e550

Next we take the image reference from that output and place it into a Dockerfile whose RUN instruction installs and enables the irqbalance package.

$ cat <<EOF > Dockerfile.irqbalance
FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2f1b7530956b765f1b0337b824fde28d6987b519eec0aaadc9d261e9fd1e550
# Install the irqbalance package, enable the irqbalance service, remove the
# yum repository definitions inherited from the build host (leaving them in
# place causes entitlement issues), then commit the ostree container.
RUN rpm-ostree install irqbalance && \
    systemctl enable irqbalance && \
    rm -rf /etc/yum.repos.d/* && \
    ostree container commit
EOF

Once we have created the Dockerfile we can use podman to build the container image. Note that I performed this process on an aarch64 host, specifically an Ampere Altra Developer Workstation running Red Hat Enterprise Linux 9, because I wanted the image to be built for an aarch64 Red Hat CoreOS host. Note that I am tagging the image with the OpenShift version to remind me which release it was built for.

$ podman build -t quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23 -f Dockerfile.irqbalance .
STEP 1/2: FROM quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2f1b7530956b765f1b0337b824fde28d6987b519eec0aaadc9d261e9fd1e550
Trying to pull quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a2f1b7530956b765f1b0337b824fde28d6987b519eec0aaadc9d261e9fd1e550...
Getting image source signatures
Copying blob b839e1a7e4d1 done
Copying blob d178eea3fd50 done
(...)
Copying blob 413ee3c4305f done
Copying config 9f41514ccf done
Writing manifest to image destination
STEP 2/2: RUN rpm-ostree install irqbalance && systemctl enable irqbalance && ostree container commit
Enabled rpm-md repositories: rhel-9-for-aarch64-appstream-rpms rhel-9-for-aarch64-baseos-rpms
Updating metadata for 'rhel-9-for-aarch64-appstream-rpms'...done
Updating metadata for 'rhel-9-for-aarch64-baseos-rpms'...done
Importing rpm-md...done
rpm-md repo 'rhel-9-for-aarch64-appstream-rpms'; generated: 2024-09-12T14:26:05Z solvables: 19074
rpm-md repo 'rhel-9-for-aarch64-baseos-rpms'; generated: 2024-09-11T06:47:28Z solvables: 7179
Resolving dependencies...done
Will download: 1 package (71.6 kB)
Downloading from 'rhel-9-for-aarch64-baseos-rpms'...done
Installing 1 packages:
  irqbalance-2:1.9.2-3.el9.aarch64 (rhel-9-for-aarch64-baseos-rpms)
Installing: irqbalance-2:1.9.2-3.el9.aarch64 (rhel-9-for-aarch64-baseos-rpms)
Created symlink /etc/systemd/system/multi-user.target.wants/irqbalance.service → /usr/lib/systemd/system/irqbalance.service.
COMMIT quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23
--> 624c70f71a77
Successfully tagged quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23
624c70f71a77ce3c30fd973b4afc09fc4558b43e64ec43851de2cf4d7ad7f6a0
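
Since the whole point is an aarch64 layer, we can optionally sanity-check the architecture of the freshly built image before pushing it. This is just a quick hedged check using the tag we created above:

$ podman inspect --format '{{.Architecture}}' quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23
arm64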

Once the image is built we can push it to our registry of choice. The image should be pushed to a location that the cluster can reach.

$ podman push quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23
Getting image source signatures
Copying blob 79bb3562fc5d done
Copying blob 6dbb46d6d565 done
(...)
Copying blob 00ad2dbb774b done
Copying config 624c70f71a done
Writing manifest to image destination

Once the image is in a registry location we can generate a machine configuration file and specify the osImageURL with the location of our image. 

$ cat <<EOF > irqbalance-machine.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: irqbalance-layer-machineconfig
spec:
  osImageURL: quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23
EOF
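
Note that the MachineConfig above targets only nodes with the master role. If we also wanted the layer on dedicated worker nodes, a second MachineConfig using the worker role label should follow the same pattern. This is a sketch I did not apply in this walkthrough, and the file and resource names are ones I made up:

$ cat <<EOF > irqbalance-machine-worker.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: irqbalance-layer-machineconfig-worker
spec:
  osImageURL: quay.io/redhat_emp1/ecosys-nvidia/ocp-4.15-irqbalance:4.15.23
EOF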

The machine configuration file we created can then be applied to the cluster. We can use oc create to do the work here; be aware that the node(s) where this machine configuration gets applied will reboot.

$ oc create -f irqbalance-machine.yaml
machineconfig.machineconfiguration.openshift.io/irqbalance-layer-machineconfig created
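
While the Machine Config Operator rolls the layer out, we can keep an eye on the MachineConfigPool for the role we targeted; the UPDATED column should flip back to True once every node in the pool has rebooted into the new image. One simple way to watch it (the -w flag streams changes):

$ oc get machineconfigpool master -w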

After the reboot we can validate that irqbalance is successfully installed and running by going into a debug container and checking for the package and the systemctl status.

$ oc debug node/$(oc get node -o json | jq -r '.items[0].metadata.name')
Starting pod/nvd-srv-37nvidiaengrdu2dcredhatcom-debug-7r745 ...
To use host binaries, run `chroot /host`
Pod IP: 10.6.135.16
If you don't see a command prompt, try pressing enter.
sh-4.4# chroot /host
sh-5.1# rpm -q irqbalance
irqbalance-1.9.2-3.el9.aarch64
sh-5.1# systemctl status irqbalance
● irqbalance.service - irqbalance daemon
     Loaded: loaded (/usr/lib/systemd/system/irqbalance.service; enabled; preset: enabled)
     Active: active (running) since Thu 2024-09-12 19:20:27 UTC; 1h 21min ago
       Docs: man:irqbalance(1)
             https://github.com/Irqbalance/irqbalance
   Main PID: 14874 (irqbalance)
      Tasks: 2 (limit: 783503)
     Memory: 5.2M
        CPU: 3.980s
     CGroup: /system.slice/irqbalance.service
             └─14874 /usr/sbin/irqbalance --foreground
sh-5.1#
This was a simple, practical example of image layering in OpenShift, but hopefully it gives a good enough foundation that one could expand on it to apply other packages, configurations, and files to the Red Hat CoreOS image.

Sunday, November 06, 2022

Microshift, RHEL9 & Apple M1 Virtual Machine


I previously wrote a blog about Microshift running in a Fedora 35 virtual machine on a MacBook Pro with an M1 processor. That post, however, used Fedora and an older version of Microshift based on the 4.8 release of OpenShift. In this blog I want to demonstrate running the virtual machine with Red Hat Enterprise Linux 9 and Microshift based on the upcoming 4.12 OpenShift release.

Lab Environment

The following lab environment was created for this demonstration:

  • MacBook Pro
    • M1 Max Processor
    • 32GB of memory
    • 1.8TB SSD
  • MacOS Ventura
  • UTM Virtualization UI 4.1.0
  • 1 Virtual Machine using Apple Virtualization Framework
    • 4 cores of vCPU
    • 8GB memory
    • 256GB disk
    • Red Hat Enterprise Linux 9
    • Static IP address configured

I have already documented how to install Red Hat Enterprise Linux 9 on an M1 virtual machine and the video for it can be found here.

Microshift Enhancements:

  • OVN replaces Flannel as CNI
  • TopoLVM replaces HostPathProvisioning as CSI storage backend

Build->Deploy->Run Microshift

To get started let's ensure we have the right repositories enabled on our Red Hat Enterprise Linux 9 virtual machine. We will go ahead and register the system, disable all repositories, and then enable the repositories we need.

$ sudo subscription-manager register
$ sudo subscription-manager repos --disable=*
$ sudo subscription-manager repos --enable=rhel-9-for-aarch64-baseos-rpms --enable=rhel-9-for-aarch64-appstream-rpms --enable=rhel-9-for-aarch64-supplementary-rpms --enable=fast-datapath-for-rhel-9-aarch64-rpms
Repository 'rhel-9-for-aarch64-baseos-rpms' is enabled for this system.
Repository 'rhel-9-for-aarch64-appstream-rpms' is enabled for this system.
Repository 'rhel-9-for-aarch64-supplementary-rpms' is enabled for this system.
Repository 'fast-datapath-for-rhel-9-aarch64-rpms' is enabled for this system.

Now let's install some of the prerequisite packages we will need. Notice we are not installing Golang here; that is because we need a more recent version than what ships with Red Hat Enterprise Linux 9.

$ sudo dnf install -y git cockpit make selinux-policy-devel rpm-build bash-completion jq gcc

Now let's fetch Golang with wget and then extract it into /usr/local. We can also make a soft link from /usr/bin/go to the actual binary for convenience.

$ cd ~/
$ wget https://go.dev/dl/go1.19.3.linux-arm64.tar.gz
$ sudo rm -rf /usr/local/go && sudo tar -C /usr/local -xzf go1.19.3.linux-arm64.tar.gz
$ sudo ln -s /usr/local/go/bin/go /usr/bin/go
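
A quick sanity check that the new toolchain resolves from our path and is the version we just extracted:

$ go version
go version go1.19.3 linux/arm64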

Next we can open up some firewall rules that are required for access when running Microshift.

$ sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16
$ sudo firewall-cmd --permanent --zone=trusted --add-source=169.254.169.1
$ sudo firewall-cmd --permanent --zone=public --add-port=80/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=443/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=5353/udp
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/tcp
$ sudo firewall-cmd --permanent --zone=public --add-port=30000-32767/udp
$ sudo firewall-cmd --permanent --zone=public --add-port=6443/tcp
$ sudo firewall-cmd --reload
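
If we want to confirm the rules took effect after the reload, firewall-cmd can list the active configuration per zone (the output is omitted here since it varies by system):

$ sudo firewall-cmd --zone=public --list-ports
$ sudo firewall-cmd --zone=trusted --list-sources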

We also need to manually grab the following packages from https://access.redhat.com, as we could not find Red Hat Enterprise Linux 9 equivalents in our repositories.

$ ls -1 *.el8.aarch64*
cri-o-1.24.3-5.rhaos4.11.gitc4567c0.el8.aarch64.rpm
cri-tools-1.24.2-6.el8.aarch64.rpm
openshift-clients-4.11.0-202209201358.p0.g262ac9c.assembly.stream.el8.aarch64.rpm

Once the packages finish downloading we can install them.

$ sudo dnf localinstall cri-tools-1.24.2-6.el8.aarch64.rpm cri-o-1.24.3-5.rhaos4.11.gitc4567c0.el8.aarch64.rpm openshift-clients-4.11.0-202209201358.p0.g262ac9c.assembly.stream.el8.aarch64.rpm

Next we can go ahead and clone the GitHub repository for Microshift.

$ git clone https://github.com/openshift/microshift.git ~/microshift

Update the release_arm64.go file as follows, based off of this GitHub issue. Note these image locations are not publicly accessible until the Arm versions of the Microshift images become readily available.

$ cp ~/microshift/pkg/release/release_arm64.go ~/microshift/pkg/release/release_arm64.go.bak
$ cat << EOF > ~/microshift/pkg/release/release_arm64.go
/*
Copyright © 2021 MicroShift Contributors

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
*/

package release

// For the arm64 architecture we use the existing and tested and
// published OCP or other component upstream images
func init() {
    Image = map[string]string{
        "cli":                       "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe65a036a65af078f6f61017ae96e141dbb203f3602ecaca7f63ec8f58a1f6c6",
        "coredns":                   "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5b3d024b2586bd0bf7b1315b2866f36a9b8b0acd23f0a9c6459371234dc8429",
        "haproxy_router":            "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:349e73813f432203920ae9ed04fc33a4026507e26ecc23ff2ab609d5b95b4206",
        "kube_rbac_proxy":           "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c19226019fe605b5ab10496fb0b7cb4712cb694a7ee1e26642d63d515ca6b7cc",
        "openssl":                   "registry.access.redhat.com/ubi8/openssl@sha256:3f781a07e59d164eba065dba7d8e7661ab2494b21199c379b65b0ff514a1b8d0",
        "ovn_kubernetes_microshift": "quay.io/microshift/ovn-kubernetes-singlenode@sha256:012e743363b5f15f442c238099d35a0c70343fd1d4dc15b0a57a7340a338ffdb",
        "pause":                     "k8s.gcr.io/pause:3.6",
        "service_ca_operator":       "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe468f25881e7b5ae8118c7d54b41a7fbb132a186f0156bbe46df0fd6a2f1f8",
        "odf_topolvm":               "quay.io/rhceph-dev/odf4-odf-topolvm-rhel8@sha256:2855918d1849c99a835eb03c53ce07170c238111fd15d2fe50cd45611fcd1ceb",
        "ose_csi_ext_provisioner":   "quay.io/rhceph-dev/openshift-ose-csi-external-provisioner@sha256:c3b2417f8fcb8883275f0e613037f83133ccc3f91311a30688e4be520544ea4a",
        "ose_csi_ext_resizer":       "quay.io/rhceph-dev/openshift-ose-csi-external-resizer@sha256:213f43d61b3a214a4a433c7132537be082a108d55005f2ba0777c2ea97489799",
        "topolvm-csi-snapshotter":   "quay.io/rhceph-dev/openshift-ose-csi-external-snapshotter@sha256:734c095670d21b77f18c84670d6c9a7742be1d9151dca0da20f41858ede65ed8",
        "ose_csi_livenessprobe":     "quay.io/rhceph-dev/openshift-ose-csi-livenessprobe@sha256:b05559aa038708ab448cfdfed2ca880726aed6cc30371fea4d6a42c972c0c728",
        "ose_csi_node_registrar":    "quay.io/rhceph-dev/openshift-ose-csi-node-driver-registrar@sha256:fb0f5e531847db94dcadc61446b9a892f6f92ddf282e192abf2fdef6c6af78f2",
    }
}
EOF

Also, since we are not using a package-installed Golang, we will comment out the Golang build requirements in the specification file.

$ sed -e '/golang/ s/^#*/#/' -i ~/microshift/packaging/rpm/microshift.spec
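
To confirm the sed did what we wanted, we can grep the spec file; any golang build requirements should now show up commented out:

$ grep golang ~/microshift/packaging/rpm/microshift.spec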

With the release file updated we can proceed to build the Microshift rpm packages.

$ cd ~/microshift
$ make rpm

After the rpm packages have been created, proceed to install them.

$ sudo dnf localinstall -y ~/microshift/_output/rpmbuild/RPMS/*/*.rpm

Note the above installation will pull in the following dependencies:

NetworkManager-ovs                aarch64  1:1.36.0-5.el9_0  rhel-9-for-aarch64-appstream-rpms
conntrack-tools                   aarch64  1.4.5-10.el9_0.1  rhel-9-for-aarch64-appstream-rpms
libnetfilter_cthelper             aarch64  1.0.0-22.el9      rhel-9-for-aarch64-appstream-rpms
libnetfilter_cttimeout            aarch64  1.0.0-19.el9      rhel-9-for-aarch64-appstream-rpms
libnetfilter_queue                aarch64  1.0.5-1.el9       rhel-9-for-aarch64-appstream-rpms
openvswitch-selinux-extra-policy  noarch   1.0-31.el9fdp     fast-datapath-for-rhel-9-aarch64-rpms
openvswitch2.17                   aarch64  2.17.0-49.el9fdp  fast-datapath-for-rhel-9-aarch64-rpms
unbound-libs                      aarch64  1.13.1-13.el9_0   rhel-9-for-aarch64-appstream-rpms

Set the pull secret for the cri-o environment by placing it in the following file.

$ sudo vi /etc/crio/openshift-pull-secret
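
The file expects the standard container registry auth JSON that comes with an OpenShift pull secret. As a rough sketch (the auth values below are placeholders, not real credentials), the contents look like the following, and tightening the file permissions afterwards is a reasonable precaution:

{"auths":{"quay.io":{"auth":"<base64 credentials>"},"registry.redhat.io":{"auth":"<base64 credentials>"}}}

$ sudo chmod 600 /etc/crio/openshift-pull-secret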

Now let's enable and start the cri-o service.

$ sudo systemctl enable crio --now

Manually pull the Arm topolvm images we defined in the release_arm64.go file above. Again, note these images are not publicly available and require access to the repository.

$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/odf4-odf-topolvm-rhel8@sha256:2855918d1849c99a835eb03c53ce07170c238111fd15d2fe50cd45611fcd1ceb
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-external-provisioner@sha256:c3b2417f8fcb8883275f0e613037f83133ccc3f91311a30688e4be520544ea4a
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-external-resizer@sha256:213f43d61b3a214a4a433c7132537be082a108d55005f2ba0777c2ea97489799
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-external-snapshotter@sha256:734c095670d21b77f18c84670d6c9a7742be1d9151dca0da20f41858ede65ed8
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-livenessprobe@sha256:b05559aa038708ab448cfdfed2ca880726aed6cc30371fea4d6a42c972c0c728
$ sudo crictl pull --auth "<YOUR AUTH TOKEN>" quay.io/rhceph-dev/openshift-ose-csi-node-driver-registrar@sha256:fb0f5e531847db94dcadc61446b9a892f6f92ddf282e192abf2fdef6c6af78f2

At this point we are ready to start Microshift up.

$ sudo systemctl enable microshift --now
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.
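
The first startup can take a few minutes while images are pulled, so if we want to watch Microshift come up we can follow its journal:

$ sudo journalctl -u microshift -f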

Once the service has started, let's go ahead and create a hidden directory called .kube and copy the kubeconfig into it.

$ mkdir ~/.kube
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

In a few minutes we can then issue an oc get pods -A and hopefully see the following pods running.

$ oc get pods -A
NAMESPACE                  NAME                                  READY   STATUS    RESTARTS   AGE
openshift-dns              dns-default-ph555                     1/2     Running   0          6m57s
openshift-dns              node-resolver-brnj6                   1/1     Running   0          6m57s
openshift-ingress          router-default-54bc9ff944-clr4r       1/1     Running   0          6m57s
openshift-ovn-kubernetes   ovnkube-master-t9q4w                  4/4     Running   0          6m57s
openshift-ovn-kubernetes   ovnkube-node-f6z66                    1/1     Running   0          6m57s
openshift-service-ca       service-ca-5bb4c5d7f7-zs2gg           1/1     Running   0          6m57s
openshift-storage          topolvm-controller-5d4f58ff8c-kl7v4   4/4     Running   0          6m57s
openshift-storage          topolvm-node-7wsh5                    4/4     Running   0          6m57s
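
As one more check, we can confirm the node registered itself and reports the arm64 architecture; the jsonpath query below pulls the architecture straight from the node's status:

$ oc get nodes
$ oc get node -o jsonpath='{.items[0].status.nodeInfo.architecture}'
arm64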

Hopefully this provides a glimpse of what one can do on a Red Hat Enterprise Linux 9 virtual machine running on an Apple M1 processor once the Arm packages and images for Microshift become readily available. It could be a great way to do test and development work before transferring it to real Arm-based edge hardware.

Saturday, October 08, 2022

Deploy Microshift on Apple M1 Virtual Machine

I have been experimenting recently with the Apple Virtualization Framework in the macOS Ventura beta. I have written a previous blog about using Red Hat Advanced Cluster Management for Kubernetes to deploy Single Node OpenShift on a virtual machine, and there is also a YouTube video of me installing Red Hat Enterprise Linux 9 on a similarly configured virtual machine. Today, however, I want to explore installing Microshift on a similar configuration to the one I used in those previous write-ups. After all, Microshift is an optimized version of OpenShift & Kubernetes for small-form-factor and edge environments. Let's explore what a basic installation looks like in the remainder of this blog.

Lab Environment

The following lab environment was created for this experiment:

  • MacBook Pro
    • M1 Max Processor
    • 32GB of memory
    • 1.8TB SSD
  • MacOS Ventura Beta 8
  • UTM Virtualization UI
  • 1 Virtual Machine using Apple Virtualization Framework
    • 8 cores of vCPU
    • 24GB memory
    • 120GB disk
    • Fedora 35 aarch64 installed
    • Static IP address configured

Now that we have a brief overview of the environment, let's move on to installing Microshift.

Configure & Deploy Microshift

Microshift is not that difficult to get up and running, and installation takes just a few simple steps. To begin, let's log into the Fedora 35 host via ssh.

$ ssh bschmaus@10.0.0.25
The authenticity of host '10.0.0.25 (10.0.0.25)' can't be established.
ECDSA key fingerprint is SHA256:OK9JNTWDGmYDnsb+ka4ynw91ihXGnsaa+Np8ExmR7is.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '10.0.0.25' (ECDSA) to the list of known hosts.
bschmaus@10.0.0.25's password:
Web console: https://m1:9090/ or https://10.0.0.25:9090/

Once we have established our ssh connection, I want to execute a few commands to show this is indeed an Apple Virtualization Framework virtual machine running on an M1 processor, which has Arm cores. First we will run dmidecode to show the system information.

$ sudo dmidecode | more
# dmidecode 3.3
Getting SMBIOS data from sysfs.
SMBIOS 3.3.0 present.
Table at 0x64BDE9000.

Handle 0x0000, DMI type 1, 27 bytes
System Information
        Manufacturer: Apple Inc.
        Product Name: Apple Virtualization Generic Platform
        Version: 1
        Serial Number: Virtualization-c99bfd1d-3630-4c9c-815a-c4aff99a4e9b
        UUID: 1dfd9bc9-3036-9c4c-815a-c4aff99a4e9b
        Wake-up Type: Power Switch
        SKU Number: Not Specified
        Family: Not Specified
(...)

Next let's display the Fedora version and the kernel version.

$ cat /etc/fedora-release
Fedora release 35 (Thirty Five)
$ uname -a
Linux m1 5.14.10-300.fc35.aarch64 #1 SMP Thu Oct 7 20:32:40 UTC 2021 aarch64 aarch64 aarch64 GNU/Linux

With the confirmation of the system out of the way we can begin installing Microshift. The first step is to pin cri-o to the 1.21 version.

$ sudo dnf module enable -y cri-o:1.21
Fedora 35 - aarch64                          11 MB/s |  75 MB  00:06
Fedora 35 openh264 (From Cisco) - aarch64   2.0 kB/s | 2.5 kB  00:01
Fedora Modular 35 - aarch64                 1.3 MB/s | 3.2 MB  00:02
Fedora 35 - aarch64 - Updates               4.4 MB/s |  31 MB  00:07
Fedora Modular 35 - aarch64 - Updates       2.0 MB/s | 3.7 MB  00:01
Dependencies resolved.
================================================================================
 Package            Architecture      Version             Repository       Size
================================================================================
Enabling module streams:
 cri-o                                1.21

Transaction Summary
================================================================================

Complete!

With cri-o pinned to the version we need, we can now install cri-o and cri-tools on the virtual machine. Additional dependencies will also be pulled in during the process.

$ sudo dnf install -y cri-o cri-tools
Last metadata expiration check: 0:03:11 ago on Sat 08 Oct 2022 09:04:19 AM CDT.
Dependencies resolved.
================================================================================
 Package                       Arch     Version                             Repository       Size
================================================================================
Installing:
 cri-o                         aarch64  1.21.3-1.module_f35+13330+6bc9c749  updates-modular  21 M
 cri-tools                     aarch64  1.19.0-1.module_f35+12974+2bc66b5d  updates-modular  5.5 M
Installing dependencies:
 conmon                        aarch64  2:2.1.0-2.fc35                      updates          53 k
 container-selinux             noarch   2:2.169.0-1.fc35                    fedora           50 k
 containernetworking-plugins   aarch64  1.1.0-1.fc35                        updates          7.9 M
 containers-common             noarch   4:1-45.fc35                         updates          76 k
 criu                          aarch64  3.16-2.fc35                         fedora           511 k
 fuse-common                   aarch64  3.10.5-1.fc35                       fedora           8.3 k
 fuse3                         aarch64  3.10.5-1.fc35                       fedora           54 k
 fuse3-libs                    aarch64  3.10.5-1.fc35                       fedora           90 k
 libbsd                        aarch64  0.10.0-8.fc35                       fedora           105 k
 libnet                        aarch64  1.2-4.fc35                          fedora           60 k
 libslirp                      aarch64  4.6.1-2.fc35                        fedora           72 k
 runc                          aarch64  2:1.1.3-1.fc35                      updates          2.8 M
 socat                         aarch64  1.7.4.2-1.fc35                      updates          300 k
Installing weak dependencies:
 aardvark-dns                  aarch64  1.0.3-1.fc35                        updates          1.0 M
 fuse-overlayfs                aarch64  1.9-1.fc35                          updates          67 k
 netavark                      aarch64  1.0.3-1.fc35                        updates          2.0 M
 slirp4netns                   aarch64  1.1.12-2.fc35                       fedora           56 k

Transaction Summary
================================================================================
Install  19 Packages

Total download size: 41 M
Installed size: 212 M
Downloading Packages:
(...)
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
(...)
Installed:
  aardvark-dns-1.0.3-1.fc35.aarch64        conmon-2:2.1.0-2.fc35.aarch64
  container-selinux-2:2.169.0-1.fc35.noarch
  containernetworking-plugins-1.1.0-1.fc35.aarch64
  containers-common-4:1-45.fc35.noarch
  cri-o-1.21.3-1.module_f35+13330+6bc9c749.aarch64
  cri-tools-1.19.0-1.module_f35+12974+2bc66b5d.aarch64
  criu-3.16-2.fc35.aarch64                 fuse-common-3.10.5-1.fc35.aarch64
  fuse-overlayfs-1.9-1.fc35.aarch64        fuse3-3.10.5-1.fc35.aarch64
  fuse3-libs-3.10.5-1.fc35.aarch64         libbsd-0.10.0-8.fc35.aarch64
  libnet-1.2-4.fc35.aarch64                libslirp-4.6.1-2.fc35.aarch64
  netavark-1.0.3-1.fc35.aarch64            runc-2:1.1.3-1.fc35.aarch64
  slirp4netns-1.1.12-2.fc35.aarch64        socat-1.7.4.2-1.fc35.aarch64

Complete!

After cri-o is installed we can enable it and start it immediately.

$ sudo systemctl enable crio --now
Created symlink /etc/systemd/system/multi-user.target.wants/crio.service → /usr/lib/systemd/system/crio.service.

The Microshift packages live in an extra COPR repository, so we will need to enable that repository first.

$ sudo dnf copr enable -y @redhat-et/microshift
/usr/lib/python3.10/site-packages/dnf-plugins/copr.py:433: DeprecationWarning: distro.linux_distribution() is deprecated. It should only be used as a compatibility shim with Python's platform.linux_distribution(). Please use distro.id(), distro.version() and distro.name() instead.
  dist = linux_distribution()
Enabling a Copr repository. Please note that this repository is
not part of the main distribution, and quality may vary.

The Fedora Project does not exercise any power over the contents of
this repository beyond the rules outlined in the Copr FAQ at
<https://docs.pagure.org/copr.copr/user_documentation.html#what-i-can-build-in-copr>,
and packages are not held to any quality or security level.

Please do not file bug reports about these packages in Fedora
Bugzilla. In case of problems, contact the owner of this repository.

Repository successfully enabled.

Now that the extra repository is enabled we can install the Microshift rpm package and its dependencies on the virtual machine.

$ sudo dnf install -y microshift
Copr repo for microshift owned by @redhat-et    4.1 kB/s | 3.5 kB  00:00
Dependencies resolved.
================================================================================
 Package                  Arch     Version                       Repository                                                  Size
================================================================================
Installing:
 microshift               aarch64  4.8.0-2022_04_20_141053.fc35  copr:copr.fedorainfracloud.org:group_redhat-et:microshift  31 M
Upgrading:
 selinux-policy           noarch   35.19-1.fc35                  updates                                                    60 k
 selinux-policy-targeted  noarch   35.19-1.fc35                  updates                                                    6.3 M
Installing dependencies:
 conntrack-tools          aarch64  1.4.5-8.fc35                  fedora                                                     201 k
 libnetfilter_cthelper    aarch64  1.0.0-20.fc35                 fedora                                                     22 k
 libnetfilter_cttimeout   aarch64  1.0.0-18.fc35                 fedora                                                     23 k
 libnetfilter_queue       aarch64  1.0.2-18.fc35                 fedora                                                     26 k
 microshift-selinux       noarch   4.8.0-2022_04_20_141053.fc35  copr:copr.fedorainfracloud.org:group_redhat-et:microshift  20 k

Transaction Summary
================================================================================
Install  6 Packages
Upgrade  2 Packages

Total download size: 37 M
Downloading Packages:
(...)
Importing GPG key 0xC6BC2A0E:
 Userid     : "@redhat-et_microshift (None) <@redhat-et#microshift@copr.fedorahosted.org>"
 Fingerprint: C950 F16F F4CC 32E0 BCDA 1DF3 4730 C786 C6BC 2A0E
 From       : https://download.copr.fedorainfracloud.org/results/@redhat-et/microshift/pubkey.gpg
Key imported successfully
Running transaction check
Transaction check succeeded.
Running transaction test
Transaction test succeeded.
Running transaction
(...)
Upgraded:
  selinux-policy-35.19-1.fc35.noarch
  selinux-policy-targeted-35.19-1.fc35.noarch

Installed:
  conntrack-tools-1.4.5-8.fc35.aarch64
  libnetfilter_cthelper-1.0.0-20.fc35.aarch64
  libnetfilter_cttimeout-1.0.0-18.fc35.aarch64
  libnetfilter_queue-1.0.2-18.fc35.aarch64
  microshift-4.8.0-2022_04_20_141053.fc35.aarch64
  microshift-selinux-4.8.0-2022_04_20_141053.fc35.noarch

Complete!

Once the Microshift package installation is complete, we next need to open a few firewall rules and then enable and start Microshift via systemd.

$ sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
success
$ sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
success
$ sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
success
$ sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
success
$ sudo firewall-cmd --reload
success
$ sudo systemctl enable microshift --now
Created symlink /etc/systemd/system/multi-user.target.wants/microshift.service → /usr/lib/systemd/system/microshift.service.

While Microshift is starting we can pull down our oc/kubectl binaries which will enable us to interact with Microshift.

$ curl -O https://mirror.openshift.com/pub/openshift-v4/$(uname -m)/clients/ocp/stable/openshift-client-linux.tar.gz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 48.4M  100 48.4M    0     0  9425k      0  0:00:05  0:00:05 --:--:-- 12.0M

We can untar the binaries directly into the location where they will be available in our default path.

$ sudo tar -xf openshift-client-linux.tar.gz -C /usr/local/bin oc kubectl
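
As a quick sanity check, the client should now resolve from the default path and report its version:

$ oc version --client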

Now let's make a hidden .kube directory in our home directory and copy the kubeconfig for the Microshift deployment there.

$ mkdir ~/.kube
$ sudo cat /var/lib/microshift/resources/kubeadmin/kubeconfig > ~/.kube/config

At this point we should be able to interact with the Microshift deployment using oc commands we are familiar with. In the example below I ran the command a few times to show the pods creating and then their final running state.

$ oc get pods -A
NAMESPACE                       NAME                                  READY   STATUS              RESTARTS   AGE
kube-system                     kube-flannel-ds-l6v7b                 0/1     Init:0/2            0          5s
openshift-dns                   dns-default-4qxss                     0/2     ContainerCreating   0          5s
openshift-dns                   node-resolver-k2hkq                   0/1     ContainerCreating   0          5s
openshift-ingress               router-default-85bcfdd948-xgw2l       0/1     Pending             0          9s
openshift-service-ca            service-ca-7764c85869-nvwcl           0/1     Pending             0          10s

$ oc get pods -A
NAMESPACE                       NAME                                  READY   STATUS              RESTARTS   AGE
kube-system                     kube-flannel-ds-l6v7b                 1/1     Running             0          32s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qdlxs   0/1     ContainerCreating   0          2s
openshift-dns                   dns-default-4qxss                     0/2     ContainerCreating   0          32s
openshift-dns                   node-resolver-k2hkq                   1/1     Running             0          32s
openshift-ingress               router-default-85bcfdd948-xgw2l       0/1     Pending             0          36s
openshift-service-ca            service-ca-7764c85869-nvwcl           0/1     Pending             0          37s

$ oc get pods -A
NAMESPACE                       NAME                                  READY   STATUS              RESTARTS   AGE
kube-system                     kube-flannel-ds-l6v7b                 1/1     Running             0          46s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qdlxs   1/1     Running             0          16s
openshift-dns                   dns-default-4qxss                     0/2     ContainerCreating   0          46s
openshift-dns                   node-resolver-k2hkq                   1/1     Running             0          46s
openshift-ingress               router-default-85bcfdd948-xgw2l       0/1     ContainerCreating   0          50s
openshift-service-ca            service-ca-7764c85869-nvwcl           1/1     Running             0          51s

$ oc get pods -A
NAMESPACE                       NAME                                  READY   STATUS              RESTARTS   AGE
kube-system                     kube-flannel-ds-l6v7b                 1/1     Running             0          53s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qdlxs   1/1     Running             0          23s
openshift-dns                   dns-default-4qxss                     0/2     ContainerCreating   0          53s
openshift-dns                   node-resolver-k2hkq                   1/1     Running             0          53s
openshift-ingress               router-default-85bcfdd948-xgw2l       0/1     ContainerCreating   0          57s
openshift-service-ca            service-ca-7764c85869-nvwcl           1/1     Running             0          58s

$ oc get pods -A
NAMESPACE                       NAME                                  READY   STATUS              RESTARTS   AGE
kube-system                     kube-flannel-ds-l6v7b                 1/1     Running             0          100s
kubevirt-hostpath-provisioner   kubevirt-hostpath-provisioner-qdlxs   1/1     Running             0          70s
openshift-dns                   dns-default-4qxss                     2/2     Running             0          100s
openshift-dns                   node-resolver-k2hkq                   1/1     Running             0          100s
openshift-ingress               router-default-85bcfdd948-xgw2l       1/1     Running             0          104s
openshift-service-ca            service-ca-7764c85869-nvwcl           1/1     Running             0          105s
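
To take it one step further (this part is my own sketch rather than something from the walkthrough above), we could deploy a small test workload and expose it through the ingress router to confirm the cluster actually schedules and serves an application. The deployment name is arbitrary, and the nginx image is assumed to have an aarch64 variant available:

$ oc create deployment hello-nginx --image=nginx
$ oc expose deployment hello-nginx --port=80
$ oc expose service hello-nginx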

Hopefully this gives an idea of how easy deploying Microshift can be, and it opens up a lot of possibilities for edge developers to spawn their very own personal Microshift development environment!