Sunday, March 10, 2024

Biometric Authentication in Fedora 39

 

I have been using an Apple MacBook with the M1 chip for a year now and have grown accustomed to using the fingerprint reader to log into the device rather than typing my password every time I unlock it.  So when I received a new Lenovo ThinkPad P1 with a Synaptics Prometheus MIS Touch fingerprint reader running Fedora 39, I was intrigued to see whether I could get the reader to authenticate my user.

Luckily this process was more trivial than I thought it would be, since Freedesktop.org provides the fprintd daemon.  However, I found it a little harder than expected to find a comprehensive set of steps.  Further, not all fingerprint readers are supported, so you have to determine whether a driver exists for yours.  In my case my reader was supported.  The rest of this blog briefly outlines the steps required to successfully authenticate with the fingerprint reader.
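Before going further, a quick way to check whether the reader is even visible is to look for it on the USB bus; on my ThinkPad the Synaptics device shows up in lsusb output (the exact device string will vary by model):

$ lsusb | grep -i synaptics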

The first step is to install the required packages on the system:

$ sudo dnf install fprintd fprintd-pam

Then start fprintd.service with systemctl.

$ sudo systemctl start fprintd.service

We can check that the service is running by using systemctl status.  I do want to point out, though, that the service will stop on its own after a short while if no fingerprints are enrolled, so do not be alarmed.

$ sudo systemctl status fprintd.service

Next let's erase any old fingerprint data in the event the reader was used before.  In most cases this step is not required, but being thorough will not hurt.

$ fprintd-delete $USER

Now we are ready to enroll a fingerprint.  Once we execute the fprintd-enroll command, we take our index finger and place it on the fingerprint reader multiple times until we get back to a command prompt.

$ fprintd-enroll

We can use the fprintd-verify command to confirm our fingerprint works before we make our PAM authentication changes.  When we execute the command, we place our index finger on the fingerprint reader, and it should return to a prompt with an exit status of 0 if successful.

$ fprintd-verify
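Since the verification result is reported through the exit code, we can check it as soon as the prompt returns:

$ fprintd-verify
$ echo $?
0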

Now that we have verified our fingerprint works, let's make the configuration changes that allow PAM to use our fingerprint for authentication.

$ sudo authselect enable-feature with-fingerprint
$ sudo authselect apply-changes

We can verify our changes by running the following command.

$ sudo authselect current
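The output varies by profile, but with-fingerprint should now appear among the enabled features; on my system it looked roughly like this:

Profile ID: sssd
Enabled features:
- with-fingerprint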

At this point we can go ahead and reboot the laptop.

When we arrive at the login screen, click on the username.  We then get the classic password prompt, but we can also use the fingerprint reader; the latter should log us in when we place our index finger on the reader.

Again, this might be trivial, but I found it interesting enough to write a quick blog to at least capture the experience.  Hopefully this will help someone else who might be struggling to get their fingerprint reader working.


Tuesday, February 27, 2024

Exploring Golang with MQTT: File Transfers

While I've dabbled in writing scripts using Perl, Bash, and Python, I wouldn't necessarily label myself as a developer. However, I do have a penchant for automation, organizing logic, and embracing challenges. It was this innate curiosity that led me to explore Golang recently and experiment with its integration with MQTT.  The harmonious combination of MQTT's lightweight messaging protocol and Go's concurrency model presents a compelling case for pairing the two.

MQTT (Message Queuing Telemetry Transport) is a lightweight publish-subscribe messaging protocol designed for efficient communication between devices with limited bandwidth and processing capabilities. It excels in scenarios where real-time data exchange is crucial, such as IoT (Internet of Things) applications, telemetry systems, and messaging platforms.

Golang, or Go, is a modern and efficient programming language known for its simplicity, concurrency support, and performance. It provides built-in support for concurrent programming through goroutines and channels, making it ideal for building highly concurrent and scalable systems.

When you combine MQTT with Go, you leverage the strengths of both technologies to create robust, scalable, and real-time communication systems. Here's why this combination is so compelling:

Efficiency: Both MQTT and Go are designed for efficiency. MQTT's lightweight protocol minimizes bandwidth and processing overhead, making it suitable for resource-constrained environments. Go's efficient runtime and concurrency model allow you to handle a large number of concurrent connections and process messages concurrently with minimal overhead.

Concurrency: Go's built-in support for concurrency with goroutines and channels aligns perfectly with MQTT's asynchronous messaging paradigm. You can easily handle thousands of concurrent MQTT connections and process incoming messages concurrently, leveraging the power of parallelism to scale your applications.

Simplicity: MQTT and Go are both known for their simplicity and ease of use. With the paho.mqtt.golang library, integrating MQTT into your Go applications is straightforward and intuitive. You can quickly connect to MQTT brokers, publish and subscribe to topics, and handle messages with minimal boilerplate code.

Scalability: The combination of MQTT and Go enables you to build highly scalable systems that can handle massive workloads with ease. Whether you're building IoT platforms with millions of devices or real-time messaging systems with high throughput requirements, MQTT and Go provide the scalability you need to meet your application's demands.

Fast forward to my little Go project, which came out of some of my research around transferring files via MQTT.  While maybe not the most practical use case, I had read of others doing it with Python and even with just the Mosquitto publisher and subscriber tools.  But again, the goal here was to learn a little about Go and tie it into something that motivated me to figure it out.  Hence my own file transfer publisher and subscriber written in Go.
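As an aside, the Mosquitto command line tools alone can move a small file between two terminals, assuming a broker running on localhost and reusing the same transfer topic (the file paths here are just placeholders):

$ mosquitto_sub -h localhost -t transfer -C 1 > /root/inbound/received.bin

$ mosquitto_pub -h localhost -t transfer -f /root/outbound/photo.jpg

The -C 1 flag tells the subscriber to exit after a single message, and -f publishes the contents of a file as the message payload.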

The publisher code, which can be found here, does the following:

  • Establishes a connection to the broker server with the topic of transfer
  • Watches the following directory on where it is run: /root/outbound
  • Any files that are dropped into the directory are then published to the broker on the topic of transfer
Note: MQTT has a 256MB limit on a single message payload.  Further, large payloads get chunked into segment messages, which creates another challenge.

The subscriber code, which can be found here, does the following:

  • Establishes a connection to the broker server with the topic of transfer
  • Listens for any published messages on the topic transfer
  • When a message is published, in this case a file, the subscriber pulls it down and places it into the /root/inbound directory
  • The subscriber will also try to determine the file type by using the mimetype library and looking at the first 512 bytes of the file.  If it cannot be determined it defaults to a .txt extension.
The go.mod file used for the project can be found here.

For the experiment I simply created the two inbound/outbound directories on my system.  In separate terminals I ran each of the Go programs.  In a third terminal I set up a watch on the directory listing for inbound.  Then in a fourth terminal I used the copy command to place some files into the outbound directory the publisher code was watching.  The demo is below:

Now for a first pass at this experiment I was fairly pleased but there is definitely room for improvement in the following items:

  • I want to be able to pass as arguments my MQTT server, the topic and what directory to watch.
  • I need to figure out a better way to handle file names on the subscriber side.  Currently every file ends up named file plus a Unix timestamp and then, if identified correctly, the right extension.  One thought I have is to bundle the file on the publisher side into a JSON payload containing the file name, the extension, the size and the actual data blob (see the sketch after this list).  On the subscriber side we would process the payload on receipt to recover the real file name, extension and data.
  • We need to handle large files better since they are chunked up so on the subscriber side we need to be able to take the chunks and assemble them back together and then process them.
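As a sketch of that JSON envelope idea, the payload might look something like this (the field names are illustrative, not what the current code produces):

{
  "name": "photo.jpg",
  "ext": ".jpg",
  "size": 102400,
  "data": "<base64-encoded file contents>"
}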

Using MQTT with Golang allows you to leverage the lightweight, efficient messaging protocol of MQTT and the concurrency and scalability of Go to build robust, scalable, and real-time communication systems. Whether you're building IoT applications, telemetry systems, or messaging platforms, this combination provides the performance, efficiency, and simplicity you need to succeed.


Monday, February 26, 2024

Near-Field Communication (NFC)

In an era where technology marketing revolves around AI, Edge, Cloud and Kubernetes, it's easy to overlook some of the subtle yet incredibly powerful innovations that have quietly revolutionized the way we interact with our devices and the world around us. Near-Field Communication (NFC) is one such technology, often overshadowed by its more glamorous counterparts. However, its impact is no less profound. Let's explore the wonders of NFC and discover what makes it such a game-changing technology.

My introduction to NFC began while running on a treadmill at the gym. Simply by tapping my Apple Watch against the NFC logo, I seamlessly paired the two devices. This allowed the treadmill to record my running workout effortlessly to the watch. Additionally, the watch transmitted my heart rate data to the treadmill in real time, displaying it on the screen. All of this connectivity occurred through NFC, requiring minimal effort as the devices made contact. But what exactly was this technology called NFC?

Near-Field Communication, as the name suggests, is a short-range wireless communication technology that allows devices to exchange data when they are brought within close proximity, typically a few centimeters. NFC operates on the principle of electromagnetic induction, enabling communication between two NFC-enabled devices or between an NFC device and an NFC tag.

NFC relies on radio frequency identification (RFID) technology, which enables communication between devices by establishing a radio connection when they come into close proximity. Unlike Bluetooth or Wi-Fi, which require pairing and authentication processes, NFC communication is initiated simply by bringing two NFC-enabled devices close together.


NFC devices can operate in three modes: reader/writer mode, peer-to-peer mode, and card emulation mode.

  • Reader/Writer Mode: In this mode, an NFC device acts as a reader or writer, interacting with NFC tags embedded in objects such as posters, labels, or smart cards. The device can read information from the tag or write data to it, enabling various applications such as contactless payments, access control, and information exchange.
  • Peer-to-Peer Mode: In peer-to-peer mode, two NFC-enabled devices can communicate with each other to exchange data. This mode is commonly used for sharing files, photos, videos, or contact information between smartphones, tablets, or other NFC-equipped devices.
  • Card Emulation Mode: In card emulation mode, an NFC-enabled device behaves like an NFC tag, allowing it to be used for contactless transactions or access control. This mode is frequently employed in mobile payment systems, where the device emulates a contactless smart card or payment card, enabling users to make secure transactions using their smartphones or wearable devices.

NFC technology has a wide range of applications across various industries and sectors:

  1. Contactless Payments: NFC-enabled smartphones, smartwatches, and payment cards allow users to make secure transactions by simply tapping their devices on NFC-enabled terminals at checkout counters.
  2. Access Control and Security: NFC tags or cards are used for access control in buildings, parking facilities, public transportation, and events, providing a convenient and secure way to authenticate users and grant them access.
  3. Smart Advertising and Marketing: NFC tags embedded in posters, flyers, product packaging, or retail displays enable interactive marketing campaigns, allowing consumers to access additional information, promotional offers, or multimedia content by tapping their smartphones on the tags.
  4. Transportation and Ticketing: NFC technology is widely used in public transportation systems for contactless ticketing and fare collection, streamlining the boarding process and enhancing passenger convenience.
  5. Healthcare and Wellness: NFC-enabled devices and wearable sensors are utilized in healthcare applications such as patient monitoring, medication adherence, and medical device connectivity, facilitating remote monitoring and personalized healthcare solutions.

Near-Field Communication (NFC) may not always seize the headlines like some of its glitzy counterparts, but its impact on our daily lives is undeniable. From enabling contactless payments and access control to facilitating seamless data exchange and interactive experiences, NFC technology has quietly permeated various aspects of our interconnected world. As we continue to embrace the era of connectivity and digital innovation, NFC stands as a testament to the power of simplicity and proximity in shaping the way we interact with technology and each other.

Tuesday, November 28, 2023

Simplicity of Linux Routing Brings OpenShift Portability

Anyone who has ever done a proof of concept at a customer site knows how daunting it can be. There is allocating the customer's environment from a physical space perspective, power and cooling, and then the elephant in the room: networking. Networking always tends to be the most challenging because the way a customer architects and secures their network varies from customer to customer. Hence, when delivering a proof of concept, wouldn't it be awesome if all we needed was a single IP address and uplink for connectivity? Linux has always given us the capability to provide such a simple, elegant solution. It's the very reason why router distros like OPNsense, OpenWRT, pfSense and IPFire are based on Linux. In the following blog, I will review configuring such a setup with the idea of providing the simplicity of a single uplink for a proof of concept.

In this example, I wanted to deliver a working Red Hat OpenShift compact cluster that I could bring anywhere. A fourth node acting as the gateway box will also run some infrastructure components, with a switch to tie it all together. In the diagram below, we can see the layout of the configuration and how the networking is set up. I should note that this could use four physical boxes, or, as in my testing, all four nodes virtualized on a single host. We can see I have an interface enp1s0 on the gateway node that is connected to the upstream network, or maybe even the internet depending on circumstances, and then another internal interface enp2s0 which is connected to the internal network switch. All the OpenShift nodes are connected to the internal network switch as well. The internal network will never change, but the external network could be anything and could change if we wanted it to. What this means when bringing this setup to another location is that I just need to update the enp1s0 interface with the right IP address, gateway and external nameserver. Further, to ensure the OpenShift API and ingress wildcards resolve via the external DNS (whatever controls that), we just add two records and point them at the enp1s0 interface IP address. Nothing changes on the OpenShift cluster nodes or in the gateway node's DHCP or bind configurations.
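A rough text sketch of the topology described above (the external address shown is the one that appears later in this example):

              upstream network / internet
                          |
                     enp1s0 (external, e.g. 192.168.0.75)
              +---------------------------+
              |        gateway node       |  RHEL 9.3: firewalld NAT, dhcpd, bind
              +---------------------------+
                     enp2s0 (internal, 192.168.100.1)
                          |
                   internal switch
                   /      |      \
          adlink-vm4  adlink-vm5  adlink-vm6   (OpenShift compact cluster)
          .131        .132        .133         API VIP .134, *.apps VIP .135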

The gateway node has Red Hat Enterprise Linux 9.3 installed on it along with DHCP and Bind services both of which are listening only on the internal enp2s0 interface. Below is the dhcpd.conf config I am using.

cat /etc/dhcp/dhcpd.conf
option domain-name "schmaustech.com";
option domain-name-servers 192.168.100.1;
default-lease-time 1200;
max-lease-time 1000;
authoritative;
log-facility local7;

subnet 192.168.100.0 netmask 255.255.255.0 {
        option routers                  192.168.100.1;
        option subnet-mask              255.255.255.0;
        option domain-search            "schmaustech.com";
        option domain-name-servers      192.168.100.1;
        option time-offset              -18000;     # Eastern Standard Time
        range   192.168.100.225   192.168.100.240;
        next-server 192.168.100.1;
        if exists user-class and option user-class = "iPXE" {
            filename "ipxe";
        } else {
            filename "pxelinux.0";
        }
        class "httpclients" {
            match if substring (option vendor-class-identifier, 0, 10) = "HTTPClient";
            option vendor-class-identifier "HTTPClient";
            filename "http://192.168.100.246/arm/EFI/BOOT/BOOTAA64.EFI";
        }
}

host adlink-vm1 {
   option host-name "adlink-vm1.schmaustech.com";
   hardware ethernet 52:54:00:89:8d:d8;
   fixed-address 192.168.100.128;
}

host adlink-vm2 {
   option host-name "adlink-vm2.schmaustech.com";
   hardware ethernet 52:54:00:b1:d4:9d;
   fixed-address 192.168.100.129;
}

host adlink-vm3 {
   option host-name "adlink-vm3.schmaustech.com";
   hardware ethernet 52:54:00:5a:69:d1;
   fixed-address 192.168.100.130;
}

host adlink-vm4 {
   option host-name "adlink-vm4.schmaustech.com";
   hardware ethernet 52:54:00:ef:25:04;
   fixed-address 192.168.100.131;
}

host adlink-vm5 {
   option host-name "adlink-vm5.schmaustech.com";
   hardware ethernet 52:54:00:b6:fb:7d;
   fixed-address 192.168.100.132;
}

host adlink-vm6 {
   option host-name "adlink-vm6.schmaustech.com";
   hardware ethernet 52:54:00:09:2e:34;
   fixed-address 192.168.100.133;
}
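Before starting dhcpd, the configuration can be syntax-checked; dhcpd's test mode parses the file and exits non-zero on errors:

$ sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf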

And the Bind named.conf and schmaustech.com zone files I have configured.

$ cat /etc/named.conf
options {
    listen-on port 53 { 127.0.0.1; 192.168.100.1; };
    listen-on-v6 port 53 { any; };
    forwarders { 192.168.0.10; };
    directory     "/var/named";
    dump-file     "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file  "/var/named/data/named.recursing";
    secroots-file   "/var/named/data/named.secroots";
        allow-query    { any; };
    recursion yes;
    dnssec-enable yes;
    dnssec-validation yes;
    dnssec-lookaside auto;
    bindkeys-file "/etc/named.root.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
    type hint;
    file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "schmaustech.com" IN {
        type master;
        file "schmaustech.com.zone";
};

zone    "100.168.192.in-addr.arpa" IN {
       type master;
       file "100.168.192.in-addr.arpa";
};

$ cat /var/named/schmaustech.com.zone 
$TTL 1D
@   IN SOA  dns.schmaustech.com.  root.dns.schmaustech.com. (
                                       2022121315     ; serial
                                       1D              ; refresh
                                       1H              ; retry
                                       1W              ; expire
                                       3H )            ; minimum

$ORIGIN         schmaustech.com.
schmaustech.com.            IN      NS      dns.schmaustech.com.
dns                     IN      A       192.168.100.1
adlink-vm1    IN    A    192.168.100.128
adlink-vm2    IN    A    192.168.100.129
adlink-vm3    IN    A    192.168.100.130
adlink-vm4    IN    A    192.168.100.131
adlink-vm5    IN    A    192.168.100.132
adlink-vm6    IN    A    192.168.100.133
api.adlink    IN    A    192.168.100.134
api-int.adlink    IN    A    192.168.100.134
*.apps.adlink    IN    A    192.168.100.135
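With the zone in place, we can sanity-check the BIND configuration and confirm the OpenShift records resolve against the internal interface:

$ sudo named-checkconf /etc/named.conf
$ sudo named-checkzone schmaustech.com /var/named/schmaustech.com.zone
zone schmaustech.com/IN: loaded serial 2022121315
OK
$ dig @192.168.100.1 +short api.adlink.schmaustech.com
192.168.100.134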

In order to have the proper network address translation and service redirection we need to modify the default firewalld configuration on the gateway box.

First let's go ahead and see what the active zone is with firewalld. We will find that both interfaces are in the public zone which is the default.

$ sudo firewall-cmd --get-active-zone
public
  interfaces: enp2s0 enp1s0

We will first set our two interfaces in variables to make the rest of the commands easier to follow: enp1s0 will be the external interface and enp2s0 the internal one (these are plain shell variables, so no sudo is required). Then we will set the default zone to internal. Note we do not need to create either zone, because firewalld ships with both an external and an internal zone by default. We can then assign the interfaces to their respective zones.

$ EXTERNAL=enp1s0
$ INTERNAL=enp2s0

$ sudo firewall-cmd --set-default-zone=internal
success

$ sudo firewall-cmd --change-interface=$EXTERNAL --zone=external --permanent
The interface is under control of NetworkManager, setting zone to 'external'.
success

$ sudo firewall-cmd --change-interface=$INTERNAL --zone=internal --permanent
The interface is under control of NetworkManager, setting zone to 'internal'.
success

Next we can enable masquerading on the zones. We will find that masquerading is already enabled by default for the external zone. However, if you chose different zone names, keep in mind that both need to be set.

$ sudo firewall-cmd --zone=external --add-masquerade --permanent
Warning: ALREADY_ENABLED: masquerade
success

$ sudo firewall-cmd --zone=internal --add-masquerade --permanent
success

Now we can add the rules to forward traffic between zones.

$ sudo firewall-cmd --direct --permanent --add-rule ipv4 nat POSTROUTING 0 -o $EXTERNAL -j MASQUERADE
success

$ sudo firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 -i $INTERNAL -o $EXTERNAL -j ACCEPT
success

$ sudo firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 -i $EXTERNAL -o $INTERNAL -m state --state RELATED,ESTABLISHED -j ACCEPT
success

At this point let's go ahead and reload our firewall and show the active zones again. Now we should see our interfaces are in their proper zones and active.

$ sudo firewall-cmd --reload
success

$ sudo firewall-cmd --get-active-zone
external
  interfaces: enp1s0
internal
  interfaces: enp2s0

If we look at each zone we can see the default configuration that currently exists for each zone.

$ sudo firewall-cmd --list-all --zone=external
external (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: ssh
  ports:
  protocols:
  forward: no
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

$ sudo firewall-cmd --list-all --zone=internal
internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp2s0
  sources:
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  forward: no
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

The zones need to be updated for OpenShift so that any external traffic bound for https and port 6443 is forwarded to the OpenShift ingress virtual IP address and the OpenShift API virtual IP address respectively. We also need to allow DNS traffic on the internal zone so we can resolve anything outside of our OpenShift environment's DNS records (like registry.redhat.io).

$ sudo firewall-cmd --permanent --zone=external --add-service=https
success
$ sudo firewall-cmd --permanent --zone=internal --add-service=https
success
$ sudo firewall-cmd --permanent --zone=external --add-forward-port=port=443:proto=tcp:toport=443:toaddr=192.168.100.135
success
$ sudo firewall-cmd --permanent --zone=external --add-port=6443/tcp
success
$ sudo firewall-cmd --permanent --zone=internal --add-port=6443/tcp
success
$ sudo firewall-cmd --permanent --zone=external --add-forward-port=port=6443:proto=tcp:toport=6443:toaddr=192.168.100.134
success
$ sudo firewall-cmd --permanent --zone=internal --add-service=dns
success
$ sudo firewall-cmd --reload
success

After reloading our configuration, let's take a look at the external and internal zones to validate that our changes took effect.

$ sudo firewall-cmd --list-all --zone=external
external (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources: 
  services: https ssh
  ports: 6443/tcp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
    port=443:proto=tcp:toport=443:toaddr=192.168.100.135
    port=6443:proto=tcp:toport=6443:toaddr=192.168.100.134
  source-ports: 
  icmp-blocks: 
  rich rules:

$ sudo firewall-cmd --list-all --zone=internal
internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp2s0
  sources: 
  services: cockpit dhcpv6-client dns https mdns samba-client ssh
  ports: 6443/tcp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Up to this point we would have a working setup if we were on Red Hat Enterprise Linux 8.x. However, there were changes in Red Hat Enterprise Linux 9.x, so we need to add an internal-to-external policy to ensure proper ingress/egress traffic flow.

$ sudo firewall-cmd --permanent --new-policy policy_int_to_ext
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --add-ingress-zone internal
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --add-egress-zone external
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --set-priority 100
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --set-target ACCEPT
success
$ sudo firewall-cmd --reload
success

Let's take a quick look at the policy we set to confirm it is there.

$ sudo firewall-cmd --info-policy=policy_int_to_ext
policy_int_to_ext (active)
  priority: 100
  target: ACCEPT
  ingress-zones: internal
  egress-zones: external
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Now that we have completed the firewalld configuration, we should be ready to deploy OpenShift. Since I have written about deploying OpenShift quite a bit in the past, I won't go into the detailed steps here. I will point out that I used the Red Hat Assisted Installer at https://cloud.redhat.com.

Once the OpenShift installation has completed, we can pull down the kubeconfig and run a few commands to show that the cluster is operational and how its networking is configured on the nodes:

% oc get nodes -o wide
NAME                         STATUS   ROLES                         AGE     VERSION           INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                  CONTAINER-RUNTIME
adlink-vm4.schmaustech.com   Ready    control-plane,master,worker   2d23h   v1.27.6+f67aeb3   192.168.100.131   <none>        Red Hat Enterprise Linux CoreOS 414.92.202311061957-0 (Plow)   5.14.0-284.40.1.el9_2.aarch64   cri-o://1.27.1-13.1.rhaos4.14.git956c5f7.el9
adlink-vm5.schmaustech.com   Ready    control-plane,master,worker   2d23h   v1.27.6+f67aeb3   192.168.100.132   <none>        Red Hat Enterprise Linux CoreOS 414.92.202311061957-0 (Plow)   5.14.0-284.40.1.el9_2.aarch64   cri-o://1.27.1-13.1.rhaos4.14.git956c5f7.el9
adlink-vm6.schmaustech.com   Ready    control-plane,master,worker   2d22h   v1.27.6+f67aeb3   192.168.100.133   <none>        Red Hat Enterprise Linux CoreOS 414.92.202311061957-0 (Plow)   5.14.0-284.40.1.el9_2.aarch64   cri-o://1.27.1-13.1.rhaos4.14.git956c5f7.el9

We can see from the above output that the nodes are running on the 192.168.100.0/24 network, which is our internal network. However, if we ping api.adlink.schmaustech.com from my Mac, the response comes from 192.168.0.75, which happens to be the enp1s0 interface of our gateway box. Ingress names like console-openshift-console.apps.adlink.schmaustech.com also resolve to the 192.168.0.75 address.

% ping api.adlink.schmaustech.com -t 1
PING api.adlink.schmaustech.com (192.168.0.75): 56 data bytes
64 bytes from 192.168.0.75: icmp_seq=0 ttl=63 time=4.242 ms

--- api.adlink.schmaustech.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 4.242/4.242/4.242/0.000 ms

% ping console-openshift-console.apps.adlink.schmaustech.com -t 1
PING console-openshift-console.apps.adlink.schmaustech.com (192.168.0.75): 56 data bytes
64 bytes from 192.168.0.75: icmp_seq=0 ttl=63 time=2.946 ms

--- console-openshift-console.apps.adlink.schmaustech.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.946/2.946/2.946/nan ms

Finally, if we curl the OpenShift console from my Mac, we can see we also get a 200 response, so the console is accessible from outside the private network OpenShift is installed on.

% curl -k -I https://console-openshift-console.apps.adlink.schmaustech.com
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=+gglOP1AF2FjXsZ4E61xa53Dtagem8u5qFTG08ukPD6GnulryLllm7SQplizT51X5Huzqf4LTU47t7yzdCaL5g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 28 Nov 2023 22:13:14 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=d15c9d1648c3a0f52dcf8c1991ce2d19; path=/; HttpOnly; Secure; SameSite=None

Hopefully this blog helped explain how to reduce the networking headaches of delivering an OpenShift proof of concept that needs to be portable yet simple, without reinstalling OpenShift. Using stock Red Hat Enterprise Linux and firewalld makes it pretty easy to build a NAT gateway that still forwards the specific traffic required. Further, it makes it quite easy for me to carve up a single host and bring it to any one of my friends' houses for OpenShift Night.

Monday, November 06, 2023

Is Edge Really a New Concept?

 

In 1984, John Gage of Sun Microsystems coined the phrase "The Network is the Computer".  In making the statement, he was putting a stake in the ground that computers should be networked, otherwise they are not utilizing their full potential.  Ever since then, people have been connecting their servers, desktops and small devices to the network to provide connectivity and compute to a variety of locations for varying business purposes.

Take, for example, when I worked at BAE Systems back in the 2008-2010 period.  We already had remote unmanned sites with compute that was ingesting data from tests and sensors.  Further, we had to ensure that data kept its integrity for compliance and business reasons.  Developing an architecture around this to ensure reliable operation and resiliency was no small feat.  It involved integrating multiple products to ensure the systems were monitored and the data was stored locally, backed up, deduplicated and then transferred offsite via the network for a remote stored copy.  No small feat given some of these sites only had a T1 for connectivity.  However, it was a feat we accomplished, and we did it all without the ever popular "edge" marketing moniker.

Fast forward to today, and all the rage is edge, edge workloads and edge management.  As a marketing tool, the word "edge" has become synonymous with making decisions closer to where a business needs them made.  But I was already doing that back in 2008-2010 at BAE Systems.

The story marketing departments and product owners are missing is that, in order to do what I did back then, it took a highly technical resource to architect and build out the solution.  In today's world, many businesses do not have the luxury of those skilled resources to turn building blocks into such systems.  These businesses, in various industries, are looking for turnkey solutions that will let them achieve what I did years ago, quickly and cost-efficiently, while leveraging potentially non-technical staff.  However, integrating what I did into a turnkey product that is universally palatable across differing industries and customers seems daunting.

Businesses vary in how they define edge and what they are doing at the edge.  Take, for example, connectivity.  In some edge use cases, like my BAE Systems story or even retail, connectivity is usually fairly consistent and always there.  However, in other edge use cases, like mining, where vehicles might have the edge systems onboard, the connectivity could be intermittent or dynamic, in that the IP address of the device might change during the course of operation.  This makes the old push model and telemetry data gathering more difficult, because the once known IP address could have changed while the central collector system back in the datacenter has no idea about the device's new IP address.  Edge, in this case, requires a different mindset when approaching the problem.  Instead of using a push or pull model, a better solution would be leveraging a message broker architecture like the one below.

In the architecture above, I leverage an agent on our edge device that subscribes and publishes to a MQTT broker and on the server side I do the same.  That way, neither side needs to be aware of the other end's network topology, which is ideal when the edge devices might be roaming and changing.   This also gives us the ability to scale the MQTT broker via a content delivery network so we can take it globally.  Not to mention, the use of a message broker also provides a bonus of being able to allow the business to subscribe to it, enabling further data manipulation and enhancing business logic flexibility.
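As a minimal sketch of that decoupling using the Mosquitto client tools, with a placeholder broker host and device ID, each side only ever needs to know the broker. On the edge device, the agent listens for commands addressed to it:

$ mosquitto_sub -h broker.example.com -t devices/dev-123/commands

On the server side, we publish a command for that device without ever knowing its IP address:

$ mosquitto_pub -h broker.example.com -t devices/dev-123/commands -m upgrade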

Besides rethinking the current technological challenges at the edge, we also have to rethink the user experience.   The user experience needs to be easy to instantiate and consume.   In the architecture above, I provided both a UI and an API.   This provides the user with both an initial UI experience to help them understand how the product operates but also an easy way to do everyday tasks.  Again, this is needed because not everyone using the product will have technical abilities, so it has to be easy and consumable.   The video below shows a demonstration of how to do an upgrade of a device from the UI.  The UI will use the message broker architecture to make the upgrade happen on a device.  In the demo, I also show on the bottom left a terminal screen of what is happening on the device as the upgrade is rolling out.   I also provide a console view of the device on the lower right so we can view when the device is rebooted.


After watching the demo, it becomes apparent that ease of use and simple requests are a must for our non-technical consumers at the edge.  Also, as I mentioned above, there is an API, so one could write automation against it if the business has those resources available.  The bottom line, though, is that it has to be easy and intuitive.

Summarizing what we just covered, let's recognize that edge is not a new concept in the computing world.  It has existed since computers were first networked together.  Edge is a difficult term to define given the variances in how different industries, and the businesses within them, consume it.  However, what should be apparent is the need to simplify and streamline how edge solutions are designed, given that many edge scenarios involve non-technical staff.  If a technology vendor can solve this challenge, either on their own or with a few partners, they will own the market.

Thursday, September 14, 2023

MQTT, Telemetry, The Edge

When we hear the term edge, depending on who we are and what experiences we have had, we tend to think of many different scenarios.  One of the main themes across those scenarios, besides the fact that edge is usually outside of the data center and filled with potential physical and environmental constraints, is the need to capture telemetry data from all of those devices: the need to understand the state of the systems out in the wild and, more importantly, to capture more detail in the event an edge device goes sideways.  The sheer number of fleet devices will produce a plethora of data points, and given we might have network constraints, we have to be cognizant of how to deliver all that data back to our central repository for compliance and visibility.  This blog will explore how MQTT might provide a solution to this voluminous problem.

For those not familiar with MQTT, it is a protocol developed back in 1999.  The main requirement for the protocol was the transfer of data over networks with low bandwidth and intermittent connections.  MQTT was developed primarily for system-to-system interaction, which makes it ideal for connecting devices in IoT networks for control actions, data exchange or even device performance monitoring.  Further, it implements bi-directional message transmission, so a device can receive and send payloads to other devices without knowing those devices' network details.  Perfect for use cases like planes, trains and automobiles, where the IP address might be dynamic and change.

MQTT has three primary "edgy" features:

  • Lightweight
  • Easy to implement and operate
  • Architecture of a publisher-subscriber model
Let's explore a bit about each of these features.  First, it is lightweight, which means the protocol is able to work on low-power devices from microcontrollers and single board computers to systems on chip (SoC).  This is definitely important since some of these devices are small and operate on battery power.  The lightweight aspect also imposes minimal requirements and costs on the data moved across the network.  This quality comes from a small protocol header and a small amount of actual payload data transmitted.  And while the maximum size of an MQTT payload is 256MB, data packets usually only contain a few hundred bytes at a time.

The second feature of MQTT is the simplicity of implementation and operation.  Because MQTT is a binary protocol that does not impose restrictions on the format of the data transmitted, the engineer is free to decide the structure and format of the data.  It can be any number of formats, like plain text, CSV or even the common JSON format.  The format really depends on the requirements of the solution being built and the medium the data transmission rides across.  Along with this openness, the protocol has control packets to establish and manage the connection, along with a mechanism based on TCP to ensure guaranteed delivery.

Finally, the architecture of MQTT differs from other classic client-server configurations in that it implements a publisher-subscriber model, where clients can do both but do not communicate directly with other clients and are not aware of each other's existence on the network.  The interaction of the clients and the transfer of the data they send is handled by an intermediary called a message broker.  The advantages of this model are:
  • Asynchronous operation ensuring there is no blocking while waiting for messages
  • Network agnostic in that the clients work with the network without knowing the topology
  • Horizontal scalability which is important when thinking of 10k to 100k devices
  • Security protection from scanning, since each client is unaware of the other clients' IP/MAC addresses
Overall the combination of these primary "edgy" features makes MQTT an ideal transport protocol for large numbers of clients needing to send a variety of data in various formats, thus making MQTT attractive in the edge space for device communication.


MQTT could also be perfect for telemetry data at the edge, and to demonstrate the concept we can think about edge from an automobile perspective.  Modern cars have hundreds of digital and analog sensors built into them, which generate thousands of data points at a high frequency.  These data points are in turn broadcast onto the vehicle's Controller Area Network (CAN) bus, which could be listened to with a logger or an MQTT client to record all of the messages being sent.  The telemetry data itself can be divided into three general categories:
  • Vehicle parameters
  • Environmental parameters
  • Physical parameters of the driver
The collection of these data points in those categories enables manufacturers and users of the vehicle to achieve goals like monitoring, increased driver safety, increased fuel efficiency, faster time to resolution on service diagnosis and even, in some cases, insight into the state of the driver.

Given the sheer volume of the data and the need to structure it in some way, compounded by the number of cars on the road, MQTT provides a great way to horizontally scale and structure the data.  The design details will be derived from the requirements of the telemetry needs and wherever constraints might exist along the path to obtaining the data points.

Take for example how we might structure the data for MQTT from the automobile sensors.  In one case we could use MQTT's topic structure and have a state topic for each item we want to measure and transmit:
  
schmausautos_telemetry_service/car_VIN/sensor/parameter/state

schmausautos_telemetry_service/5T1BF30K44U067947/engine/rpm/state
schmausautos_telemetry_service/5T1BF30K44U067947/engine/temperature/state
schmausautos_telemetry_service/5T1BF30K44U067947/engine/fuel/state
schmausautos_telemetry_service/5T1BF30K44U067947/engine/oxygen/state

schmausautos_telemetry_service/5T1BF30K44U067947/geo/latitude/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/longitude/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/elevation/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/speed/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/temperature/state

This option relies on MQTT's ability to create a semantic structure of topics.  Each topic is specific to a particular sensor and can be accessed individually without the need to pull additional data.  The advantage of this option is that the client and broker can transmit and access, respectively, only the indicators of interest.  This reduces the amount of transmitted data, which reduces the load on the network.  It is an appropriate option where wireless coverage is weak and/or intermittent but parameter control is required, because transmitting a few bytes of parameter data is easier than a full dump of data.
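As a quick sketch with the Mosquitto client tools (the broker host is a placeholder), a vehicle publishes an individual reading, while a backend consumer uses the + wildcard to watch that one parameter across the entire fleet:

$ mosquitto_pub -h broker.example.com -t schmausautos_telemetry_service/5T1BF30K44U067947/engine/rpm/state -m 5000

$ mosquitto_sub -h broker.example.com -t schmausautos_telemetry_service/+/engine/rpm/state -v

The -v flag prints the topic along with each payload, so the consumer can tell which vehicle a reading came from.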

A second option for the same type of data might be using the JSON data format and combining all of the sensor data into a single hierarchical message.  Thus, when accessing the specific vehicle's topic, the whole of the vehicle data is passed in a key-value format.  The advantage of this method is that all parameters are available in a single request.  However, because of this, and the potential for large messages, it will increase load on the network.  Further, it requires something to serialize and deserialize the JSON string at the client ends of the MQTT interchange.  This method is more useful when there is a reliable network connection and coverage.

schmausautos_telemetry_service/car_VIN/state

{
  "engine": {
    "rpm": 5000,
    "temperature": 90,
    "fuel": 80,
    "oxygen": 70
  },
  "geo": {
    "latitude": 45.0101248,
    "longitude": -93.0414592,
    "elevation": 2000,
    "speed": 60,
    "temperature": 65
  },
  ...
}

Either option, depending on the constraints in the requirements, could be valid and useful.  But overall they show the flexibility of MQTT and its ability to handle both the sheer scale and the amount of telemetry data coming in from a vehicle's multiple sensors and sources, multiplied by the number of vehicles in the fleet.

Hopefully this blog provided some insight into MQTT and its use for telemetry at the edge.  MQTT, while an old protocol, was designed from the beginning for these edge-type use cases: use cases that require low power consumption, ease of operation and flexibility to consume and present data in many formats.  And while we explored using MQTT as a method for telemetry data, there are certainly more uses for MQTT in the edge space.

Tuesday, August 15, 2023

Bandwidth Limiting at The Edge


Recently I worked with a customer concerned about bandwidth, image replication and their edge locations. The customer's concerns were warranted: they wanted to mirror a large set of software images to the edge sites, but the connectivity to those sites, while consistent, was not necessarily the best for moving large data. To compound the problem, the connectivity was also shared with other data transmitting services that the edge site relied on during daily business operations. The customer initially requested we add bandwidth-limiting capabilities to the software tooling that would be moving the images to the site. While at first glance this would seem to solve the issue, I realized it might not be a scalable or efficient solution as tools change or as other software requirements for data movement evolve. Understanding the customer's requirements and the limitations at hand, I approached the problem using some tools that are already built into Red Hat Device Edge and Red Hat OpenShift Container Platform. The rest of this blog explores and demonstrates those options depending on the use case: a Kubernetes container, a non-Kubernetes container or a standard vanilla process on Linux.

OpenShift Pod Bandwidth Limiting

For OpenShift, limiting ingress/egress bandwidth is fairly straightforward given Kubernetes traffic shaping capabilities. In the examples below we will run a basic Red Hat Universal Base Image container two different ways: one with no bandwidth restrictions and one with bandwidth restrictions. Then inside each running container we can issue a curl command pulling the same file and see how the behavior differs. It is assumed this container would be the application container issuing the commands at the customer edge location.

Below, let's create the normal pod with no restrictions by first creating the custom resource file and then creating it on the OpenShift cluster.

$ cat << EOF > ~/edgepod-normal.yaml
kind: Pod
apiVersion: v1
metadata:
  name: edgepod-normal
  namespace: default
  labels:
    run: edgepod-normal
spec:
  restartPolicy: Always
  containers:
    - resources: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      stdinOnce: true
      name: testpod-normal
      imagePullPolicy: Always
      terminationMessagePolicy: File
      tty: true
      image: registry.redhat.io/ubi9/ubi:latest
      args:
        - sh
EOF

$ oc create -f ~/edgepod-normal.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "testpod-normal" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "testpod-normal" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "testpod-normal" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "testpod-normal" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/edgepod-normal created

$ oc get pods
NAME             READY   STATUS    RESTARTS   AGE
edgepod-normal   1/1     Running   0          5s

Now that we have our normal container running let's go ahead and create the same custom resource file and container with bandwidth restrictions. The custom resource file will be identical to the original one with the exception of the annotations we add around bandwidth for ingress and egress. For the bandwidth we will be restricting to 10M in this example.

$ cat << EOF > ~/edgepod-slow.yaml
kind: Pod
apiVersion: v1
metadata:
  name: edgepod-slow
  namespace: default
  labels:
    run: edgepod-normal
  annotations: {
    "kubernetes.io/ingress-bandwidth": "10M",
    "kubernetes.io/egress-bandwidth": "10M"
  }
spec:
  restartPolicy: Always
  containers:
    - resources: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      stdinOnce: true
      name: testpod-normal
      imagePullPolicy: Always
      terminationMessagePolicy: File
      tty: true
      image: registry.redhat.io/ubi9/ubi:latest
      args:
        - sh
EOF

$ oc create -f ~/edgepod-slow.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "testpod-normal" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "testpod-normal" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "testpod-normal" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "testpod-normal" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/edgepod-slow created

$ oc get pods
NAME             READY   STATUS    RESTARTS   AGE
edgepod-normal   1/1     Running   0          4m14s
edgepod-slow     1/1     Running   0          3s

Now that both containers are up and running, let's go into edgepod-normal and run our baseline curl test.

$ oc exec -it edgepod-normal /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@edgepod-normal /]# curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0   115M      0  0:00:07  0:00:07 --:--:--  128M

We can see from the results above that we were able to transfer the 909M file in ~7 seconds at roughly 128M/s. Let's run the same command inside our edgepod-slow pod.

$ oc exec -it edgepod-slow /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@edgepod-slow /]# curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0  1107k      0  0:14:01  0:14:01 --:--:-- 1151k

We can see from the results above that our bandwidth annotations restricted the container to roughly 10M, and it took ~14 minutes to transfer the same 909M file. Thinking back to my customer's use case, this could be an option for restricting a container's traffic if they are using OpenShift.

Red Hat Enterprise Linux Bandwidth Limiting

In the previous section we looked at how OpenShift can bandwidth-limit certain containers running in the cluster. Since the edge has a variety of customers and use cases, let's explore how to apply the same bandwidth restrictions from a non-Kubernetes perspective. We will be using Traffic Control (tc), a very useful Linux utility that gives us the ability to control and shape traffic in the kernel. The tool normally ships with a variety of Linux distributions. In our demonstration environment we will be using Red Hat Enterprise Linux 9, since that is the host I have up and running.

First let's go ahead and create a container called edgepod using the ubi9 image.

$ podman run -itd --name edgepod ubi9 bash
Resolved "ubi9" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
Trying to pull registry.access.redhat.com/ubi9:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob d6427437202d done
Copying config 05936a40cf done
Writing manifest to image destination
Storing signatures
906716d99a39c5fc11373739a8aa20e192b348d0aaab2680775fe6ccc4dc00c3

Now let's go ahead and validate that the container is up and running.

$ podman ps
CONTAINER ID  IMAGE                                    COMMAND  CREATED        STATUS        PORTS  NAMES
906716d99a39  registry.access.redhat.com/ubi9:latest  bash     8 seconds ago  Up 9 seconds         edgepod

Once the container is up and running, let's run a baseline image pull inside it to confirm how long the transfer takes. We will use the same image we pulled in the OpenShift example above.

$ podman exec -it edgepod curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0  52.8M      0  0:00:17  0:00:17 --:--:-- 52.9M

We can see from the results above that it took about ~17 seconds to bring the 909M image over from the source. Keep in mind this is our baseline.

Next we need to configure the Intermediate Functional Block (ifb) interface on the Red Hat Enterprise Linux host. The ifb pseudo network interface acts as a QoS concentrator for multiple different sources of traffic. We need it because tc can only shape egress traffic on a real interface, and the traffic we are trying to slow down is ingress traffic. To get started we load the module into the kernel, setting numifbs to one because the default is two and we only need one for our single interface. Once the module is loaded, we can set the link of the device to up and confirm the interface is running.

$ sudo modprobe ifb numifbs=1
$ sudo ip link set dev ifb0 up
$ sudo ip address show ifb0
5: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 32
    link/ether b6:5c:67:99:2c:82 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b45c:67ff:fe99:2c82/64 scope link
       valid_lft forever preferred_lft forever

Now that the ifb interface is up we need to go ahead and apply some tc rules. The rules are performing the following functions in order:

  • Create a root htb queuing discipline on the ifb0 device
  • Add a class under it with a rate limit of 1mbps (in tc, mbps means megabytes per second)
  • Create a matchall filter so all traffic on ifb0 is classified into that class
  • Create an ingress queuing discipline on the external interface enp1s0
  • Redirect all ingress traffic from enp1s0 to the ifb0 device, where the shaping applies
$ sudo tc qdisc add dev ifb0 root handle 1: htb r2q 1
$ sudo tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbps
$ sudo tc filter add dev ifb0 parent 1: matchall flowid 1:1
$ sudo tc qdisc add dev enp1s0 handle ffff: ingress
$ sudo tc filter add dev enp1s0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

Now that we have our bandwidth limiting capabilities configured, let's run our test again and see the results.

$ podman exec -it edgepod curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0   933k      0  0:16:38  0:16:38 --:--:--  872k

We can see that with our tc rules applied, the image transferred at a much slower rate, as expected, which would again ensure we are not consuming all the bandwidth if this were an edge site. Now, some might wonder whether this is applied to the whole system. The answer is yes, but if there is no system-wide requirement and only a certain job or task needs to be rate limited, we could wrap the commands into a script, execute the process at hand (our curl command in this example) and then remove the rules with the commands below.

$ sudo tc qdisc del dev enp1s0 ingress
$ sudo tc qdisc del dev ifb0 root
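Putting that idea together, here is a minimal sketch of such a wrapper script. It must run as root, it reuses the exact interface names and 1mbps rate from above, and the wrapped command is whatever process needs limiting:

#!/bin/bash
# rate-limit-run.sh -- shape ingress traffic while one command runs, then clean up.
EXT=enp1s0   # real interface receiving the traffic
IFB=ifb0     # ifb device acting as the egress shaping point

# set up the ifb device and the shaping rules (same rules as above)
modprobe ifb numifbs=1
ip link set dev "$IFB" up
tc qdisc add dev "$IFB" root handle 1: htb r2q 1
tc class add dev "$IFB" parent 1: classid 1:1 htb rate 1mbps
tc filter add dev "$IFB" parent 1: matchall flowid 1:1
tc qdisc add dev "$EXT" handle ffff: ingress
tc filter add dev "$EXT" parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev "$IFB"

# run the wrapped command, e.g. ./rate-limit-run.sh curl -O http://server/big.iso
"$@"

# remove the rules so the rest of the system returns to full speed
tc qdisc del dev "$EXT" ingress
tc qdisc del dev "$IFB" root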

And for sanity's sake, let's run our command one more time to confirm we have returned to baseline speeds.

$ podman exec -it edgepod curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0  52.0M      0  0:00:17  0:00:17 --:--:-- 46.9M

Hopefully this gives anyone working in edge environments with constrained bandwidth some ideas on how to keep certain processes and/or containers from consuming all the available bandwidth on the edge link. There are obviously many other ways to apply these concepts to make the most efficient use of the bandwidth available at the edge, but we will save that for another time.