Tuesday, November 28, 2023

Simplicity of Linux Routing Brings OpenShift Portability

Anyone who has ever done a proof of concept at a customer site knows how daunting it can be. There is carving out physical space in the customer's environment, arranging power and cooling, and then the elephant in the room: networking. Networking always tends to be the most challenging piece because the way each customer architects and secures their network is different. Hence, when delivering a proof of concept, wouldn't it be awesome if all we needed was a single IP address and uplink for connectivity? Linux has always given us the capability to provide such a simple, elegant solution; it's the very reason router distros like OpenWrt and IPFire are built on Linux. In the following blog, I will review how to configure such a setup, with the goal of providing the simplicity of a single uplink for a proof of concept.

In this example, I wanted to deliver a working Red Hat OpenShift compact cluster that I could bring anywhere. A fourth node acting as the gateway box also runs some infrastructure components, with a switch to tie it all together. In the diagram below, we can see the layout of the configuration and how the networking is set up. I should note that this could use four physical boxes; in my testing, I had all four nodes virtualized on a single host. The gateway node has an interface enp1s0 that is connected to the upstream network, or maybe even the internet depending on circumstances, and another internal interface enp2s0 which is connected to the internal network switch. All the OpenShift nodes are connected to the internal network switch as well. The internal network will never change, but the external network could be anything and could change if we wanted it to. What this means when bringing this setup to another location is that I just need to update the enp1s0 interface with the right IP address, gateway and external nameserver. Further, to ensure the OpenShift API and ingress wildcard names resolve via the external DNS (whatever controls that), we just add two records and point them to the enp1s0 interface IP address. Nothing changes on the OpenShift cluster nodes or in the gateway node's DHCP or bind configuration.
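
As a rough sketch of what that relocation step might look like with NetworkManager (assuming the connection profile is simply named enp1s0, the new site uses static addressing, and the addresses below are placeholders for whatever the new site hands out), the change amounts to a couple of nmcli commands:

$ sudo nmcli connection modify enp1s0 ipv4.method manual \
    ipv4.addresses 192.168.0.75/24 ipv4.gateway 192.168.0.1 ipv4.dns 192.168.0.10
$ sudo nmcli connection up enp1s0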

The gateway node has Red Hat Enterprise Linux 9.3 installed on it along with DHCP and Bind services, both of which are listening only on the internal enp2s0 interface. Below is the dhcpd.conf config I am using.

cat /etc/dhcp/dhcpd.conf
option domain-name "schmaustech.com";
option domain-name-servers 192.168.100.1;
default-lease-time 1200;
max-lease-time 1200;
authoritative;
log-facility local7;

subnet 192.168.100.0 netmask 255.255.255.0 {
        option routers                  192.168.100.1;
        option subnet-mask              255.255.255.0;
        option domain-search            "schmaustech.com";
        option domain-name-servers      192.168.100.1;
        option time-offset              -18000;     # Eastern Standard Time
        range   192.168.100.225   192.168.100.240;
        next-server 192.168.100.1;
        if exists user-class and option user-class = "iPXE" {
            filename "ipxe";
        } else {
            filename "pxelinux.0";
        }
        class "httpclients" {
            match if substring (option vendor-class-identifier, 0, 10) = "HTTPClient";
            option vendor-class-identifier "HTTPClient";
            filename "http://192.168.100.246/arm/EFI/BOOT/BOOTAA64.EFI";
        }
}

host adlink-vm1 {
   option host-name "adlink-vm1.schmaustech.com";
   hardware ethernet 52:54:00:89:8d:d8;
   fixed-address 192.168.100.128;
}

host adlink-vm2 {
   option host-name "adlink-vm2.schmaustech.com";
   hardware ethernet 52:54:00:b1:d4:9d;
   fixed-address 192.168.100.129;
}

host adlink-vm3 {
   option host-name "adlink-vm3.schmaustech.com";
   hardware ethernet 52:54:00:5a:69:d1;
   fixed-address 192.168.100.130;
}

host adlink-vm4 {
   option host-name "adlink-vm4.schmaustech.com";
   hardware ethernet 52:54:00:ef:25:04;
   fixed-address 192.168.100.131;
}

host adlink-vm5 {
   option host-name "adlink-vm5.schmaustech.com";
   hardware ethernet 52:54:00:b6:fb:7d;
   fixed-address 192.168.100.132;
}

host adlink-vm6 {
   option host-name "adlink-vm6.schmaustech.com";
   hardware ethernet 52:54:00:09:2e:34;
   fixed-address 192.168.100.133;
}
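
Before handing out any leases, it never hurts to syntax-check the configuration and make sure the service is enabled; a quick sanity check might look like this:

$ sudo dhcpd -t -cf /etc/dhcp/dhcpd.conf
$ sudo systemctl enable --now dhcpd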

And here are the Bind named.conf and schmaustech.com zone files I have configured.

$ cat /etc/named.conf
options {
    listen-on port 53 { 127.0.0.1; 192.168.100.1; };
    listen-on-v6 port 53 { any; };
    forwarders { 192.168.0.10; };
    directory       "/var/named";
    dump-file       "/var/named/data/cache_dump.db";
    statistics-file "/var/named/data/named_stats.txt";
    memstatistics-file "/var/named/data/named_mem_stats.txt";
    recursing-file  "/var/named/data/named.recursing";
    secroots-file   "/var/named/data/named.secroots";
    allow-query     { any; };
    recursion yes;
    dnssec-validation yes;
    bindkeys-file "/etc/named.root.key";
    managed-keys-directory "/var/named/dynamic";
    pid-file "/run/named/named.pid";
    session-keyfile "/run/named/session.key";
};

logging {
        channel default_debug {
                file "data/named.run";
                severity dynamic;
        };
};

zone "." IN {
    type hint;
    file "named.ca";
};

include "/etc/named.rfc1912.zones";
include "/etc/named.root.key";

zone "schmaustech.com" IN {
        type master;
        file "schmaustech.com.zone";
};

zone    "100.168.192.in-addr.arpa" IN {
       type master;
       file "100.168.192.in-addr.arpa";
};

$ cat /var/named/schmaustech.com.zone 
$TTL 1D
@   IN SOA  dns.schmaustech.com.   root.dns.schmaustech.com. (
                                       2022121315     ; serial
                                       1D              ; refresh
                                       1H              ; retry
                                       1W              ; expire
                                       3H )            ; minimum

$ORIGIN         schmaustech.com.
schmaustech.com.        IN      NS      dns.schmaustech.com.
dns                     IN      A       192.168.100.1
adlink-vm1              IN      A       192.168.100.128
adlink-vm2              IN      A       192.168.100.129
adlink-vm3              IN      A       192.168.100.130
adlink-vm4              IN      A       192.168.100.131
adlink-vm5              IN      A       192.168.100.132
adlink-vm6              IN      A       192.168.100.133
api.adlink              IN      A       192.168.100.134
api-int.adlink          IN      A       192.168.100.134
*.apps.adlink           IN      A       192.168.100.135
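
As with DHCP, it is worth validating the configuration and zone before restarting the service; something along these lines should do it:

$ sudo named-checkconf /etc/named.conf
$ sudo named-checkzone schmaustech.com /var/named/schmaustech.com.zone
zone schmaustech.com/IN: loaded serial 2022121315
OK
$ sudo systemctl restart named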

In order to have the proper network address translation and service redirection, we need to modify the default firewalld configuration on the gateway box.

First, let's see which zone is active in firewalld. We will find that both interfaces are in the public zone, which is the default.

$ sudo firewall-cmd --get-active-zone
public
  interfaces: enp2s0 enp1s0

We will first set our two interfaces to variables to make the rest of the commands easier to follow: enp1s0 will be treated as the external interface and enp2s0 as the internal one. Then we will make internal the default zone. Note that we do not need to create either zone, because firewalld ships with both an internal and an external zone by default. We can then assign the interfaces to their respective zones.

$ EXTERNAL=enp1s0
$ INTERNAL=enp2s0

$ sudo firewall-cmd --set-default-zone=internal
success

$ sudo firewall-cmd --change-interface=$EXTERNAL --zone=external --permanent
The interface is under control of NetworkManager, setting zone to 'external'.
success

$ sudo firewall-cmd --change-interface=$INTERNAL --zone=internal --permanent
The interface is under control of NetworkManager, setting zone to 'internal'.
success

Next we can enable masquerading on the zones. We will find that masquerading is already enabled on the external zone by default; however, if one chose different zone names, it would need to be enabled on both.

$ sudo firewall-cmd --zone=external --add-masquerade --permanent
Warning: ALREADY_ENABLED: masquerade
success

$ sudo firewall-cmd --zone=internal --add-masquerade --permanent
success

Now we can add the rules to forward traffic between zones.

$ sudo firewall-cmd --direct --permanent --add-rule ipv4 nat POSTROUTING 0 -o $EXTERNAL -j MASQUERADE
success

$ sudo firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 -i $INTERNAL -o $EXTERNAL -j ACCEPT
success

$ sudo firewall-cmd --direct --permanent --add-rule ipv4 filter FORWARD 0 -i $EXTERNAL -o $INTERNAL -m state --state RELATED,ESTABLISHED -j ACCEPT
success

At this point let's go ahead and reload our firewall and show the active zones again. Now we should see our interfaces are in their proper zones and active.

$ sudo firewall-cmd --reload
success

$ sudo firewall-cmd --get-active-zone
external
  interfaces: enp1s0
internal
  interfaces: enp2s0

If we look at each zone, we can see the default configuration that currently exists.

$ sudo firewall-cmd --list-all --zone=external
external (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources:
  services: ssh
  ports:
  protocols:
  forward: no
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

$ sudo firewall-cmd --list-all --zone=internal
internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp2s0
  sources:
  services: cockpit dhcpv6-client mdns samba-client ssh
  ports:
  protocols:
  forward: no
  masquerade: yes
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

The zones need to be updated for OpenShift so that any external traffic bound for https and port 6443 is sent to the OpenShift ingress virtual IP address and the OpenShift API virtual IP address respectively. We also need to allow DNS traffic on the internal zone so the cluster nodes can resolve anything outside of our OpenShift environment's DNS records (like registry.redhat.io).

$ sudo firewall-cmd --permanent --zone=external --add-service=https
success
$ sudo firewall-cmd --permanent --zone=internal --add-service=https
success
$ sudo firewall-cmd --permanent --zone=external --add-forward-port=port=443:proto=tcp:toport=443:toaddr=192.168.100.135
success
$ sudo firewall-cmd --permanent --zone=external --add-port=6443/tcp
success
$ sudo firewall-cmd --permanent --zone=internal --add-port=6443/tcp
success
$ sudo firewall-cmd --permanent --zone=external --add-forward-port=port=6443:proto=tcp:toport=6443:toaddr=192.168.100.134
success
$ sudo firewall-cmd --permanent --zone=internal --add-service=dns
success
$ sudo firewall-cmd --reload
success

Now that we have reloaded our configuration, let's take a look at the external and internal zones to validate that our changes took effect.

$ sudo firewall-cmd --list-all --zone=external
external (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp1s0
  sources: 
  services: https ssh
  ports: 6443/tcp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
    port=443:proto=tcp:toport=443:toaddr=192.168.100.135
    port=6443:proto=tcp:toport=6443:toaddr=192.168.100.134
  source-ports: 
  icmp-blocks: 
  rich rules:

$ sudo firewall-cmd --list-all --zone=internal
internal (active)
  target: default
  icmp-block-inversion: no
  interfaces: enp2s0
  sources: 
  services: cockpit dhcpv6-client dns https mdns samba-client ssh
  ports: 6443/tcp
  protocols: 
  forward: yes
  masquerade: yes
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Up to this point, we would have a working setup if we were on Red Hat Enterprise Linux 8.x. However, there were changes made in Red Hat Enterprise Linux 9.x, and hence we need to add an internal-to-external policy to ensure proper ingress/egress traffic flow.

$ sudo firewall-cmd --permanent --new-policy policy_int_to_ext
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --add-ingress-zone internal
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --add-egress-zone external
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --set-priority 100
success
$ sudo firewall-cmd --permanent --policy policy_int_to_ext --set-target ACCEPT
success
$ sudo firewall-cmd --reload
success

Let's take a quick look at the policy we created to confirm it is there.

$ sudo firewall-cmd --info-policy=policy_int_to_ext
policy_int_to_ext (active)
  priority: 100
  target: ACCEPT
  ingress-zones: internal
  egress-zones: external
  services: 
  ports: 
  protocols: 
  masquerade: no
  forward-ports: 
  source-ports: 
  icmp-blocks: 
  rich rules:

Now that we have completed the firewalld configuration, we should be ready to deploy OpenShift. Since I have written about deploying OpenShift quite a bit in the past, I won't go into the detailed steps here. I will point out that I did use the Red Hat Assisted Installer at https://cloud.redhat.com.

Once the OpenShift installation has completed, we can pull down the kubeconfig and run a few commands to show that the cluster is operational and how networking is configured on the nodes:

% oc get nodes -o wide
NAME                         STATUS   ROLES                         AGE     VERSION           INTERNAL-IP       EXTERNAL-IP   OS-IMAGE                                                       KERNEL-VERSION                  CONTAINER-RUNTIME
adlink-vm4.schmaustech.com   Ready    control-plane,master,worker   2d23h   v1.27.6+f67aeb3   192.168.100.131   <none>        Red Hat Enterprise Linux CoreOS 414.92.202311061957-0 (Plow)   5.14.0-284.40.1.el9_2.aarch64   cri-o://1.27.1-13.1.rhaos4.14.git956c5f7.el9
adlink-vm5.schmaustech.com   Ready    control-plane,master,worker   2d23h   v1.27.6+f67aeb3   192.168.100.132   <none>        Red Hat Enterprise Linux CoreOS 414.92.202311061957-0 (Plow)   5.14.0-284.40.1.el9_2.aarch64   cri-o://1.27.1-13.1.rhaos4.14.git956c5f7.el9
adlink-vm6.schmaustech.com   Ready    control-plane,master,worker   2d22h   v1.27.6+f67aeb3   192.168.100.133   <none>        Red Hat Enterprise Linux CoreOS 414.92.202311061957-0 (Plow)   5.14.0-284.40.1.el9_2.aarch64   cri-o://1.27.1-13.1.rhaos4.14.git956c5f7.el9

We can see from the above output that the nodes are running on the 192.168.100.0/24 network, which is our internal network. However, if I ping api.adlink.schmaustech.com from my Mac, the response comes from 192.168.0.75, which happens to be the address on the enp1s0 interface of our gateway box. Any ingress names like console-openshift-console.apps.adlink.schmaustech.com also resolve to the 192.168.0.75 address.

% ping api.adlink.schmaustech.com -t 1
PING api.adlink.schmaustech.com (192.168.0.75): 56 data bytes
64 bytes from 192.168.0.75: icmp_seq=0 ttl=63 time=4.242 ms

--- api.adlink.schmaustech.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 4.242/4.242/4.242/0.000 ms

% ping console-openshift-console.apps.adlink.schmaustech.com -t 1
PING console-openshift-console.apps.adlink.schmaustech.com (192.168.0.75): 56 data bytes
64 bytes from 192.168.0.75: icmp_seq=0 ttl=63 time=2.946 ms

--- console-openshift-console.apps.adlink.schmaustech.com ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 2.946/2.946/2.946/nan ms

Finally, if we curl the OpenShift console from my Mac, we can see we also get a 200 response, so the console is accessible from outside the private network OpenShift is installed on.

% curl -k -I https://console-openshift-console.apps.adlink.schmaustech.com
HTTP/1.1 200 OK
referrer-policy: strict-origin-when-cross-origin
set-cookie: csrf-token=+gglOP1AF2FjXsZ4E61xa53Dtagem8u5qFTG08ukPD6GnulryLllm7SQplizT51X5Huzqf4LTU47t7yzdCaL5g==; Path=/; Secure; SameSite=Lax
x-content-type-options: nosniff
x-dns-prefetch-control: off
x-frame-options: DENY
x-xss-protection: 1; mode=block
date: Tue, 28 Nov 2023 22:13:14 GMT
content-type: text/html; charset=utf-8
set-cookie: 1e2670d92730b515ce3a1bb65da45062=d15c9d1648c3a0f52dcf8c1991ce2d19; path=/; HttpOnly; Secure; SameSite=None

Hopefully this blog was helpful in explaining how one can reduce the networking headaches of delivering an OpenShift proof of concept that needs to be portable yet simple, without reinstalling OpenShift. Using stock Red Hat Enterprise Linux and firewalld makes it pretty easy to build a NAT gateway and still forward specific traffic to expose what is required. Further, it makes it quite easy for me to carve up a single host and bring it to any one of my friends' houses for OpenShift Night.

Monday, November 06, 2023

Is Edge Really a New Concept?

 

In 1984, John Gage from Sun Microsystems coined the phrase "The Network is the Computer".  In making the statement, he was putting a stake into the ground that computers should be networked otherwise they are not utilizing their full potential.   Ever since then, people have been connecting their servers, desktops and small devices to the network to provide connectivity and compute to a variety of locations for varying business purposes.

Take, for example, when I worked at BAE Systems back in the 2008-2010 period. We already had remote unmanned sites with compute that was ingesting data from tests and sensors. Further, we had to ensure the integrity of that data for compliance and business reasons. Developing an architecture around this to ensure reliable operation and resiliency was no small feat. It involved integrating multiple products to ensure the systems were monitored and the data was stored locally, backed up, deduplicated and then transferred offsite via the network for a remotely stored copy. No small feat given some of these sites only had a T1 for connectivity. However, it was a feat we were able to accomplish, and we did it all without using the ever popular "edge" marketing moniker.

Fast forward today and all the rage is on edge, edge workloads and edge management.  As a marketing tool, the use of the word "edge" has become synonymous with making decisions closer to where a business needs them made.   But I was already doing that back in 2008-2010 at BAE Systems.

The story marketing departments and product owners are missing is that, in order for me to do what I did back then, it took a highly technical resource to architect and build out the solution.   In today's world, many businesses do not have the luxury of those skilled resources to take the building blocks to build such systems.  These businesses, in various industries, are looking for turnkey solutions that will allow them to achieve what I did years ago in a quick and cost efficient manner while leveraging potentially non-technical staff.  However, the integration of what I did into a turnkey product that is universally palatable across differing industries and customers seems daunting.

Businesses vary in how they define edge and what they are doing at the edge. Take, for example, connectivity. In some edge use cases, like my BAE Systems story or even retail, connectivity is usually fairly consistent and always there. However, in other edge use cases, like mining where vehicles might have the edge systems onboard, the connectivity could be intermittent or dynamic, in that the IP address of the device might change during the course of operation. This makes the old push model and telemetry data gathering more difficult, because the once-known IP address could have changed while the central collector system back in the datacenter has no idea about the device's new identity. Edge, in this case, requires a different mindset when approaching the problem. Instead of using a push or pull model, a better solution would be leveraging a message broker architecture like the one below.

In the architecture above, I leverage an agent on our edge device that subscribes and publishes to an MQTT broker, and on the server side I do the same. That way, neither side needs to be aware of the other end's network topology, which is ideal when the edge devices might be roaming and changing addresses. This also gives us the ability to scale the MQTT broker via a content delivery network so we can take it global. Not to mention, the use of a message broker also allows the business itself to subscribe to the data, enabling further data manipulation and business logic flexibility.
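
To make that decoupling concrete, here is a rough sketch using the mosquitto command line clients; the broker hostname and topic names are made up purely for illustration:

# On the edge device: publish status and listen for commands, knowing only the broker address
$ mosquitto_pub -h broker.example.com -t devices/edge-001/status -m '{"state":"online"}'
$ mosquitto_sub -h broker.example.com -t devices/edge-001/commands

# On the server side: watch every device's status and issue a command, without knowing any device IPs
$ mosquitto_sub -h broker.example.com -t 'devices/+/status'
$ mosquitto_pub -h broker.example.com -t devices/edge-001/commands -m '{"action":"upgrade"}'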

Besides rethinking the current technological challenges at the edge, we also have to rethink the user experience. The user experience needs to be easy to instantiate and consume. In the architecture above, I provide both a UI and an API. This gives the user an initial UI experience to help them understand how the product operates as well as an easy way to do everyday tasks. Again, this is needed because not everyone using the product will have technical abilities, so it has to be easy and consumable. The video below shows a demonstration of upgrading a device from the UI. The UI uses the message broker architecture to make the upgrade happen on the device. In the demo, I also show on the bottom left a terminal screen of what is happening on the device as the upgrade rolls out, and a console view of the device on the lower right so we can see when the device reboots.


After watching the demo, it becomes apparent that ease of use and simple requests are a must for our non-technical consumers at the edge. Also, as I mentioned above, there is an API, so one could write automation against it if the business has those resources available. The bottom line, though, is that it has to be easy and intuitive.

Summarizing what we just covered, let's recognize that edge is not a new concept in the computing world; it has existed since computers were first able to be networked together. Edge in itself is a difficult term to define given the variances in how different industries, and the businesses within them, consume edge. However, what should be apparent is the need to simplify and streamline how edge solutions are designed, given that many edge scenarios involve non-technical staff. If a technology vendor can solve this challenge, either on their own or with a few partners, then they will own the market.

Thursday, September 14, 2023

MQTT, Telemetry, The Edge

When we hear the term edge, depending on who we are and what experiences we have had, we tend to think of many different scenarios. However, one of the main themes across all of those scenarios, besides the fact that edge is usually outside of the data center and filled with potential physical and environmental constraints, is the need to capture telemetry data from all of those devices. We need to understand the state of the systems out in the wild and, more importantly, be able to capture more detail in the event an edge device goes sideways. The sheer number of fleet devices will produce a plethora of data points, and given we might have network constraints, we have to be cognizant of how to deliver all that data back to our central repository for compliance and visibility. This blog will explore how MQTT might provide a solution to this voluminous problem.

For those not familiar with MQTT, it is a protocol developed back in 1999. The main requirement for the protocol was the transfer of data over networks with low bandwidth and intermittent connections. MQTT was developed primarily for system-to-system interaction, which makes it ideal for connecting devices in IoT networks for control actions, data exchange or even device performance. Further, it implements bi-directional message transmission, so a device can receive and send payloads to other devices without knowing those other devices' network details. Perfect for use cases like planes, trains and automobiles where the IP address might be dynamic and change.

MQTT has three primary "edgy" features:

  • Lightweight
  • Easy to implement and operate
  • Architecture of a publisher-subscriber model
Let's explore a bit about each of these features. First, it's lightweight, which means the protocol is able to work on low-power devices from microcontrollers and single board computers to systems on chip (SoC). This is definitely important since some of these devices are small and operate on battery power. The lightweight aspect also imposes minimal requirements and costs on the data moved across the network. This quality comes from a small protocol header and a small amount of actual payload data transmitted. And while the maximum size of an MQTT payload is 256MB, data packets usually only contain a few hundred bytes at a time.

The second feature of MQTT is the simplicity of implementation and operation. Because MQTT is a binary protocol which does not impose restrictions on the format of the data transmitted, the engineer is free to decide the structure and format of the data. It can be any number of formats, like plain text, CSV or even the common JSON format. The format really depends on the requirements of the solution being built and the medium the data transmission rides across. Along with this openness in how the data is transmitted, the protocol has control packets to establish and manage the connection, along with a TCP-based mechanism to ensure guaranteed delivery.

Finally, the architecture of MQTT differs from classic client-server configurations in that it implements a publisher-subscriber model, where clients can do both but do not communicate directly with other clients and are not aware of each other's existence on the network. The interaction of the clients and the transfer of the data they send is handled by an intermediary called a message broker. The advantages of this model are:
  • Asynchronous operation ensuring there is no blocking while waiting for messages
  • Network agnostic in that the clients work with the network without knowing the topology
  • Horizontal scalability which is important when thinking of 10k to 100k devices
  • Security protection from scanning, because each client is unaware of the other clients' IP/MAC
Overall, the combination of these primary "edgy" features makes MQTT an ideal transport protocol for large numbers of clients needing to send a variety of data in various formats, and that is what makes MQTT attractive in the edge space for device communication.


MQTT could also be perfect for telemetry data at the edge, and to demonstrate the concept we can think about edge from an automobile perspective. Modern cars have hundreds of digital and analog sensors built into them which generate thousands of data points at high frequency. These data points are in turn broadcast onto the vehicle's Controller Area Network (CAN) bus, which can be listened to with a logger or an MQTT client to record all of the messages being sent. The telemetry data itself can be divided into three general categories:
  • Vehicle parameters
  • Environmental parameters
  • Physical parameters of the driver
The collection of data points in those categories enables manufacturers and users of the vehicle to achieve goals like monitoring, increased driver safety, increased fuel efficiency, faster service diagnosis and, in some cases, even insight into the state of the driver themselves.

Given the sheer volume of the data, the need to structure it in some way, and the number of cars on the road, MQTT provides a great way to horizontally scale and structure the data. The design details will be derived from the requirements of the telemetry needs and where constraints might exist along the path to obtaining the data points.

Take, for example, how we might structure the data for MQTT from the automobile sensors. In one case we could use MQTT's topic structure and have a state topic for each item we want to measure and transmit:
  
schmausautos_telemetry_service/car_VIN/sensor/parameter/state

schmausautos_telemetry_service/5T1BF30K44U067947/engine/rpm/state
schmausautos_telemetry_service/5T1BF30K44U067947/engine/temperature/state
schmausautos_telemetry_service/5T1BF30K44U067947/engine/fuel/state
schmausautos_telemetry_service/5T1BF30K44U067947/engine/oxygen/state

schmausautos_telemetry_service/5T1BF30K44U067947/geo/latitude/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/longitude/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/elevation/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/speed/state
schmausautos_telemetry_service/5T1BF30K44U067947/geo/temperature/state

This option relies on MQTT's ability to create a semantic structure of topics. Each topic is specific to a particular sensor and can be accessed individually without the need to pull additional data. The advantage of this option is that clients can transmit, and subscribers access, only the indicators of interest. This reduces the amount of transmitted data, which reduces the load on the network. It is an appropriate option where wireless coverage is weak and/or intermittent but parameter control is required, because transmitting a few bytes of parameter data is easier than a full dump of data.
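
As a small illustration of this first option (again using the mosquitto command line clients and a made-up broker host), each sensor reading is published, optionally retained, on its own state topic, and a consumer can subscribe with a wildcard to just the parameters it cares about:

# Vehicle side: publish individual sensor states (-r retains the last value on the broker)
$ mosquitto_pub -h broker.example.com -r -t schmausautos_telemetry_service/5T1BF30K44U067947/engine/rpm/state -m 5000
$ mosquitto_pub -h broker.example.com -r -t schmausautos_telemetry_service/5T1BF30K44U067947/geo/speed/state -m 60

# Collector side: subscribe to engine rpm across the whole fleet
$ mosquitto_sub -h broker.example.com -v -t 'schmausautos_telemetry_service/+/engine/rpm/state'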

A second option for the same type of data might be using the JSON data format and combining all of the sensor data into a single hierarchical message. Thus, when accessing the specific vehicle's topic, all of the vehicle data is passed in a key-value format. The advantage of this method is that all parameters are available in a single request. However, because of this and the potential for large messages, it will increase the load on the network. Further, it will also require something to serialize and deserialize the JSON string at the client ends of the MQTT interchange. This method is more useful when there is a reliable network connection and coverage. 

schmausautos_telemetry_service/car_VIN/state

{
  engine: {
   rpm: 5000,
   temperature: 90,
   fuel: 80,
   oxygen: 70,
  },
  geo: {
   latitude: 45.0101248,
   longitude: -93.0414592,
   elevation: 2000,
   speed: 60,
   temperature: 65,
  },
  ...
}

Either option, depending on the constraints in the requirements, could be valid and useful. But overall they show the flexibility of MQTT and its ability to handle both the sheer scale and the amount of telemetry data coming in from a vehicle's multiple sensors and sources, multiplied by the number of vehicles in the fleet.

Hopefully this blog provided some insight into MQTT and its use for telemetry at the edge. MQTT, while an old protocol, was designed from the beginning for these edge-type use cases: use cases that require low power consumption, ease of operation and the flexibility to consume and present data in many formats. And while we explored using MQTT as a method for moving telemetry data, there are certainly more uses for MQTT in the edge space.

Tuesday, August 15, 2023

Bandwidth Limiting at The Edge


Recently I worked with a customer concerned about bandwidth, image replication and their edge locations. The customer's concerns were warranted: they wanted to mirror a large set of software images to the edge sites, but the connectivity to those edge sites, while consistent, was not necessarily the best for moving large data. To compound the problem, the connectivity was also shared with other data transmitting services that the edge site relied on during daily business operations. The customer initially requested we add bandwidth-limiting capabilities to the software tooling that would be moving the images to the site. While at first glance this would seem to solve the issue, it made me realize this might not be a scalable or efficient solution as tools change or as other software requirements for data movement evolve. Understanding the customer's requirements and limitations, I approached this problem using some tools that are already built into Red Hat Device Edge and Red Hat OpenShift Container Platform. The rest of this blog will explore and demonstrate those options depending on the customer's use case: a Kubernetes container, a non-Kubernetes container or a standard vanilla process on Linux.

OpenShift Pod Bandwidth Limiting

For OpenShift, limiting ingress/egress bandwidth is fairly straightforward given Kubernetes traffic shaping capabilities. In the examples below we will run a basic Red Hat Universal Base Image container two different ways: one with no bandwidth restrictions and one with bandwidth restrictions. Then, inside each running container, we can issue a curl command pulling the same file and see how the behavior differs. It is assumed this container would be the application container issuing the commands at the customer edge location.

Below, let's create the normal pod with no restrictions by first writing the custom resource file and then creating it on the OpenShift cluster.

$ cat << EOF > ~/edgepod-normal.yaml
kind: Pod
apiVersion: v1
metadata:
  name: edgepod-normal
  namespace: default
  labels:
    run: edgepod-normal
spec:
  restartPolicy: Always
  containers:
    - resources: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      stdinOnce: true
      name: testpod-normal
      imagePullPolicy: Always
      terminationMessagePolicy: File
      tty: true
      image: registry.redhat.io/ubi9/ubi:latest
      args:
        - sh
EOF

$ oc create -f ~/edgepod-normal.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "testpod-normal" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "testpod-normal" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "testpod-normal" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "testpod-normal" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/edgepod-normal created

$ oc get pods
NAME             READY   STATUS    RESTARTS   AGE
edgepod-normal   1/1     Running   0          5s

Now that we have our normal container running, let's create the same custom resource file and container but with bandwidth restrictions. The custom resource file will be identical to the original with the exception of the annotations we add for ingress and egress bandwidth. We will restrict the bandwidth to 10M in this example.

$ cat << EOF > ~/edgepod-slow.yaml
kind: Pod
apiVersion: v1
metadata:
  name: edgepod-slow
  namespace: default
  labels:
    run: edgepod-normal
  annotations: { "kubernetes.io/ingress-bandwidth": "10M", "kubernetes.io/egress-bandwidth": "10M" }
spec:
  restartPolicy: Always
  containers:
    - resources: {}
      stdin: true
      terminationMessagePath: /dev/termination-log
      stdinOnce: true
      name: testpod-normal
      imagePullPolicy: Always
      terminationMessagePolicy: File
      tty: true
      image: registry.redhat.io/ubi9/ubi:latest
      args:
        - sh
EOF

$ oc create -f ~/edgepod-slow.yaml
Warning: would violate PodSecurity "restricted:latest": allowPrivilegeEscalation != false (container "testpod-normal" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (container "testpod-normal" must set securityContext.capabilities.drop=["ALL"]), runAsNonRoot != true (pod or container "testpod-normal" must set securityContext.runAsNonRoot=true), seccompProfile (pod or container "testpod-normal" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
pod/edgepod-slow created

$ oc get pods
NAME             READY   STATUS    RESTARTS   AGE
edgepod-normal   1/1     Running   0          4m14s
edgepod-slow     1/1     Running   0          3s

Now that both containers are up and running, let's go into edgepod-normal and run our baseline curl test.

$ oc exec -it edgepod-normal /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@edgepod-normal /]# curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0   115M      0  0:00:07  0:00:07 --:--:--  128M

We can see from the results above that we were able to transfer the 909M file in ~7 seconds at a speed of roughly 128M/s. Let's run the same command inside our edgepod-slow pod.

$ oc exec -it edgepod-slow /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
[root@edgepod-slow /]# curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0  1107k      0  0:14:01  0:14:01 --:--:-- 1151k

We can see from the results above that our annotations restricted the container to roughly 10M, and it took ~14 minutes to transfer the same 909M file. So if we think back to my customer's use case, this could be an option for restricting a container's traffic if they are using OpenShift.

Red Hat Enterprise Linux Bandwidth Limiting

In the previous section we looked at how OpenShift can limit bandwidth for certain containers running in the cluster. Since the edge has a variety of customers and use cases, let's explore how to apply the same bandwidth restrictions from a non-Kubernetes perspective. We will be using Traffic Control (tc), a very useful Linux utility that gives us the ability to control and shape traffic in the kernel. This tool normally ships with a variety of Linux distributions. In our demonstration environment we will be using Red Hat Enterprise Linux 9, since that is the host I have up and running.

First let's go ahead and create a container called edgepod using the ubi9 image.

$ podman run -itd --name edgepod ubi9 bash
Resolved "ubi9" as an alias (/etc/containers/registries.conf.d/001-rhel-shortnames.conf)
Trying to pull registry.access.redhat.com/ubi9:latest...
Getting image source signatures
Checking if image destination supports signatures
Copying blob d6427437202d done
Copying config 05936a40cf done
Writing manifest to image destination
Storing signatures
906716d99a39c5fc11373739a8aa20e192b348d0aaab2680775fe6ccc4dc00c3

Now let's go ahead and validate that the container is up and running.

$ podman ps
CONTAINER ID  IMAGE                                    COMMAND  CREATED        STATUS        PORTS   NAMES
906716d99a39  registry.access.redhat.com/ubi9:latest  bash     8 seconds ago  Up 9 seconds          edgepod

Once the container is up and running, let's run a baseline image pull inside the container to confirm how long it takes. We will use the same image we pulled in the OpenShift example above for the test.

$ podman exec -it edgepod curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0  52.8M      0  0:00:17  0:00:17 --:--:-- 52.9M

We can see from the results above that it took about 17 seconds to bring the 909M image over from the source. Keep in mind this is our baseline.

Next we need to configure the Intermediate Functional Block (ifb) interface on the Red Hat Enterprise Linux host. The ifb pseudo network interface acts as a QoS concentrator for multiple different sources of traffic. We need it because tc can only shape egress traffic on a real interface, and the traffic we are trying to slow down is ingress traffic. To get started we need to load the module into the kernel. We will set numifbs to one because the default is two and I just need one for my single interface. Once we load the module, we can set the link of the device to up and then confirm the interface is running.

$ sudo modprobe ifb numifbs=1
$ sudo ip link set dev ifb0 up
$ sudo ip address show ifb0
5: ifb0: <BROADCAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 32
    link/ether b6:5c:67:99:2c:82 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::b45c:67ff:fe99:2c82/64 scope link
       valid_lft forever preferred_lft forever

Now that the ifb interface is up, we need to apply some tc rules. The rules perform the following functions in order:

  • Create a root htb qdisc on the ifb0 device
  • Add a child class under that root with a rate limit of 1mbps
  • Create a matchall filter to classify all traffic on ifb0 into that class
  • Create an ingress qdisc on the external interface enp1s0
  • Redirect all ingress traffic from enp1s0 to the ifb0 device
$ sudo tc qdisc add dev ifb0 root handle 1: htb r2q 1
$ sudo tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbps
$ sudo tc filter add dev ifb0 parent 1: matchall flowid 1:1
$ sudo tc qdisc add dev enp1s0 handle ffff: ingress
$ sudo tc filter add dev enp1s0 parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

Now that we have our bandwidth limiting configured, let's run our test again and see the results.

$ podman exec -it edgepod curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0    933k      0  0:16:38  0:16:38 --:--:--  872k

We can see that with our tc rules applied the image was transferred at a much slower rate, as expected, which would again ensure we are not consuming all the bandwidth if this were an edge site. Now, this might leave some wondering: isn't this being applied to the whole system? The answer is yes, but if there is no system-wide requirement and only a certain job or task needs to be rate limited, we could wrap the commands into a script, execute the process at hand (our curl command in this example) and then remove the rules with the commands below.

$ sudo tc qdisc del dev enp1s0 ingress
$ sudo tc qdisc del dev ifb0 root
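
To sketch out that wrapper idea (a hypothetical script, with the interface name and rate hard-coded to match this example), it might look something like this:

#!/bin/bash
# rate-limited-run.sh - apply the tc/ifb rate limit, run the given command, then clean up
EXT=enp1s0

sudo modprobe ifb numifbs=1
sudo ip link set dev ifb0 up
sudo tc qdisc add dev ifb0 root handle 1: htb r2q 1
sudo tc class add dev ifb0 parent 1: classid 1:1 htb rate 1mbps
sudo tc filter add dev ifb0 parent 1: matchall flowid 1:1
sudo tc qdisc add dev $EXT handle ffff: ingress
sudo tc filter add dev $EXT parent ffff: protocol ip u32 match u32 0 0 action mirred egress redirect dev ifb0

# Run whatever command was passed in, e.g. ./rate-limited-run.sh podman exec -it edgepod curl ...
"$@"

# Remove the rules so the rest of the system returns to full speed
sudo tc qdisc del dev $EXT ingress
sudo tc qdisc del dev ifb0 root

Invoked that way, the rate limit only exists for the duration of the wrapped command.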

And for sanity's sake, let's run our command one more time to confirm we have returned to baseline speeds.

$ podman exec -it edgepod curl http://192.168.0.29/images/discovery_image_agx.iso -o test.iso
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  909M  100  909M    0     0  52.0M      0  0:00:17  0:00:17 --:--:-- 46.9M

Hopefully this gives anyone working in edge environments with constrained bandwidth some ideas on how to keep certain processes and/or containers from using all the available bandwidth on the edge link. There are obviously a lot of other ways to use these concepts to further enable the most efficient use of the bandwidth available at the edge, but we will save that for another time.

Wednesday, May 10, 2023

Experimental FIPS on Red Hat Device Edge with MicroShift

Federal agencies purchasing cryptographic-based security systems must confirm an associated FIPS 140-2 or FIPS 140-3 certificate exists. This procurement “check-box” can definitely be a deal breaker. Claims of being “designed for FIPS” or “FIPS ready” are not sufficient to pass this requirement. If FIPS certification does not exist, it will often mean there will be no sale for the vendor of choice. Many commercial and private organizations will also perceive a product as having an advantage when paired with FIPS certification. From a Red Hat Enterprise Linux perspective, whether a product is FIPS certified or not can be found in the following knowledge base article. The rest of this blog will detail how to technically install and show that FIPS is in use with a Red Hat Device Edge 9.2 rpm-ostree image that contains MicroShift 4.13.  Bear in mind, as indicated in the above knowledge base article, that Red Hat Enterprise Linux 9.2 still needs to go through the proper FIPS 140-3 certification process.  What is provided here is merely to demonstrate forward-looking thinking for when FIPS compliance is achieved.

To get familiar with Red Hat Device Edge and MicroShift, please reference the following blog post. We will use the same steps from that previous blog to build and produce the images for our FIPS experiment. The only difference is that instead of using Red Hat Device Edge 8.7 and MicroShift 4.12, we are using the newer releases of those components. Keep in mind, though, that for building the rpm-ostree image we need to reposync down the updated Fast Data Path and Red Hat OpenShift repositories for the new versions. Also, for packing the rpm-ostree image into the boot.iso with the recook script, we have to update the references from 8.7 to 9.2, which includes the boot.iso name and disk labels. Otherwise everything else can follow the same logical steps.

FIPS mode on Red Hat Device Edge begins, just like with Red Hat Enterprise Linux, at install time when we boot the media that will allow us to install the system. In our case we will take the rhde-ztp.iso image we created in the blog referenced above and use that as our deployment image on our device. When we power on the device, however, we need to edit the kernel boot arguments and add the fips=1 argument at the end.

linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64:/ks.cfg fips=1

We could also make this permanent in the grub.cfg file we created in the blog referenced above, before we generate the rhde-ztp.iso.

set default="1"

function load_video {
  insmod efi_gop
  insmod efi_uga
  insmod video_bochs
  insmod video_cirrus
  insmod all_video
}

load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2

set timeout=60
### END /etc/grub.d/00_header ###

search --no-floppy --set=root -l 'RHEL-9-2-0-BaseOS-x86_64'

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Install Red Hat Enterprise Linux 9.2' --class fedora --class gnu-linux --class gnu --class os {
	linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64:/ks.cfg fips=1
	initrdefi /images/pxeboot/initrd.img
}
menuentry 'Test this media & install Red Hat Enterprise Linux 9.2' --class fedora --class gnu-linux --class gnu --class os {
	linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 rd.live.check fips=1
	initrdefi /images/pxeboot/initrd.img
}
submenu 'Troubleshooting -->' {
	menuentry 'Install Red Hat Enterprise Linux 9.2 in text mode' --class fedora --class gnu-linux --class gnu --class os {
		linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.text quiet
		initrdefi /images/pxeboot/initrd.img
	}
	menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os {
		linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.rescue quiet
		initrdefi /images/pxeboot/initrd.img
	}
}

Once the system boots from the rhde-ztp.iso with the FIPS argument the installation will proceed using the appropriate FIPS cryptography libraries and the rpm-ostree image will be applied to the device. Once the system reboots, MicroShift will start automatically and we can begin the process of verifying that indeed FIPS mode is enabled not only at the Red Hat Device Edge layer but also within a container that is deployed on MicroShift.

Now let's begin validating that FIPS is indeed running on the edge device. First let's log in to the host and check from the Red Hat Device Edge perspective:

# cat /etc/redhat-release
Red Hat Enterprise Linux release 9.2 (Plow)
# fips-mode-setup --check
FIPS mode is enabled.

From the OS level FIPS is indeed enabled. Now let's also look at the openssl libraries installed on the edge device:

# rpm -qa|grep openssl
openssl-libs-3.0.7-6.el9_2.x86_64
openssl-3.0.7-6.el9_2.x86_64
xmlsec1-openssl-1.2.29-9.el9.x86_64
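
As an additional sanity check (assuming the crypto-policies tooling is included in the image), the system-wide crypto policy should also report FIPS:

# update-crypto-policies --show
FIPS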

Next let's confirm MicroShift is up and running on the FIPS enabled edge device.

# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
# oc get pods -A
NAMESPACE                  NAME                                 READY   STATUS    RESTARTS      AGE
openshift-dns              dns-default-pmncq                    2/2     Running   0             13h
openshift-dns              node-resolver-qh67d                  1/1     Running   0             13h
openshift-ingress          router-default-6857569799-njdsx      1/1     Running   0             13h
openshift-ovn-kubernetes   ovnkube-master-s5dpl                 4/4     Running   0             13h
openshift-ovn-kubernetes   ovnkube-node-m2bv6                   1/1     Running   1 (13h ago)   13h
openshift-service-ca       service-ca-7f49b8c7f5-rbsgf          1/1     Running   0             13h
openshift-storage          topolvm-controller-f58fcd7cb-6sggd   4/4     Running   0             13h
openshift-storage          topolvm-node-ddkhj                   4/4     Running   0             13h

Next let's create a simple deployment yaml referencing the nodejs minimal image from Red Hat:

# cat << EOF > ~/node-ubi.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-deployment
  namespace: simple
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-deployment
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: simple-deployment
        deploymentconfig: simple-deployment
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
      - image: registry.access.redhat.com/ubi8/nodejs-16-minimal
        imagePullPolicy: Always
        name: simple-deployment
        command:
        - /bin/sh
        - -c
        - |
          sleep infinity
        resources: {}
EOF

Now we can create a namespace and then deploy the simple deployment yaml we created onto MicroShift:

# oc create namespace simple
namespace/simple created
# oc create -f node-ubi.yaml 
deployment.apps/simple-deployment created

Let's verify our simple deployment nodejs pod is running:

# oc get pods -A
NAMESPACE                  NAME                                 READY   STATUS    RESTARTS      AGE
openshift-dns              dns-default-pmncq                    2/2     Running   0             13h
openshift-dns              node-resolver-qh67d                  1/1     Running   0             13h
openshift-ingress          router-default-6857569799-njdsx      1/1     Running   0             13h
openshift-ovn-kubernetes   ovnkube-master-s5dpl                 4/4     Running   0             13h
openshift-ovn-kubernetes   ovnkube-node-m2bv6                   1/1     Running   1 (13h ago)   13h
openshift-service-ca       service-ca-7f49b8c7f5-rbsgf          1/1     Running   0             13h
openshift-storage          topolvm-controller-f58fcd7cb-6sggd   4/4     Running   0             13h
openshift-storage          topolvm-node-ddkhj                   4/4     Running   0             13h
simple                     simple-deployment-66b9457cb9-v22vj   1/1     Running   0             54s

With the pod running we now need to get into a bash prompt in the simple deployment nodejs pod:

# oc exec -it simple-deployment-66b9457cb9-v22vj -n simple /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$ 

From the command prompt in the pod let's first check the openssl libraries:

bash-4.4$ rpm -qa|grep openssl
openssl-libs-1.1.1k-9.el8_7.x86_64
openssl-1.1.1k-9.el8_7.x86_64

Now let's run a few commands to show that FIPS is indeed enabled in the OpenSSL libraries:

bash-4.4$ openssl version
OpenSSL 1.1.1k  FIPS 25 Mar 2021
bash-4.4$ node -p 'crypto.getFips()'
1
bash-4.4$ node -p 'crypto.createHash("md5")'
node:internal/crypto/hash:71
  this[kHandle] = new _Hash(algorithm, xofLen);
                  ^

Error: error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS
    at new Hash (node:internal/crypto/hash:71:19)
    at Object.createHash (node:crypto:130:10)
    at [eval]:1:8
    at Script.runInThisContext (node:vm:129:12)
    at Object.runInThisContext (node:vm:313:38)
    at node:internal/process/execution:79:19
    at [eval]-wrapper:6:22
    at evalScript (node:internal/process/execution:78:60)
    at node:internal/main/eval_string:27:3 {
  library: 'digital envelope routines',
  function: 'EVP_DigestInit_ex',
  reason: 'disabled for FIPS',
  code: 'ERR_OSSL_EVP_DISABLED_FOR_FIPS'
}

Observing the output from the three commands, we can see the following:

  • The OpenSSL version shows we are using FIPS
  • When we run the node command and query the FIPS crypto state, we get 1, which indicates it is enabled
  • Finally, when we use the node command to try to create an md5 hash, we are told we cannot because FIPS is enabled

This confirms that FIPS is enabled not only from the Red Hat Device Edge OS perspective but also within MicroShift, ensuring that the standards can technically be configured and deployed on an rpm-ostree type image for Red Hat Device Edge devices.

Wednesday, February 22, 2023

OpenShift to MicroShift Resource Mapping

Recently I was approached with the task of understanding the resource differences between OpenShift and MicroShift. This is especially important if one is interested in applying governance policies and rules across a fleet of systems that might be a mix of both OpenShift and MicroShift. Knowing whether specific resource definitions are common or disparate between the two Kubernetes experiences can help an administrator know if they can use the same policy across both instances or if they need to craft a policy specifically for one or the other. Given that this information might be important for administrators, I decided to map it out.
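
For anyone wanting to reproduce or refresh this mapping on newer versions, a rough sketch of how one might generate the raw lists is below; it assumes a kubeconfig pointing at each cluster and simply diffs the resource names:

$ export KUBECONFIG=/path/to/openshift/kubeconfig
$ oc api-resources --no-headers | awk '{print $1}' | sort -u > openshift-resources.txt

$ export KUBECONFIG=/path/to/microshift/kubeconfig
$ oc api-resources --no-headers | awk '{print $1}' | sort -u > microshift-resources.txt

$ comm -23 openshift-resources.txt microshift-resources.txt   # resources only in OpenShift
$ comm -12 openshift-resources.txt microshift-resources.txt   # resources in both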

Below is a table that shows each resource definition, whether it is defined in OpenShift 4.12, MicroShift 4.12 or both, and the corresponding API version(s).

Resource  OpenShift  MicroShift  API Version
alertmanagerconfigs Yes No monitoring.coreos.com/v1alpha1, monitoring.coreos.com/v1beta1
alertmanagers Yes No monitoring.coreos.com/v1
apirequestcounts Yes No apiserver.openshift.io/v1
apiservers Yes No config.openshift.io/v1
apiservices Yes Yes apiregistration.k8s.io/v1
appliedclusterresourcequotas Yes No quota.openshift.io/v1
authentications Yes No config.openshift.io/v1, operator.openshift.io/v1
baremetalhosts Yes No metal3.io/v1alpha1
bmceventsubscriptions Yes No metal3.io/v1alpha1
brokertemplateinstances Yes No template.openshift.io/v1
buildconfigs Yes No build.openshift.io/v1
builds Yes No build.openshift.io/v1, config.openshift.io/v1
catalogsources Yes No operators.coreos.com/v1alpha1
certificatesigningrequests Yes Yes certificates.k8s.io/v1
cloudcredentials Yes No operator.openshift.io/v1
clusterautoscalers Yes No autoscaling.openshift.io/v1
clustercsidrivers Yes No operator.openshift.io/v1
clusteroperators Yes No config.openshift.io/v1
clusterresourcequotas Yes No quota.openshift.io/v1
clusterrolebindings Yes Yes authorization.openshift.io/v1, rbac.authorization.k8s.io/v1
clusterroles Yes Yes authorization.openshift.io/v1, rbac.authorization.k8s.io/v1
clusterserviceversions Yes No operators.coreos.com/v1alpha1
clusterversions Yes No config.openshift.io/v1
componentstatuses Yes Yes v1
configmaps Yes Yes v1
configs Yes No imageregistry.operator.openshift.io/v1, operator.openshift.io/v1, samples.operator.openshift.io/v1
consoleclidownloads Yes No console.openshift.io/v1
consoleexternalloglinks Yes No console.openshift.io/v1
consolelinks Yes No console.openshift.io/v1
consolenotifications Yes No console.openshift.io/v1
consoleplugins Yes No console.openshift.io/v1, console.openshift.io/v1alpha1
consolequickstarts Yes No console.openshift.io/v1
consoles Yes No config.openshift.io/v1, operator.openshift.io/v1
consoleyamlsamples Yes No console.openshift.io/v1
containerruntimeconfigs Yes No machineconfiguration.openshift.io/v1
controllerconfigs Yes No machineconfiguration.openshift.io/v1
controllerrevisions Yes Yes apps/v1
controlplanemachinesets Yes No machine.openshift.io/v1
credentialsrequests Yes No cloudcredential.openshift.io/v1
cronjobs Yes Yes batch/v1
csidrivers Yes Yes storage.k8s.io/v1
csinodes Yes Yes storage.k8s.io/v1
csisnapshotcontrollers Yes No operator.openshift.io/v1
csistoragecapacities Yes Yes storage.k8s.io/v1, storage.k8s.io/v1beta1
customresourcedefinitions Yes Yes apiextensions.k8s.io/v1
daemonsets Yes Yes apps/v1
deploymentconfigs Yes No apps.openshift.io/v1
deployments Yes Yes apps/v1
dnses Yes No config.openshift.io/v1, operator.openshift.io/v1
dnsrecords Yes No ingress.operator.openshift.io/v1
egressfirewalls Yes No k8s.ovn.org/v1
egressips Yes No k8s.ovn.org/v1
egressqoses Yes No k8s.ovn.org/v1
egressrouters Yes No network.operator.openshift.io/v1
endpoints Yes Yes v1
endpointslices Yes Yes discovery.k8s.io/v1
etcds Yes No operator.openshift.io/v1
events Yes Yes v1, events.k8s.io/v1
featuregates Yes No config.openshift.io/v1
firmwareschemas Yes No metal3.io/v1alpha1
flowschemas Yes Yes flowcontrol.apiserver.k8s.io/v1beta1, flowcontrol.apiserver.k8s.io/v1beta2
groups Yes No user.openshift.io/v1
hardwaredata Yes No metal3.io/v1alpha1
helmchartrepositories Yes No helm.openshift.io/v1beta1
horizontalpodautoscalers Yes Yes autoscaling/v1, autoscaling/v2, autoscaling/v2beta2
hostfirmwaresettings Yes No metal3.io/v1alpha1
identities Yes No user.openshift.io/v1
imagecontentpolicies Yes No config.openshift.io/v1
imagecontentsourcepolicies Yes No operator.openshift.io/v1alpha1
imagepruners Yes No imageregistry.operator.openshift.io/v1
images Yes No config.openshift.io/v1, image.openshift.io/v1
imagesignatures Yes No image.openshift.io/v1
imagestreams Yes No image.openshift.io/v1
imagestreamtags Yes No image.openshift.io/v1
imagetags Yes No image.openshift.io/v1
infrastructures Yes No config.openshift.io/v1
ingressclasses Yes Yes networking.k8s.io/v1
ingresscontrollers Yes No operator.openshift.io/v1
ingresses Yes Yes config.openshift.io/v1, networking.k8s.io/v1
insightsoperators Yes No operator.openshift.io/v1
installplans Yes No operators.coreos.com/v1alpha1
ippools Yes No whereabouts.cni.cncf.io/v1alpha1
jobs Yes Yes batch/v1
kubeapiservers Yes No operator.openshift.io/v1
kubecontrollermanagers Yes No operator.openshift.io/v1
kubeletconfigs Yes No machineconfiguration.openshift.io/v1
kubeschedulers Yes No operator.openshift.io/v1
kubestorageversionmigrators Yes No operator.openshift.io/v1
leases Yes Yes coordination.k8s.io/v1
limitranges Yes Yes v1
machineautoscalers Yes No autoscaling.openshift.io/v1beta1
machineconfigpools Yes No machineconfiguration.openshift.io/v1
machineconfigs Yes No machineconfiguration.openshift.io/v1
machinehealthchecks Yes No machine.openshift.io/v1beta1
machines Yes No machine.openshift.io/v1beta1
machinesets Yes No machine.openshift.io/v1beta1
mutatingwebhookconfigurations Yes Yes admissionregistration.k8s.io/v1
namespaces Yes Yes v1
network-attachment-definitions Yes No k8s.cni.cncf.io/v1
networkpolicies Yes Yes networking.k8s.io/v1
networks Yes No config.openshift.io/v1, operator.openshift.io/v1
nodes Yes Yes v1, config.openshift.io/v1, metrics.k8s.io/v1beta1
oauthaccesstokens Yes No oauth.openshift.io/v1
oauthauthorizetokens Yes No oauth.openshift.io/v1
oauthclientauthorizations Yes No oauth.openshift.io/v1
oauthclients Yes No oauth.openshift.io/v1
oauths Yes No config.openshift.io/v1
olmconfigs Yes No operators.coreos.com/v1
openshiftapiservers Yes No operator.openshift.io/v1
openshiftcontrollermanagers Yes No operator.openshift.io/v1
operatorconditions Yes No operators.coreos.com/v1, operators.coreos.com/v2
operatorgroups Yes No operators.coreos.com/v1, operators.coreos.com/v1alpha2
operatorhubs Yes No config.openshift.io/v1
operatorpkis Yes No network.operator.openshift.io/v1
operators Yes No operators.coreos.com/v1
overlappingrangeipreservations Yes No whereabouts.cni.cncf.io/v1alpha1
packagemanifests Yes No packages.operators.coreos.com/v1
performanceprofiles Yes No performance.openshift.io/v1, performance.openshift.io/v1alpha1, performance.openshift.io/v2
persistentvolumeclaims Yes Yes v1
persistentvolumes Yes Yes v1
poddisruptionbudgets Yes Yes policy/v1
podmonitors Yes No monitoring.coreos.com/v1
podnetworkconnectivitychecks Yes No controlplane.operator.openshift.io/v1alpha1
pods Yes Yes v1, metrics.k8s.io/v1beta1
podtemplates Yes Yes v1
preprovisioningimages Yes No metal3.io/v1alpha1
priorityclasses Yes Yes scheduling.k8s.io/v1
prioritylevelconfigurations Yes Yes flowcontrol.apiserver.k8s.io/v1beta1, flowcontrol.apiserver.k8s.io/v1beta2
probes Yes No monitoring.coreos.com/v1
profiles Yes No tuned.openshift.io/v1
projecthelmchartrepositories Yes No helm.openshift.io/v1beta1
projectrequests Yes No project.openshift.io/v1
projects Yes No config.openshift.io/v1, project.openshift.io/v1
prometheuses Yes No monitoring.coreos.com/v1
prometheusrules Yes No monitoring.coreos.com/v1
provisionings Yes No metal3.io/v1alpha1
proxies Yes No config.openshift.io/v1
rangeallocations Yes Yes security.internal.openshift.io/v1, security.openshift.io/v1
replicasets Yes Yes apps/v1
replicationcontrollers Yes Yes v1
resourceaccessreviews Yes No authorization.openshift.io/v1
resourcequotas Yes Yes v1
rolebindingrestrictions Yes No authorization.openshift.io/v1
rolebindings Yes Yes authorization.openshift.io/v1, rbac.authorization.k8s.io/v1
roles Yes Yes authorization.openshift.io/v1, rbac.authorization.k8s.io/v1
routes Yes Yes route.openshift.io/v1
runtimeclasses Yes Yes node.k8s.io/v1
schedulers Yes No config.openshift.io/v1
secrets Yes Yes v1
securitycontextconstraints Yes Yes security.openshift.io/v1
selfsubjectaccessreviews Yes Yes authorization.k8s.io/v1
selfsubjectrulesreviews Yes Yes authorization.k8s.io/v1
serviceaccounts Yes Yes v1
servicecas Yes No operator.openshift.io/v1
servicemonitors Yes No monitoring.coreos.com/v1
services Yes Yes v1
statefulsets Yes Yes apps/v1
storageclasses Yes Yes storage.k8s.io/v1
storages Yes No operator.openshift.io/v1
storagestates Yes No migration.k8s.io/v1alpha1
storageversionmigrations Yes No migration.k8s.io/v1alpha1
subjectaccessreviews Yes Yes authorization.k8s.io/v1, authorization.openshift.io/v1
subscriptions Yes No operators.coreos.com/v1alpha1
templateinstances Yes No template.openshift.io/v1
templates Yes No template.openshift.io/v1
thanosrulers Yes No monitoring.coreos.com/v1
tokenreviews Yes Yes authentication.k8s.io/v1, oauth.openshift.io/v1
tuneds Yes No tuned.openshift.io/v1
useridentitymappings Yes No user.openshift.io/v1
useroauthaccesstokens Yes No oauth.openshift.io/v1
users Yes No user.openshift.io/v1
validatingwebhookconfigurations Yes Yes admissionregistration.k8s.io/v1
volumeattachments Yes Yes storage.k8s.io/v1
volumesnapshotclasses Yes No snapshot.storage.k8s.io/v1
volumesnapshotcontents Yes No snapshot.storage.k8s.io/v1
volumesnapshots Yes No snapshot.storage.k8s.io/v1