Wednesday, May 10, 2023

Experimental FIPS on Red Hat Device Edge with MicroShift

Federal agencies purchasing cryptographic-based security systems must confirm that an associated FIPS 140-2 or FIPS 140-3 certificate exists. This procurement "check-box" can be a deal breaker: a claim of "designed for FIPS" or "FIPS ready" is not sufficient to pass the requirement, and if a FIPS certification does not exist it will often mean no sale for the vendor of choice. Many commercial and private organizations also perceive a product as having an advantage when it carries a FIPS certification. From a Red Hat Enterprise Linux perspective, whether a product is FIPS certified or not can be found in the following knowledge base article. The rest of this blog details how to install and demonstrate that FIPS is in use with a Red Hat Device Edge 9.2 rpm-ostree image that contains MicroShift 4.13. Bear in mind, as indicated in the knowledge base article above, that Red Hat Enterprise Linux 9.2 still needs to go through the proper FIPS 140-3 certification process. What is provided here merely demonstrates forward-looking thinking for when FIPS compliance is achieved.

To get familiar with Red Hat Device Edge and MicroShift, please reference the following blog post. We will use the same steps from that previous blog to build and produce the images for our FIPS experiment. The only difference is that instead of Red Hat Device Edge 8.7 and MicroShift 4.12 we are using the newer releases of those components. Keep in mind that for building the rpm-ostree image we need to reposync down the updated Fast Data Path and Red Hat OpenShift repositories for the new versions. Also, for packaging the rpm-ostree image into the boot.iso with the recook script, we have to update the references of 8.7 to 9.2, which include the boot.iso name and disk labels. Otherwise everything else follows the same logical steps.
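The reposync step from the previous blog can be sketched roughly as follows. The repository IDs below are my assumption of the 4.13 / RHEL 9 equivalents of the repos used previously; adjust them to match the repos enabled on your subscription, and the download path is arbitrary:

```shell
# Sync the updated OpenShift 4.13 and Fast Data Path repos for RHEL 9
# (hypothetical repo IDs; requires an entitled RHEL system with the repos enabled)
for repo in rhocp-4.13-for-rhel-9-x86_64-rpms fast-datapath-for-rhel-9-x86_64-rpms; do
  if command -v reposync >/dev/null; then
    reposync --arch=x86_64 --download-metadata --repoid="$repo" -p /var/repos \
      || echo "sync failed for $repo (check entitlement and repo id)"
  else
    echo "reposync not installed; would sync $repo"
  fi
done
```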

FIPS mode on Red Hat Device Edge begins, just like with Red Hat Enterprise Linux, at install time when we boot the installation media. In our case we will take the rhde-ztp.iso image we created in the blog referenced above and use it as our deployment image on our device. When we power on the device, however, we need to edit the kernel boot arguments and add the fips=1 argument at the end.

linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64:/ks.cfg fips=1

We could also make this permanent in the grub.cfg file we create in the blog referenced above, before we generate the rhde-ztp.iso.

set default="1"

function load_video {
  insmod efi_gop
  insmod efi_uga
  insmod video_bochs
  insmod video_cirrus
  insmod all_video
}

load_video
set gfxpayload=keep
insmod gzio
insmod part_gpt
insmod ext2

set timeout=60
### END /etc/grub.d/00_header ###

search --no-floppy --set=root -l 'RHEL-9-2-0-BaseOS-x86_64'

### BEGIN /etc/grub.d/10_linux ###
menuentry 'Install Red Hat Enterprise Linux 9.2' --class fedora --class gnu-linux --class gnu --class os {
	linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.ks=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64:/ks.cfg fips=1
	initrdefi /images/pxeboot/initrd.img
}
menuentry 'Test this media & install Red Hat Enterprise Linux 9.2' --class fedora --class gnu-linux --class gnu --class os {
	linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 rd.live.check fips=1
	initrdefi /images/pxeboot/initrd.img
}
submenu 'Troubleshooting -->' {
	menuentry 'Install Red Hat Enterprise Linux 9.2 in text mode' --class fedora --class gnu-linux --class gnu --class os {
		linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.text quiet
		initrdefi /images/pxeboot/initrd.img
	}
	menuentry 'Rescue a Red Hat Enterprise Linux system' --class fedora --class gnu-linux --class gnu --class os {
		linuxefi /images/pxeboot/vmlinuz inst.stage2=hd:LABEL=RHEL-9-2-0-BaseOS-x86_64 inst.rescue quiet
		initrdefi /images/pxeboot/initrd.img
	}
}

Once the system boots from the rhde-ztp.iso with the FIPS argument, the installation will proceed using the appropriate FIPS cryptography libraries and the rpm-ostree image will be applied to the device. After the system reboots, MicroShift will start automatically, and we can begin verifying that FIPS mode is enabled not only at the Red Hat Device Edge layer but also within a container deployed on MicroShift.

Now let's begin validating that FIPS is indeed running on the edge device. First, let's log in to the host and check from the Red Hat Device Edge perspective:

# cat /etc/redhat-release
Red Hat Enterprise Linux release 9.2 (Plow)
# fips-mode-setup --check
FIPS mode is enabled.
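The fips-mode-setup tool aggregates a few sources of truth; a quick cross-check is to read the kernel flag directly. This is only a sketch of that check: the /proc path is standard on RHEL, and on a non-FIPS machine it simply reports disabled:

```shell
# /proc/sys/crypto/fips_enabled is 1 when the kernel is running in FIPS mode
if [ "$(cat /proc/sys/crypto/fips_enabled 2>/dev/null)" = "1" ]; then
  echo "kernel FIPS mode: enabled"
else
  echo "kernel FIPS mode: disabled"
fi
```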

At the OS level, FIPS is indeed enabled. Now let's also look at the OpenSSL libraries installed on the edge device:

# rpm -qa|grep openssl
openssl-libs-3.0.7-6.el9_2.x86_64
openssl-3.0.7-6.el9_2.x86_64
xmlsec1-openssl-1.2.29-9.el9.x86_64

Next let's confirm MicroShift is up and running on the FIPS enabled edge device.

# export KUBECONFIG=/var/lib/microshift/resources/kubeadmin/kubeconfig
# oc get pods -A
NAMESPACE                  NAME                                 READY   STATUS    RESTARTS      AGE
openshift-dns              dns-default-pmncq                    2/2     Running   0             13h
openshift-dns              node-resolver-qh67d                  1/1     Running   0             13h
openshift-ingress          router-default-6857569799-njdsx      1/1     Running   0             13h
openshift-ovn-kubernetes   ovnkube-master-s5dpl                 4/4     Running   0             13h
openshift-ovn-kubernetes   ovnkube-node-m2bv6                   1/1     Running   1 (13h ago)   13h
openshift-service-ca       service-ca-7f49b8c7f5-rbsgf          1/1     Running   0             13h
openshift-storage          topolvm-controller-f58fcd7cb-6sggd   4/4     Running   0             13h
openshift-storage          topolvm-node-ddkhj                   4/4     Running   0             13h

Next let's create a simple deployment yaml referencing the nodejs minimal image from Red Hat:

# cat << EOF > ~/node-ubi.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: simple-deployment
  namespace: simple
spec:
  replicas: 1
  selector:
    matchLabels:
      app: simple-deployment
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: simple-deployment
        deploymentconfig: simple-deployment
    spec:
      securityContext:
        seccompProfile:
          type: RuntimeDefault
      containers:
      - image: registry.access.redhat.com/ubi8/nodejs-16-minimal
        imagePullPolicy: Always
        name: simple-deployment
        command:
        - /bin/sh
        - -c
        - |
          sleep infinity
        resources: {}
EOF

Now we can create a namespace and then deploy the simple deployment yaml we created onto MicroShift:

# oc create namespace simple
namespace/simple created
# oc create -f node-ubi.yaml 
deployment.apps/simple-deployment created
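Rather than polling oc get pods, we could also block until the Deployment reports ready. A sketch, assuming the same oc client and kubeconfig as above (the timeout value is arbitrary, and the guard is only there in case the client is absent):

```shell
# Wait for the Deployment to become Available (up to two minutes)
if command -v oc >/dev/null; then
  oc wait --for=condition=Available deployment/simple-deployment -n simple --timeout=120s \
    || echo "wait failed or timed out"
else
  echo "oc client not available in this environment"
fi
```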

Let's verify our simple deployment nodejs pod is running:

# oc get pods -A
NAMESPACE                  NAME                                 READY   STATUS    RESTARTS      AGE
openshift-dns              dns-default-pmncq                    2/2     Running   0             13h
openshift-dns              node-resolver-qh67d                  1/1     Running   0             13h
openshift-ingress          router-default-6857569799-njdsx      1/1     Running   0             13h
openshift-ovn-kubernetes   ovnkube-master-s5dpl                 4/4     Running   0             13h
openshift-ovn-kubernetes   ovnkube-node-m2bv6                   1/1     Running   1 (13h ago)   13h
openshift-service-ca       service-ca-7f49b8c7f5-rbsgf          1/1     Running   0             13h
openshift-storage          topolvm-controller-f58fcd7cb-6sggd   4/4     Running   0             13h
openshift-storage          topolvm-node-ddkhj                   4/4     Running   0             13h
simple                     simple-deployment-66b9457cb9-v22vj   1/1     Running   0             54s

With the pod running we now need to get into a bash prompt in the simple deployment nodejs pod:

# oc exec -it simple-deployment-66b9457cb9-v22vj -n simple /bin/bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
bash-4.4$ 
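The warning above appears because the exec form without -- is deprecated. The non-deprecated equivalent, using the same pod name from this session (and guarded in case the oc client is absent), looks like:

```shell
# '--' separates oc's own options from the command to run inside the pod
if command -v oc >/dev/null; then
  oc exec -it simple-deployment-66b9457cb9-v22vj -n simple -- /bin/bash \
    || echo "exec failed (is the pod running?)"
else
  echo "oc client not available"
fi
```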

From the command prompt in the pod let's first check the openssl libraries:

bash-4.4$ rpm -qa|grep openssl
openssl-libs-1.1.1k-9.el8_7.x86_64
openssl-1.1.1k-9.el8_7.x86_64

Now let's run a few commands to show that FIPS is indeed enabled in the OpenSSL libraries:

bash-4.4$ openssl version
OpenSSL 1.1.1k  FIPS 25 Mar 2021
bash-4.4$ node -p 'crypto.getFips()'
1
bash-4.4$ node -p 'crypto.createHash("md5")'
node:internal/crypto/hash:71
  this[kHandle] = new _Hash(algorithm, xofLen);
                  ^

Error: error:060800C8:digital envelope routines:EVP_DigestInit_ex:disabled for FIPS
    at new Hash (node:internal/crypto/hash:71:19)
    at Object.createHash (node:crypto:130:10)
    at [eval]:1:8
    at Script.runInThisContext (node:vm:129:12)
    at Object.runInThisContext (node:vm:313:38)
    at node:internal/process/execution:79:19
    at [eval]-wrapper:6:22
    at evalScript (node:internal/process/execution:78:60)
    at node:internal/main/eval_string:27:3 {
  library: 'digital envelope routines',
  function: 'EVP_DigestInit_ex',
  reason: 'disabled for FIPS',
  code: 'ERR_OSSL_EVP_DISABLED_FOR_FIPS'
}

Observing the output from the three commands, we can see the following:

  • The OpenSSL version string shows we are using a FIPS build
  • When we run the node command to query the FIPS crypto state, we get a value of 1, which indicates FIPS is enabled
  • Finally, when we try to create an md5 hash with the node command, we are told we cannot use it because FIPS is enabled
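The md5 restriction can also be cross-checked at the OpenSSL command-line level inside the pod. A small probe, assuming the openssl client is present; it simply reports which way the environment falls, so it is harmless to run on a non-FIPS machine:

```shell
# In FIPS mode MD5 should be rejected while SHA-256 still succeeds
if command -v openssl >/dev/null; then
  if echo "test" | openssl md5 >/dev/null 2>&1; then
    echo "md5 allowed (FIPS not enforced here)"
  else
    echo "md5 blocked by FIPS"
  fi
  echo "test" | openssl sha256 >/dev/null && echo "sha256 works"
else
  echo "openssl not installed"
fi
```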

This confirms that FIPS is enabled not only from the Red Hat Device Edge OS perspective but also within MicroShift, demonstrating that the standards can technically be configured and deployed on an rpm-ostree type image for Red Hat Device Edge devices.