Thursday, December 02, 2021

The Lowdown on Downward API in OpenShift

 


A customer approached me recently with a use case where they needed a container running in OpenShift to know the hostname of the node it was running on.  They had found that the usual hostname file was not present on their Red Hat CoreOS nodes, so they were not certain how to derive the hostname value when they launched the custom daemonset they had built.  Enter the downward API in OpenShift.

The downward API is a mechanism that allows containers to consume information about the pod they run in without having to integrate directly with the OpenShift API.  Such information includes items like the pod's name, namespace, and resource values.  Containers can consume information from the downward API through environment variables or through files in a volume.

Let's go ahead and demonstrate the capabilities of the downward API with a simple example of how it can be used.  First let's create the following downward-secret.yaml file, which will be used in our demonstration.  The secret is just a basic secret, nothing exciting:

$ cat << EOF > downward-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: downwardsecret
data:
  password: cGFzc3dvcmQ=
  username: ZGV2ZWxvcGVy
type: kubernetes.io/basic-auth
EOF

Now let's create the secret on the OpenShift cluster:

$ oc create -f downward-secret.yaml
secret/downwardsecret created
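
As a quick aside, the data values in the secret are just base64-encoded strings, so if you want to confirm what we stored you can decode them locally.  This step is optional and not part of the demo itself:

$ echo 'ZGV2ZWxvcGVy' | base64 -d
developer
$ echo 'cGFzc3dvcmQ=' | base64 -d
password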

Next let's create the following downward-pod.yaml file:

$ cat << EOF > downward-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-pod
spec:
  containers:
    - name: busybox-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
      - while true; do
          echo -en '\n';
          printenv NODENAME HOSTIP SERVICEACCT NAMESPACE;
          printenv DOWNWARD_SECRET;
          sleep 10;
        done;
      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      volumeMounts:
        - name: downwardinfo
          mountPath: /etc/downwardinfo
          readOnly: false
          
      env:
        - name: NODENAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: HOSTIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: SERVICEACCT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DOWNWARD_SECRET
          valueFrom:
            secretKeyRef:
              name: downwardsecret
              key: username
  volumes:
    - name: downwardinfo
      downwardAPI:
        items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: busybox-container
              resource: limits.cpu
          - path: "cpu_request"
            resourceFieldRef:
              containerName: busybox-container
              resource: requests.cpu
          - path: "mem_limit"
            resourceFieldRef:
              containerName: busybox-container
              resource: limits.memory
          - path: "mem_request"
            resourceFieldRef:
              containerName: busybox-container
              resource: requests.memory
EOF

Let's quickly walk through the contents of that file, which creates a pod called downward-pod running a container called busybox-container from the busybox image.


Under the container section we also defined some resources and added a volume mount.  The volume mount will be used to mount our downward API volume files, which will contain the resource values we defined.  Those files will get mounted under the path /etc/downwardinfo inside the container:

      resources:
        requests:
          memory: "32Mi"
          cpu: "125m"
        limits:
          memory: "64Mi"
          cpu: "250m"
      volumeMounts:
        - name: downwardinfo
          mountPath: /etc/downwardinfo
          readOnly: false

Next there is a section where we defined some environment variables that reference additional downward API values.  There is also a variable that references the downwardsecret.  All of these variables will get passed into the container to be consumed by whatever processes require them:

        env:
        - name: NODENAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: HOSTIP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP
        - name: SERVICEACCT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: DOWNWARD_SECRET
          valueFrom:
            secretKeyRef:
              name: downwardsecret
              key: username
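
The fieldPath values above are only a subset of what the downward API can expose.  As a rough sketch, and not part of our demo pod, environment variables can also reference things like the pod name, the pod IP and individual labels (the APP_LABEL entry below assumes the pod actually carries a label named app):

        env:
        - name: PODNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: PODIP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: APP_LABEL
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['app']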

And finally there is a volumes section which defines the file names and the resource fields for the downwardinfo files that we want to pass into the container:

  volumes:
    - name: downwardinfo
      downwardAPI:
        items:
          - path: "cpu_limit"
            resourceFieldRef:
              containerName: busybox-container
              resource: limits.cpu
          - path: "cpu_request"
            resourceFieldRef:
              containerName: busybox-container
              resource: requests.cpu
          - path: "mem_limit"
            resourceFieldRef:
              containerName: busybox-container
              resource: limits.memory
          - path: "mem_request"
            resourceFieldRef:
              containerName: busybox-container
              resource: requests.memory
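
Note that downwardAPI volume items are not limited to resourceFieldRef.  As a small sketch, again not part of our demo pod, pod metadata such as labels and annotations can also be projected as files using fieldRef:

  volumes:
    - name: downwardinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
          - path: "annotations"
            fieldRef:
              fieldPath: metadata.annotations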


Now that we have an idea of what the downward-pod.yaml does, let's go ahead and run the pod:

$ oc create -f downward-pod.yaml 
pod/downward-pod created
$ oc get pod
NAME           READY   STATUS    RESTARTS   AGE
downward-pod   1/1     Running   0          6s

With the pod running we can now validate the downward API variables and volume files we set.  First let's just look at the pod log and see if the variables we defined and printed in our argument loop show the right values:

$ oc logs downward-pod

master-0.kni20.schmaustech.com
192.168.0.210
default
default
developer

master-0.kni20.schmaustech.com
192.168.0.210
default
default
developer


The variables look to be populated correctly with the right hostname, host IP address, namespace and service account.  Even the username from our secret is showing up correctly as developer.  Since that looks correct, let's move on and execute a shell in the pod:

$ oc exec -it downward-pod sh
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl exec [POD] -- [COMMAND] instead.
/ # 

Once inside, let's print the environment and see if our variables are listed there as well:

/ # printenv
KUBERNETES_PORT=tcp://172.30.0.1:443
KUBERNETES_SERVICE_PORT=443
HOSTNAME=downward-pod
SHLVL=1
HOME=/root
TERM=xterm
KUBERNETES_PORT_443_TCP_ADDR=172.30.0.1
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
KUBERNETES_PORT_443_TCP_PORT=443
KUBERNETES_PORT_443_TCP_PROTO=tcp
HOSTIP=192.168.0.210
DOWNWARD_SECRET=developer
NAMESPACE=default
KUBERNETES_PORT_443_TCP=tcp://172.30.0.1:443
KUBERNETES_SERVICE_PORT_HTTPS=443
PWD=/
KUBERNETES_SERVICE_HOST=172.30.0.1
SERVICEACCT=default
NSS_SDB_USE_CACHE=no
NODENAME=master-0.kni20.schmaustech.com

Again the environment variables we defined are showing up and could be consumed by a process within the container. 
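
Just to illustrate what that consumption could look like, any process in the container sees these as ordinary environment variables.  A hypothetical startup script could template them into a configuration file, or more trivially echo them back, which prints the same node name, namespace and host IP we saw above:

/ # echo "node=$NODENAME namespace=$NAMESPACE ip=$HOSTIP"
node=master-0.kni20.schmaustech.com namespace=default ip=192.168.0.210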

Now let's explore our volume files and confirm they were set too.  We can see the /etc/downwardinfo directory exists and contains four files:

/ # ls /etc/downwardinfo
cpu_limit    cpu_request  mem_limit    mem_request

Let's look at the contents of the four files:

/ # echo "$(cat /etc/downwardinfo/cpu_limit)"
1
/ # echo "$(cat /etc/downwardinfo/cpu_request)"
1
/ # echo "$(cat /etc/downwardinfo/mem_limit)"
67108864
/ # echo "$(cat /etc/downwardinfo/mem_request)"
33554432


The memory values correspond exactly to the resource values we defined in the downward-pod.yaml file that launched this pod, reported in bytes: 64Mi is 67108864 bytes and 32Mi is 33554432 bytes.  The CPU files both show 1 because the downward API divides the CPU quantity by a default divisor of one core and rounds up, so both the 125m request and the 250m limit get reported as 1.
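
If you would rather see the CPU values in millicores, resourceFieldRef accepts a divisor field.  A minimal sketch of what the cpu_limit item could look like with a 1m divisor, which is not something we used in the demo pod, is shown below; with it the cpu_limit file would contain 250 instead of 1:

          - path: "cpu_limit"
            resourceFieldRef:
              containerName: busybox-container
              resource: limits.cpu
              divisor: 1m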

At this point we have validated that the downward API does indeed provide information to the pod and can present it either as an environment variable or as a volume file.  So if anyone ever asks how to get the hostname of the node a pod is running on as an environment variable inside the pod, just keep the downward API in mind.
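
To tie that back to the original use case, the sketch below shows roughly what the relevant portion of a daemonset could look like.  The names here are made up for illustration, but the fieldRef is the same spec.nodeName we used above, so every pod in the daemonset would see the hostname of the node it landed on:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-aware-daemon
spec:
  selector:
    matchLabels:
      app: node-aware-daemon
  template:
    metadata:
      labels:
        app: node-aware-daemon
    spec:
      containers:
        - name: worker
          image: k8s.gcr.io/busybox
          command: [ "sh", "-c", "echo running on $NODENAME; while true; do sleep 3600; done" ]
          env:
            - name: NODENAME
              valueFrom:
                fieldRef:
                  fieldPath: spec.nodeName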