
Saturday, May 21, 2022

Check For Expired Certificates on OpenShift


OpenShift has a lot of certificates associated with the services it runs.  With that in mind it makes sense to check on those certificates every once in a while with some kind of simple report.   I have had customers make this request on occasion, and it got me thinking about a quick and dirty way to visualize this.   The following blog shows the fruits of that simple task in the form of a short bash script.

First let's go ahead and create the certs-expired.sh script: 

$ cat << 'EOF' > ~/certs-expired.sh
#!/bin/bash

format="%-8s%-8s%-60s%-26s%-60s\n"
printf "$format" STATE DAYS NAME EXPIRY NAMESPACE
printf "$format" ----- ---- ---- ------ ---------

# List every kubernetes.io/tls secret cluster-wide as: namespace name base64-encoded-cert
oc get secrets -A -o go-template='{{range .items}}{{if eq .type "kubernetes.io/tls"}}{{.metadata.namespace}}{{" "}}{{.metadata.name}}{{" "}}{{index .data "tls.crt"}}{{"\n"}}{{end}}{{end}}' | while read namespace name cert
do
  # Decode the certificate and pull out its expiry date
  certdate=$(echo "$cert" | base64 -d | openssl x509 -noout -enddate | cut -d= -f2)
  epochcertdate=$(date -d "$certdate" +"%s")
  currentdate=$(date +%s)
  if ((epochcertdate > currentdate)); then
    datediff=$((epochcertdate-currentdate))
    state="OK"
  else
    state="EXPIRED"
    datediff=$((currentdate-epochcertdate))
  fi
  days=$((datediff/86400))
  printf "$format" "$state" "$days" "$name" "$certdate" "$namespace" 
done

EOF

The script assumes that the oc binary is in the current PATH and that the kubeconfig has been set.   This ensures that the oc command inside the script can pull the appropriate data.   I chose to invoke the script with bash, but we could also have set the execute permission on the file.
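
Before running it, a quick sanity check along these lines can confirm both assumptions hold (the kubeconfig path here is just a placeholder for your own):

$ export KUBECONFIG=~/mycluster/kubeconfig
$ which oc
/usr/local/bin/oc
$ oc whoami
system:admin

With that confirmed we can execute the script and see the output below: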

$ bash certs-expired.sh 
STATE   DAYS    NAME                                                        EXPIRY                    NAMESPACE                                                   
-----   ----    ----                                                        ------                    ---------                                                   
OK      715     openshift-apiserver-operator-serving-cert                   May  5 21:33:47 2024 GMT  openshift-apiserver-operator                                
OK      3635    etcd-client                                                 May  3 21:13:54 2032 GMT  openshift-apiserver                                         
OK      715     serving-cert                                                May  5 21:33:52 2024 GMT  openshift-apiserver                                         
OK      715     serving-cert                                                May  5 21:33:59 2024 GMT  openshift-authentication-operator                           
OK      715     v4-0-config-system-serving-cert                             May  5 21:33:49 2024 GMT  openshift-authentication                                    
OK      715     cloud-credential-operator-serving-cert                      May  5 21:33:50 2024 GMT  openshift-cloud-credential-operator                         
OK      715     machine-approver-tls                                        May  5 21:33:48 2024 GMT  openshift-cluster-machine-approver                          
OK      715     node-tuning-operator-tls                                    May  5 21:33:47 2024 GMT  openshift-cluster-node-tuning-operator                      
OK      715     samples-operator-tls                                        May  5 21:37:44 2024 GMT  openshift-cluster-samples-operator                          
OK      715     cluster-storage-operator-serving-cert                       May  5 21:33:55 2024 GMT  openshift-cluster-storage-operator                          
OK      715     csi-snapshot-webhook-secret                                 May  5 21:33:47 2024 GMT  openshift-cluster-storage-operator                          
OK      715     serving-cert                                                May  5 21:33:54 2024 GMT  openshift-cluster-storage-operator                          
OK      715     cluster-version-operator-serving-cert                       May  5 21:33:52 2024 GMT  openshift-cluster-version                                   
OK      15      kube-controller-manager-client-cert-key                     Jun  5 21:33:41 2022 GMT  openshift-config-managed                                    
OK      15      kube-scheduler-client-cert-key                              Jun  5 21:33:34 2022 GMT  openshift-config-managed                                    
OK      715     config-operator-serving-cert                                May  5 21:33:47 2024 GMT  openshift-config-operator                                   
OK      3635    etcd-client                                                 May  3 21:13:54 2032 GMT  openshift-config                                            
OK      3635    etcd-metric-client                                          May  3 21:13:54 2032 GMT  openshift-config                                            
OK      3635    etcd-metric-signer                                          May  3 21:13:54 2032 GMT  openshift-config                                            
OK      3635    etcd-signer                                                 May  3 21:13:54 2032 GMT  openshift-config                                            
OK      715     serving-cert                                                May  5 21:41:37 2024 GMT  openshift-console-operator                                  
OK      715     console-serving-cert                                        May  5 21:42:15 2024 GMT  openshift-console                                           
OK      715     openshift-controller-manager-operator-serving-cert          May  5 21:33:47 2024 GMT  openshift-controller-manager-operator                       
OK      715     serving-cert                                                May  5 21:33:56 2024 GMT  openshift-controller-manager                                
OK      715     metrics-tls                                                 May  5 21:33:58 2024 GMT  openshift-dns-operator                                      
OK      715     dns-default-metrics-tls                                     May  5 21:34:59 2024 GMT  openshift-dns                                               
OK      3635    etcd-client                                                 May  3 21:13:54 2032 GMT  openshift-etcd-operator                                     
OK      715     etcd-operator-serving-cert                                  May  5 21:33:57 2024 GMT  openshift-etcd-operator                                     
OK      3635    etcd-client                                                 May  3 21:13:54 2032 GMT  openshift-etcd                                              
OK      1080    etcd-peer-asus-vm1.kni.schmaustech.com                      May  5 21:51:28 2025 GMT  openshift-etcd                                              
OK      1080    etcd-peer-asus1-vm2.kni.schmaustech.com                     May  5 21:33:23 2025 GMT  openshift-etcd                                              
OK      1080    etcd-peer-asus1-vm3.kni.schmaustech.com                     May  5 21:33:24 2025 GMT  openshift-etcd                                              
OK      1080    etcd-serving-asus-vm1.kni.schmaustech.com                   May  5 21:51:28 2025 GMT  openshift-etcd                                              
OK      1080    etcd-serving-asus1-vm2.kni.schmaustech.com                  May  5 21:33:23 2025 GMT  openshift-etcd                                              
OK      1080    etcd-serving-asus1-vm3.kni.schmaustech.com                  May  5 21:33:24 2025 GMT  openshift-etcd                                              
OK      1080    etcd-serving-metrics-asus-vm1.kni.schmaustech.com           May  5 21:51:27 2025 GMT  openshift-etcd                                              
OK      1080    etcd-serving-metrics-asus1-vm2.kni.schmaustech.com          May  5 21:33:23 2025 GMT  openshift-etcd                                              
OK      1080    etcd-serving-metrics-asus1-vm3.kni.schmaustech.com          May  5 21:33:24 2025 GMT  openshift-etcd                                              
OK      715     serving-cert                                                May  5 21:33:59 2024 GMT  openshift-etcd                                              
OK      715     image-registry-operator-tls                                 May  5 21:33:58 2024 GMT  openshift-image-registry                                    
OK      715     metrics-tls                                                 May  5 21:33:55 2024 GMT  openshift-ingress-operator                                  
OK      715     router-ca                                                   May  5 21:35:59 2024 GMT  openshift-ingress-operator                                  
OK      715     router-certs-default                                        May  5 21:36:01 2024 GMT  openshift-ingress                                           
OK      715     router-metrics-certs-default                                May  5 21:36:00 2024 GMT  openshift-ingress                                           
OK      715     openshift-insights-serving-cert                             May  5 21:33:51 2024 GMT  openshift-insights                                          
OK      15      aggregator-client-signer                                    Jun  6 16:21:59 2022 GMT  openshift-kube-apiserver-operator                           
OK      715     kube-apiserver-operator-serving-cert                        May  5 21:33:54 2024 GMT  openshift-kube-apiserver-operator                           
OK      350     kube-apiserver-to-kubelet-signer                            May  6 21:09:57 2023 GMT  openshift-kube-apiserver-operator                           
OK      350     kube-control-plane-signer                                   May  6 21:09:57 2023 GMT  openshift-kube-apiserver-operator                           
OK      3635    loadbalancer-serving-signer                                 May  3 21:09:52 2032 GMT  openshift-kube-apiserver-operator                           
OK      3635    localhost-recovery-serving-signer                           May  3 21:33:29 2032 GMT  openshift-kube-apiserver-operator                           
OK      3635    localhost-serving-signer                                    May  3 21:09:50 2032 GMT  openshift-kube-apiserver-operator                           
OK      105     node-system-admin-client                                    Sep  3 21:33:40 2022 GMT  openshift-kube-apiserver-operator                           
OK      350     node-system-admin-signer                                    May  6 21:33:29 2023 GMT  openshift-kube-apiserver-operator                           
OK      3635    service-network-serving-signer                              May  3 21:09:51 2032 GMT  openshift-kube-apiserver-operator                           
OK      15      aggregator-client                                           Jun  6 16:21:59 2022 GMT  openshift-kube-apiserver                                    
OK      15      check-endpoints-client-cert-key                             Jun  5 21:33:46 2022 GMT  openshift-kube-apiserver                                    
OK      15      control-plane-node-admin-client-cert-key                    Jun  5 21:33:53 2022 GMT  openshift-kube-apiserver                                    
OK      3635    etcd-client                                                 May  3 21:13:54 2032 GMT  openshift-kube-apiserver                                    
OK      3635    etcd-client-10                                              May  3 21:13:54 2032 GMT  openshift-kube-apiserver                                    
OK      3635    etcd-client-11                                              May  3 21:13:54 2032 GMT  openshift-kube-apiserver                                    
OK      3635    etcd-client-12                                              May  3 21:13:54 2032 GMT  openshift-kube-apiserver                                    
OK      3635    etcd-client-8                                               May  3 21:13:54 2032 GMT  openshift-kube-apiserver                                    
OK      3635    etcd-client-9                                               May  3 21:13:54 2032 GMT  openshift-kube-apiserver                                    
OK      15      external-loadbalancer-serving-certkey                       Jun  5 21:33:52 2022 GMT  openshift-kube-apiserver                                    
OK      15      internal-loadbalancer-serving-certkey                       Jun  5 21:33:34 2022 GMT  openshift-kube-apiserver                                    
OK      15      kubelet-client                                              Jun  5 21:33:34 2022 GMT  openshift-kube-apiserver                                    
OK      3635    localhost-recovery-serving-certkey                          May  3 21:33:29 2032 GMT  openshift-kube-apiserver                                    
OK      3635    localhost-recovery-serving-certkey-10                       May  3 21:33:29 2032 GMT  openshift-kube-apiserver                                    
OK      3635    localhost-recovery-serving-certkey-11                       May  3 21:33:29 2032 GMT  openshift-kube-apiserver                                    
OK      3635    localhost-recovery-serving-certkey-12                       May  3 21:33:29 2032 GMT  openshift-kube-apiserver                                    
OK      3635    localhost-recovery-serving-certkey-8                        May  3 21:33:29 2032 GMT  openshift-kube-apiserver                                    
OK      3635    localhost-recovery-serving-certkey-9                        May  3 21:33:29 2032 GMT  openshift-kube-apiserver                                    
OK      15      localhost-serving-cert-certkey                              Jun  5 21:33:34 2022 GMT  openshift-kube-apiserver                                    
OK      15      service-network-serving-certkey                             Jun  5 21:33:33 2022 GMT  openshift-kube-apiserver                                    
OK      15      csr-signer                                                  Jun  6 16:26:40 2022 GMT  openshift-kube-controller-manager-operator                  
OK      45      csr-signer-signer                                           Jul  6 16:22:14 2022 GMT  openshift-kube-controller-manager-operator                  
OK      715     kube-controller-manager-operator-serving-cert               May  5 21:33:57 2024 GMT  openshift-kube-controller-manager-operator                  
OK      15      csr-signer                                                  Jun  6 16:26:40 2022 GMT  openshift-kube-controller-manager                           
OK      15      kube-controller-manager-client-cert-key                     Jun  5 21:33:41 2022 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert                                                May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert-2                                              May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert-3                                              May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert-4                                              May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert-5                                              May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert-6                                              May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     serving-cert-7                                              May  5 21:33:51 2024 GMT  openshift-kube-controller-manager                           
OK      715     kube-scheduler-operator-serving-cert                        May  5 21:33:50 2024 GMT  openshift-kube-scheduler-operator                           
OK      15      kube-scheduler-client-cert-key                              Jun  5 21:33:34 2022 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert                                                May  5 21:33:59 2024 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert-3                                              May  5 21:33:59 2024 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert-4                                              May  5 21:33:59 2024 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert-5                                              May  5 21:33:59 2024 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert-6                                              May  5 21:33:59 2024 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert-7                                              May  5 21:33:59 2024 GMT  openshift-kube-scheduler                                    
OK      715     serving-cert                                                May  5 21:34:00 2024 GMT  openshift-kube-storage-version-migrator-operator            
OK      725     diskmaker-metric-serving-cert                               May 15 23:33:46 2024 GMT  openshift-local-storage                                     
OK      715     baremetal-operator-webhook-server-cert                      May  5 21:36:34 2024 GMT  openshift-machine-api                                       
OK      715     cluster-autoscaler-operator-cert                            May  5 21:34:01 2024 GMT  openshift-machine-api                                       
OK      715     cluster-baremetal-operator-tls                              May  5 21:33:58 2024 GMT  openshift-machine-api                                       
OK      715     cluster-baremetal-webhook-server-cert                       May  5 21:33:48 2024 GMT  openshift-machine-api                                       
OK      715     machine-api-controllers-tls                                 May  5 21:33:47 2024 GMT  openshift-machine-api                                       
OK      715     machine-api-operator-tls                                    May  5 21:33:56 2024 GMT  openshift-machine-api                                       
OK      715     machine-api-operator-webhook-cert                           May  5 21:33:53 2024 GMT  openshift-machine-api                                       
OK      715     proxy-tls                                                   May  5 21:34:00 2024 GMT  openshift-machine-config-operator                           
OK      715     marketplace-operator-metrics                                May  5 21:33:50 2024 GMT  openshift-marketplace                                       
OK      715     alertmanager-main-tls                                       May  5 21:45:20 2024 GMT  openshift-monitoring                                        
OK      715     cluster-monitoring-operator-tls                             May  5 21:33:52 2024 GMT  openshift-monitoring                                        
OK      715     grafana-tls                                                 May  5 21:45:20 2024 GMT  openshift-monitoring                                        
OK      715     kube-state-metrics-tls                                      May  5 21:35:59 2024 GMT  openshift-monitoring                                        
OK      715     node-exporter-tls                                           May  5 21:35:59 2024 GMT  openshift-monitoring                                        
OK      715     openshift-state-metrics-tls                                 May  5 21:35:58 2024 GMT  openshift-monitoring                                        
OK      715     prometheus-adapter-tls                                      May  5 21:35:59 2024 GMT  openshift-monitoring                                        
OK      715     prometheus-k8s-thanos-sidecar-tls                           May  5 21:45:22 2024 GMT  openshift-monitoring                                        
OK      715     prometheus-k8s-tls                                          May  5 21:45:21 2024 GMT  openshift-monitoring                                        
OK      715     prometheus-operator-tls                                     May  5 21:35:43 2024 GMT  openshift-monitoring                                        
OK      715     telemeter-client-tls                                        May  5 21:37:44 2024 GMT  openshift-monitoring                                        
OK      715     thanos-querier-tls                                          May  5 21:35:58 2024 GMT  openshift-monitoring                                        
OK      715     metrics-daemon-secret                                       May  5 21:33:56 2024 GMT  openshift-multus                                            
OK      715     multus-admission-controller-secret                          May  5 21:33:48 2024 GMT  openshift-multus                                            
OK      3635    etcd-client                                                 May  3 21:13:54 2032 GMT  openshift-oauth-apiserver                                   
OK      715     serving-cert                                                May  5 21:34:01 2024 GMT  openshift-oauth-apiserver                                   
OK      715     catalog-operator-serving-cert                               May  5 21:33:47 2024 GMT  openshift-operator-lifecycle-manager                        
OK      715     olm-operator-serving-cert                                   May  5 21:33:48 2024 GMT  openshift-operator-lifecycle-manager                        
OK      714     packageserver-service-cert                                  May  4 21:34:44 2024 GMT  openshift-operator-lifecycle-manager                        
OK      0       pprof-cert                                                  May 21 18:30:03 2022 GMT  openshift-operator-lifecycle-manager                        
OK      3635    ovn-ca                                                      May  3 21:27:45 2032 GMT  openshift-ovn-kubernetes                                    
OK      167     ovn-cert                                                    Nov  5 09:27:45 2022 GMT  openshift-ovn-kubernetes                                    
OK      715     ovn-master-metrics-cert                                     May  5 21:33:53 2024 GMT  openshift-ovn-kubernetes                                    
OK      715     ovn-node-metrics-cert                                       May  5 21:33:49 2024 GMT  openshift-ovn-kubernetes                                    
OK      3635    signer-ca                                                   May  3 21:27:46 2032 GMT  openshift-ovn-kubernetes                                    
OK      167     signer-cert                                                 Nov  5 09:27:46 2022 GMT  openshift-ovn-kubernetes                                    
OK      715     serving-cert                                                May  5 21:33:54 2024 GMT  openshift-service-ca-operator                               
OK      775     signing-key                                                 Jul  4 21:33:37 2024 GMT  openshift-service-ca                                        
OK      725     noobaa-db-serving-cert                                      May 15 23:42:26 2024 GMT  openshift-storage                                           
OK      725     noobaa-mgmt-serving-cert                                    May 15 23:42:26 2024 GMT  openshift-storage                                           
OK      725     noobaa-operator-service-cert                                May 16 06:23:29 2024 GMT  openshift-storage                                           
OK      725     noobaa-s3-serving-cert                                      May 15 23:42:26 2024 GMT  openshift-storage                                           
OK      725     ocs-storagecluster-cos-ceph-rgw-tls-cert                    May 15 23:41:32 2024 GMT  openshift-storage                                           
OK      725     odf-console-serving-cert                                    May 15 23:27:38 2024 GMT  openshift-storage   

The output of the script is simple.  The first column contains the state of the certificate: OK if it is still valid and EXPIRED if it is not.   The next column tells us how many days remain until the certificate expires; for an EXPIRED certificate the number instead shows how many days it has been expired.   The third column gives the certificate's name, the fourth the actual expiry date, and the last column the namespace the certificate lives in.
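
Since the output is plain fixed-width columns it also pipes nicely into standard tools.  For example (just illustrative one-liners), we could show only the expired certificates, or skip the two header lines and list the five certificates closest to expiry:

$ bash certs-expired.sh | grep ^EXPIRED
$ bash certs-expired.sh | tail -n +3 | sort -k2 -n | head -5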

Again, just a simple script, but it provides an example of how we can surface this type of information.  However, if one has a fleet of clusters then configuring the Red Hat Advanced Cluster Management Certificate Policy Controller might be a more effective method for expired certificate management.
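
For reference, a minimal CertificatePolicy sketch is shown below; the name and namespace are illustrative, and minimumDuration sets how much remaining validity is acceptable before the policy flags a certificate:

$ cat << EOF > ~/cert-policy.yaml
apiVersion: policy.open-cluster-management.io/v1
kind: CertificatePolicy
metadata:
  name: certificate-policy-example
  namespace: default
spec:
  namespaceSelector:
    include: ["default"]
  remediationAction: inform
  severity: low
  minimumDuration: 100h
EOF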

Friday, May 06, 2022

Mirroring Operators into Red Hat Quay

 


When dealing with disconnected spoke clusters that are being deployed by Red Hat Advanced Cluster Management, we have to be aware that any operators we want to install into those disconnected spoke clusters also need to be mirrored into our local Red Hat Quay registry for them to be accessible by the spoke cluster. In this blog we will mirror the Red Hat Advanced Cluster Management operator image components, because we need the agent images that would normally get started on a spoke cluster so the spoke cluster can properly join the Red Hat Advanced Cluster Management hub and report in its metrics and status. Note this procedure could be modified to pull in any of the operators normally visible in OpenShift's OperatorHub.

Before we get started we need to ensure we have the following tools available: grpcurl, opm and podman.   To install grpcurl we need to retrieve the proper release binary from the following GitHub repository and extract it:

$ wget -q -O - "https://github.com/fullstorydev/grpcurl/releases/download/v1.8.6/grpcurl_1.8.6_linux_x86_64.tar.gz" | sudo tar -C /usr/local/bin/ -xvz
LICENSE
grpcurl
$ which grpcurl
/usr/local/bin/grpcurl

Next we need to pull the latest opm binary from the OpenShift mirror (in this case 4.10) and extract it:

$ curl https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/latest-4.10/opm-linux.tar.gz | sudo tar -C /usr/local/bin/ -xvz
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
  0 23.7M    0  135k    0     0   151k      0  0:02:40 --:--:--  0:02:40  151kopm
100 23.7M  100 23.7M    0     0  7685k      0  0:00:03  0:00:03 --:--:-- 7685k
$ which opm
/usr/local/bin/opm

And finally we can use dnf to install podman if it is not already there:

$ sudo dnf install podman
Updating Subscription Management repositories.
Last metadata expiration check: 1:32:12 ago on Fri 06 May 2022 11:22:36 AM CDT.
Package podman-1:3.4.2-9.module+el8.5.0+13852+150547f7.x86_64 is already installed.
Dependencies resolved.
Nothing to do.
Complete!
$ which podman
/usr/bin/podman 

Now that we have our tools ready we can begin the process of mirroring the Red Hat Advanced Cluster Management operator and its corresponding images.  The first step is to log in to both the source registry and the target registry.  In my case the source registry is registry.redhat.io and the target registry is my local Quay instance, poc-registry-quay-quay-poc.apps.kni20.schmaustech.com:

$ podman login registry.redhat.io
Username: schmaustech
Password: 
Login Succeeded!

$ podman login poc-registry-quay-quay-poc.apps.kni20.schmaustech.com --tls-verify=false
Username: openshift
Password: 
Login Succeeded!

Next let's determine the list of packages we want to include in our pruned index of operators.  We already know we just want Red Hat Advanced Cluster Management, but this provides the background context on how to list all the available operators in the event one would like to mirror more.  We first need to start a pod serving the source image index so we can extract the list, by executing the following:

$ podman run -p50051:50051 -it registry.redhat.io/redhat/redhat-operator-index:v4.10
WARN[0000] DEPRECATION NOTICE:
Sqlite-based catalogs and their related subcommands are deprecated. Support for
them will be removed in a future release. Please migrate your catalog workflows
to the new file-based catalog format. 
WARN[0000] unable to set termination log path            error="open /dev/termination-log: permission denied"
INFO[0000] Keeping server open for infinite seconds      database=/database/index.db port=50051
INFO[0000] serving registry                              database=/database/index.db port=50051

In another terminal window on the same host where the above podman command was run, use grpcurl to extract a list of operator packages and redirect it into a packages.out file:

$ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out

The packages.out file that was generated will contain a listing of all the different operators that could potentially be mirrored.    If we go ahead and grep for cluster in the packages.out file we can see a few of those listings.   The specific one we are interested in is at the top of the list: advanced-cluster-management.

$ grep cluster packages.out 
  "name": "advanced-cluster-management"
  "name": "cluster-kube-descheduler-operator"
  "name": "cluster-logging"
  "name": "clusterresourceoverride"
  "name": "odf-multicluster-orchestrator"
  "name": "odr-cluster-operator"

Now that we have the exact name of the operator as listed in the operator index from registry.redhat.io, we can use it along with opm to generate our own operator registry index for our Red Hat Quay registry.   To do this we use the opm index prune command and specify the list of packages we want to keep.  The output will create a local index image:

$ opm index prune --from-index "registry.redhat.io/redhat/redhat-operator-index:v4.10" --packages 'advanced-cluster-management' --tag poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/olm-index/redhat-operator-index:v4.10
WARN[0000] DEPRECATION NOTICE:
Sqlite-based catalogs and their related subcommands are deprecated. Support for
them will be removed in a future release. Please migrate your catalog workflows
to the new file-based catalog format. 
INFO[0000] pruning the index                             packages="[advanced-cluster-management]"
INFO[0000] Pulling previous image registry.redhat.io/redhat/redhat-operator-index:v4.10 to get metadata  packages="[advanced-cluster-management]"
INFO[0000] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.10  packages="[advanced-cluster-management]"
INFO[0022] running /usr/bin/podman pull registry.redhat.io/redhat/redhat-operator-index:v4.10  packages="[advanced-cluster-management]"
INFO[0024] Getting label data from previous image        packages="[advanced-cluster-management]"
INFO[0024] running podman inspect                        packages="[advanced-cluster-management]"
INFO[0024] running podman create                         packages="[advanced-cluster-management]"
INFO[0024] running podman cp                             packages="[advanced-cluster-management]"
INFO[0029] running podman rm                             packages="[advanced-cluster-management]"
INFO[0030] deleting packages                             pkg=3scale-operator
INFO[0030] packages: [3scale-operator]                   pkg=3scale-operator
INFO[0031] deleting packages                             pkg=amq-broker-rhel8
INFO[0031] packages: [amq-broker-rhel8]                  pkg=amq-broker-rhel8
INFO[0031] deleting packages                             pkg=amq-online
INFO[0031] packages: [amq-online]                        pkg=amq-online
INFO[0031] deleting packages                             pkg=amq-streams
INFO[0031] packages: [amq-streams]                       pkg=amq-streams
INFO[0031] deleting packages                             pkg=amq7-interconnect-operator
INFO[0031] packages: [amq7-interconnect-operator]        pkg=amq7-interconnect-operator
INFO[0031] deleting packages                             pkg=ansible-automation-platform-operator
INFO[0031] packages: [ansible-automation-platform-operator]  pkg=ansible-automation-platform-operator
INFO[0032] deleting packages                             pkg=ansible-cloud-addons-operator
INFO[0032] packages: [ansible-cloud-addons-operator]     pkg=ansible-cloud-addons-operator
INFO[0032] deleting packages                             pkg=apicast-operator
INFO[0032] packages: [apicast-operator]                  pkg=apicast-operator
INFO[0032] deleting packages                             pkg=aws-efs-csi-driver-operator
INFO[0032] packages: [aws-efs-csi-driver-operator]       pkg=aws-efs-csi-driver-operator
INFO[0032] deleting packages                             pkg=businessautomation-operator
INFO[0032] packages: [businessautomation-operator]       pkg=businessautomation-operator
INFO[0032] deleting packages                             pkg=cincinnati-operator
INFO[0032] packages: [cincinnati-operator]               pkg=cincinnati-operator
INFO[0032] deleting packages                             pkg=cluster-kube-descheduler-operator
INFO[0032] packages: [cluster-kube-descheduler-operator]  pkg=cluster-kube-descheduler-operator
INFO[0032] deleting packages                             pkg=cluster-logging
INFO[0032] packages: [cluster-logging]                   pkg=cluster-logging
INFO[0032] deleting packages                             pkg=clusterresourceoverride
INFO[0032] packages: [clusterresourceoverride]           pkg=clusterresourceoverride
INFO[0032] deleting packages                             pkg=codeready-workspaces
INFO[0032] packages: [codeready-workspaces]              pkg=codeready-workspaces
INFO[0032] deleting packages                             pkg=codeready-workspaces2
INFO[0032] packages: [codeready-workspaces2]             pkg=codeready-workspaces2
INFO[0032] deleting packages                             pkg=compliance-operator
INFO[0032] packages: [compliance-operator]               pkg=compliance-operator
INFO[0033] deleting packages                             pkg=container-security-operator
INFO[0033] packages: [container-security-operator]       pkg=container-security-operator
INFO[0033] deleting packages                             pkg=costmanagement-metrics-operator
INFO[0033] packages: [costmanagement-metrics-operator]   pkg=costmanagement-metrics-operator
INFO[0033] deleting packages                             pkg=cryostat-operator
INFO[0033] packages: [cryostat-operator]                 pkg=cryostat-operator
INFO[0033] deleting packages                             pkg=datagrid
INFO[0033] packages: [datagrid]                          pkg=datagrid
INFO[0033] deleting packages                             pkg=devworkspace-operator
INFO[0033] packages: [devworkspace-operator]             pkg=devworkspace-operator
INFO[0033] deleting packages                             pkg=dpu-network-operator
INFO[0033] packages: [dpu-network-operator]              pkg=dpu-network-operator
INFO[0033] deleting packages                             pkg=eap
INFO[0033] packages: [eap]                               pkg=eap
INFO[0033] deleting packages                             pkg=elasticsearch-operator
INFO[0033] packages: [elasticsearch-operator]            pkg=elasticsearch-operator
INFO[0033] deleting packages                             pkg=external-dns-operator
INFO[0033] packages: [external-dns-operator]             pkg=external-dns-operator
INFO[0033] deleting packages                             pkg=file-integrity-operator
INFO[0033] packages: [file-integrity-operator]           pkg=file-integrity-operator
INFO[0033] deleting packages                             pkg=fuse-apicurito
INFO[0033] packages: [fuse-apicurito]                    pkg=fuse-apicurito
INFO[0033] deleting packages                             pkg=fuse-console
INFO[0033] packages: [fuse-console]                      pkg=fuse-console
INFO[0033] deleting packages                             pkg=fuse-online
INFO[0033] packages: [fuse-online]                       pkg=fuse-online
INFO[0033] deleting packages                             pkg=gatekeeper-operator-product
INFO[0033] packages: [gatekeeper-operator-product]       pkg=gatekeeper-operator-product
INFO[0033] deleting packages                             pkg=idp-mgmt-operator-product
INFO[0033] packages: [idp-mgmt-operator-product]         pkg=idp-mgmt-operator-product
INFO[0033] deleting packages                             pkg=integration-operator
INFO[0033] packages: [integration-operator]              pkg=integration-operator
INFO[0033] deleting packages                             pkg=jaeger-product
INFO[0033] packages: [jaeger-product]                    pkg=jaeger-product
INFO[0033] deleting packages                             pkg=jws-operator
INFO[0033] packages: [jws-operator]                      pkg=jws-operator
INFO[0033] deleting packages                             pkg=kiali-ossm
INFO[0033] packages: [kiali-ossm]                        pkg=kiali-ossm
INFO[0033] deleting packages                             pkg=klusterlet-product
INFO[0033] packages: [klusterlet-product]                pkg=klusterlet-product
INFO[0033] deleting packages                             pkg=kubernetes-nmstate-operator
INFO[0033] packages: [kubernetes-nmstate-operator]       pkg=kubernetes-nmstate-operator
INFO[0033] deleting packages                             pkg=kubevirt-hyperconverged
INFO[0033] packages: [kubevirt-hyperconverged]           pkg=kubevirt-hyperconverged
INFO[0034] deleting packages                             pkg=local-storage-operator
INFO[0034] packages: [local-storage-operator]            pkg=local-storage-operator
INFO[0034] deleting packages                             pkg=loki-operator
INFO[0034] packages: [loki-operator]                     pkg=loki-operator
INFO[0034] deleting packages                             pkg=mcg-operator
INFO[0034] packages: [mcg-operator]                      pkg=mcg-operator
INFO[0034] deleting packages                             pkg=metallb-operator
INFO[0034] packages: [metallb-operator]                  pkg=metallb-operator
INFO[0034] deleting packages                             pkg=mtc-operator
INFO[0034] packages: [mtc-operator]                      pkg=mtc-operator
INFO[0034] deleting packages                             pkg=mtv-operator
INFO[0034] packages: [mtv-operator]                      pkg=mtv-operator
INFO[0034] deleting packages                             pkg=nfd
INFO[0034] packages: [nfd]                               pkg=nfd
INFO[0034] deleting packages                             pkg=node-healthcheck-operator
INFO[0034] packages: [node-healthcheck-operator]         pkg=node-healthcheck-operator
INFO[0034] deleting packages                             pkg=node-maintenance-operator
INFO[0034] packages: [node-maintenance-operator]         pkg=node-maintenance-operator
INFO[0034] deleting packages                             pkg=numaresources-operator
INFO[0034] packages: [numaresources-operator]            pkg=numaresources-operator
INFO[0034] deleting packages                             pkg=ocs-operator
INFO[0034] packages: [ocs-operator]                      pkg=ocs-operator
INFO[0034] deleting packages                             pkg=odf-csi-addons-operator
INFO[0034] packages: [odf-csi-addons-operator]           pkg=odf-csi-addons-operator
INFO[0034] deleting packages                             pkg=odf-lvm-operator
INFO[0034] packages: [odf-lvm-operator]                  pkg=odf-lvm-operator
INFO[0034] deleting packages                             pkg=odf-multicluster-orchestrator
INFO[0034] packages: [odf-multicluster-orchestrator]     pkg=odf-multicluster-orchestrator
INFO[0034] deleting packages                             pkg=odf-operator
INFO[0034] packages: [odf-operator]                      pkg=odf-operator
INFO[0034] deleting packages                             pkg=odr-cluster-operator
INFO[0034] packages: [odr-cluster-operator]              pkg=odr-cluster-operator
INFO[0034] deleting packages                             pkg=odr-hub-operator
INFO[0034] packages: [odr-hub-operator]                  pkg=odr-hub-operator
INFO[0034] deleting packages                             pkg=openshift-cert-manager-operator
INFO[0034] packages: [openshift-cert-manager-operator]   pkg=openshift-cert-manager-operator
INFO[0034] deleting packages                             pkg=openshift-gitops-operator
INFO[0034] packages: [openshift-gitops-operator]         pkg=openshift-gitops-operator
INFO[0034] deleting packages                             pkg=openshift-pipelines-operator-rh
INFO[0034] packages: [openshift-pipelines-operator-rh]   pkg=openshift-pipelines-operator-rh
INFO[0034] deleting packages                             pkg=openshift-secondary-scheduler-operator
INFO[0034] packages: [openshift-secondary-scheduler-operator]  pkg=openshift-secondary-scheduler-operator
INFO[0034] deleting packages                             pkg=openshift-special-resource-operator
INFO[0034] packages: [openshift-special-resource-operator]  pkg=openshift-special-resource-operator
INFO[0034] deleting packages                             pkg=opentelemetry-product
INFO[0034] packages: [opentelemetry-product]             pkg=opentelemetry-product
INFO[0034] deleting packages                             pkg=performance-addon-operator
INFO[0034] packages: [performance-addon-operator]        pkg=performance-addon-operator
INFO[0034] deleting packages                             pkg=poison-pill-manager
INFO[0034] packages: [poison-pill-manager]               pkg=poison-pill-manager
INFO[0034] deleting packages                             pkg=ptp-operator
INFO[0034] packages: [ptp-operator]                      pkg=ptp-operator
INFO[0034] deleting packages                             pkg=quay-bridge-operator
INFO[0034] packages: [quay-bridge-operator]              pkg=quay-bridge-operator
INFO[0034] deleting packages                             pkg=quay-operator
INFO[0034] packages: [quay-operator]                     pkg=quay-operator
INFO[0034] deleting packages                             pkg=red-hat-camel-k
INFO[0034] packages: [red-hat-camel-k]                   pkg=red-hat-camel-k
INFO[0034] deleting packages                             pkg=redhat-oadp-operator
INFO[0034] packages: [redhat-oadp-operator]              pkg=redhat-oadp-operator
INFO[0034] deleting packages                             pkg=rh-service-binding-operator
INFO[0034] packages: [rh-service-binding-operator]       pkg=rh-service-binding-operator
INFO[0034] deleting packages                             pkg=rhacs-operator
INFO[0034] packages: [rhacs-operator]                    pkg=rhacs-operator
INFO[0034] deleting packages                             pkg=rhpam-kogito-operator
INFO[0034] packages: [rhpam-kogito-operator]             pkg=rhpam-kogito-operator
INFO[0035] deleting packages                             pkg=rhsso-operator
INFO[0035] packages: [rhsso-operator]                    pkg=rhsso-operator
INFO[0035] deleting packages                             pkg=sandboxed-containers-operator
INFO[0035] packages: [sandboxed-containers-operator]     pkg=sandboxed-containers-operator
INFO[0035] deleting packages                             pkg=serverless-operator
INFO[0035] packages: [serverless-operator]               pkg=serverless-operator
INFO[0035] deleting packages                             pkg=service-registry-operator
INFO[0035] packages: [service-registry-operator]         pkg=service-registry-operator
INFO[0035] deleting packages                             pkg=servicemeshoperator
INFO[0035] packages: [servicemeshoperator]               pkg=servicemeshoperator
INFO[0035] deleting packages                             pkg=skupper-operator
INFO[0035] packages: [skupper-operator]                  pkg=skupper-operator
INFO[0035] deleting packages                             pkg=sriov-network-operator
INFO[0035] packages: [sriov-network-operator]            pkg=sriov-network-operator
INFO[0035] deleting packages                             pkg=submariner
INFO[0035] packages: [submariner]                        pkg=submariner
INFO[0035] deleting packages                             pkg=tang-operator
INFO[0035] packages: [tang-operator]                     pkg=tang-operator
INFO[0035] deleting packages                             pkg=vertical-pod-autoscaler
INFO[0035] packages: [vertical-pod-autoscaler]           pkg=vertical-pod-autoscaler
INFO[0035] deleting packages                             pkg=web-terminal
INFO[0035] packages: [web-terminal]                      pkg=web-terminal
INFO[0035] deleting packages                             pkg=windows-machine-config-operator
INFO[0035] packages: [windows-machine-config-operator]   pkg=windows-machine-config-operator
INFO[0035] Generating dockerfile                         packages="[advanced-cluster-management]"
INFO[0035] writing dockerfile: ./index.Dockerfile2850553610  packages="[advanced-cluster-management]"
INFO[0035] running podman build                          packages="[advanced-cluster-management]"
INFO[0035] [podman build --format docker -f ./index.Dockerfile2850553610 -t poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/olm-index/redhat-operator-index:v4.10 .]  packages="[advanced-cluster-management]" 
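
Before pushing, we can quickly confirm the pruned index image was built and landed in local container storage:

$ podman images | grep redhat-operator-index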

With the index created we can now push it up to our Red Hat Quay registry with the podman push command.  It should be noted that in this example we are pushing into the rhacm2 organization, which must exist before attempting the push.
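
If the rhacm2 organization does not exist yet, it can be created in the Quay web console, or via the Quay API; a hedged curl sketch follows, where the bearer token placeholder would come from a Quay OAuth application you create:

$ curl -k -X POST \
    -H "Authorization: Bearer <quay-oauth-token>" \
    -H "Content-Type: application/json" \
    -d '{"name": "rhacm2"}' \
    https://poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/api/v1/organization/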

$ podman push poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/olm-index/redhat-operator-index:v4.10 --tls-verify=false
Getting image source signatures
Copying blob 0d6867937695 done  
Copying blob eeaf5a4136cb done  
Copying blob 9dc1e45bb9ee done  
Copying blob 457de0330aa6 done  
Copying blob 324075f0d95e done  
Copying blob 5b1fa8e3e100 done  
Copying config f6bfd86300 done  
Writing manifest to image destination
Storing signatures

Once we have pushed the index up we can then use the oc adm catalog mirror command to mirror the images:

$ oc adm catalog mirror poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/olm-index/redhat-operator-index:v4.10 poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2 -a /home/bschmaus/quay-merged-pull-secret.json --insecure

!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
!! DEPRECATION NOTICE:
!!   Sqlite-based catalogs are deprecated. Support for them will be removed in a
!!   future release. Please migrate your catalog workflows to the new file-based
!!   catalog format.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

src image has index label for database path: /database/index.db
using index path mapping: /database/index.db:/tmp/3687994887
wrote database to /tmp/3687994887
using database at: /tmp/3687994887/index.db
poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/
  rhacm2/openshift4-ose-configmap-reloader
    blobs:
      registry.redhat.io/openshift4/ose-configmap-reloader sha256:b77bb434db5a2c43574630adfbe80aa3b36c179ccc20541ae91e2812a3ad9ce2 1.461KiB
      registry.redhat.io/openshift4/ose-configmap-reloader sha256:d8dcf8c7f6920565fec6db5c1479312aad177148dadd89e07f838cd7f44fa074 1.474KiB
      registry.redhat.io/openshift4/ose-configmap-reloader sha256:7d2d8330490119f01d1087adb98a324b3292f4711436cc4f64a8d9cb081fc345 1.479KiB
(...)
uploading: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:cf7ba91a4dc5dbc52d2fd9c09127be351a3d5292f1f2edf5171d2fe79f573850 18.93MiB
uploading: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:83b47d1b1652022425cbee522218c3758b8c59b052d5e845d5f8d897e31609a7 18.93MiB
mounted: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:2a99c93da16827d9a6254f86f495d2c72c62a916f9c398577577221d35d2c790 37.81MiB
mounted: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:d46336f50433ab27336fad8f9b251b2f68a66d376c902dfca23a6851acae502c 37.47MiB
mounted: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:0016483f9a1476d5d57b7a871ed4d3994ba802c643f53f6037d2b28f799f963f 35.94MiB
uploading: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:59f461f3c96d9c3b9d77d9de5fc4df45e4f6965264b2d047d7474d130ca8f9b4 18.93MiB
mounted: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:a9e23b64ace00a199db21d302292b434e9d3956d79319d958ecc19603d00c946 37.79MiB
uploading: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:30ee6976ea1e5387884dd0299fd122b76ec6e1d4fdad01d01302997f2d461edc 18.63MiB
uploading: poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-klusterlet-addon-controller-rhel8 sha256:c4e367c519079053c4297b06640529659c996e21823f2c580e533468b26a2de7 19.07MiB
sha256:acedcbb6483e2b6b51f69900de4f582f48a486114ef6ecaede82f1f549fb4ebf poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-acm-must-gather-rhel8
(...)
sha256:bcf26708e40297fcc5c09657aa540408930e51e7319ec7612fb5529936746cc0 poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-search-api-rhel8:2290f420
sha256:fcfbd48e615e46fe5d33e3059aedbc2a75f4f10489dbe91fa072610dbbe86130 poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-search-api-rhel8:4ad02099
sha256:601f4d74ece9da8888488d303e05220dc9cd9796b07a95cf2001d3a42198b8de poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-search-api-rhel8:2e28486f
sha256:623281445ffba2f86d4d2708e9aaa84b566a461e9bf29f703589cdb27492dcd3 poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/rhacm2-search-api-rhel8:7fa459f6
info: Mirroring completed in 16m46.55s (44.49MB/s)
no digest mapping available for poc-registry-quay-quay-poc.apps.kni20.schmaustech.com/rhacm2/olm-index/redhat-operator-index:v4.10, skip writing to ImageContentSourcePolicy
wrote mirroring manifests to manifests-olm-index/redhat-operator-index-1651068403
deleted dir /tmp/2002258786
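
The manifests directory noted in the output above typically contains an imageContentSourcePolicy.yaml and a catalogSource.yaml.  Applying both on the cluster points OLM at the mirrored content; something along these lines, reusing the generated directory name from this run:

$ oc apply -f manifests-olm-index/redhat-operator-index-1651068403/imageContentSourcePolicy.yaml
$ oc apply -f manifests-olm-index/redhat-operator-index-1651068403/catalogSource.yaml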

This concludes the demonstration of how to mirror specific operators down to one's own instance of Red Hat Quay.

Thursday, October 28, 2021

Cluster Infrastructure Management with Red Hat Advanced Cluster Management for Kubernetes

 


In Red Hat Advanced Cluster Management for Kubernetes 2.4 there is a new component in technology preview called central infrastructure management.   This component provides separate interfaces for an infrastructure administrator and a cluster creator.  From the infrastructure admin perspective it allows for management of on-premise compute resources across different data centers and/or locations.  Once those compute resources have been identified, it lets the cluster creators, who might be part of a different Dev/Ops team, consume and allocate those resources for new OpenShift clusters.  The following video demonstrates a walkthrough of what that process looks like:



Saturday, May 01, 2021

Configuring Noobaa S3 Storage for Red Hat Advanced Cluster Management Observability

 


Never mind other storage vendors: Noobaa in OpenShift Container Storage (OCS) can provide all the object storage Red Hat Advanced Cluster Management Observability will ever need.   In the following blog I will demonstrate how to configure the Noobaa backend in OCS to be used by Red Hat Advanced Cluster Management Observability.

Red Hat Advanced Cluster Management consists of several multicluster components, which are used to access and manage a fleet of OpenShift clusters.  With the observability service enabled, you can use Red Hat Advanced Cluster Management to gain insight about and optimize a fleet of managed clusters.

First let's discuss some assumptions I make about the setup:

- This is a 3 master, 3 (or more) worker OpenShift cluster
- OCP 4.6.19 (or higher) with OCS 4.6.4 in a hyperconverged configuration
- RHACM 2.2.2 is installed on the same cluster
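
Since the claim we are about to create references the Noobaa storage class by name, it is also worth a quick sanity check that the class exists before we proceed:

$ oc get storageclass openshift-storage.noobaa.io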

With the above assumptions stated, let's move on to configuring a Noobaa object bucket.   The first thing we need to do is create a resource yaml file that will create our object bucket claim.  Below is an example:

$ cat << EOF > ~/noobaa-object-storage.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-schmaustech
spec:
  generateBucketName: obc-schmaustech-bucket
  storageClassName: openshift-storage.noobaa.io
EOF

Once we have created our object bucket resource yaml we can go ahead and create it in our OpenShift cluster with the following command:

$ oc create -f ~/noobaa-object-storage.yaml
objectbucketclaim.objectbucket.io/obc-schmaustech created


Once the object bucket resource is created we can see it by listing current object buckets:

$ oc get objectbucket
NAME                          STORAGE-CLASS                 CLAIM-NAMESPACE   CLAIM-NAME        RECLAIM-POLICY   PHASE   AGE
obc-default-obc-schmaustech   openshift-storage.noobaa.io   default           obc-schmaustech   Delete           Bound   30s

There are some bits of information we need to gather from the object bucket we created, which we will need when configuring the thanos-object-bucket resource yaml required for our Observability configuration.  Those bits are found by describing the object bucket and its secret.   First let's look at the object bucket itself:

$ oc describe objectbucket obc-default-obc-schmaustech
Name:         obc-default-obc-schmaustech
Namespace:    
Labels:       app=noobaa
              bucket-provisioner=openshift-storage.noobaa.io-obc
              noobaa-domain=openshift-storage.noobaa.io
Annotations:  <none>
API Version:  objectbucket.io/v1alpha1
Kind:         ObjectBucket
Metadata:
  Creation Timestamp:  2021-05-01T00:12:54Z
  Finalizers:
    objectbucket.io/finalizer
  Generation:  1
  Managed Fields:
    API Version:  objectbucket.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"objectbucket.io/finalizer":
        f:labels:
          .:
          f:app:
          f:bucket-provisioner:
          f:noobaa-domain:
      f:spec:
        .:
        f:additionalState:
          .:
          f:account:
          f:bucketclass:
          f:bucketclassgeneration:
        f:claimRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
          f:namespace:
          f:uid:
        f:endpoint:
          .:
          f:additionalConfig:
          f:bucketHost:
          f:bucketName:
          f:bucketPort:
          f:region:
          f:subRegion:
        f:reclaimPolicy:
        f:storageClassName:
      f:status:
        .:
        f:phase:
    Manager:         noobaa-operator
    Operation:       Update
    Time:            2021-05-01T00:12:54Z
  Resource Version:  4864265
  Self Link:         /apis/objectbucket.io/v1alpha1/objectbuckets/obc-default-obc-schmaustech
  UID:               9c7eddae-4453-439b-826f-f226513d78f4
Spec:
  Additional State:
    Account:                obc-account.obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e.608c9d05@noobaa.io
    Bucketclass:            noobaa-default-bucket-class
    Bucketclassgeneration:  1
  Claim Ref:
    API Version:  objectbucket.io/v1alpha1
    Kind:         ObjectBucketClaim
    Name:         obc-schmaustech
    Namespace:    default
    UID:          e123d2c8-2f9d-4f39-9a83-ede316b8a5fe
  Endpoint:
    Additional Config:
    Bucket Host:       s3.openshift-storage.svc
    Bucket Name:       obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e
    Bucket Port:       443
    Region:            
    Sub Region:        
  Reclaim Policy:      Delete
  Storage Class Name:  openshift-storage.noobaa.io
Status:
  Phase:  Bound
Events:   <none>

In the object bucket describe output we are specifically interested in the bucket name and the bucket host.  Below let's capture the bucket name, assign it to a variable and then echo it out to confirm the variable was set correctly:

$ BUCKET_NAME=`oc describe objectbucket obc-default-obc-schmaustech|grep 'Bucket Name'|cut -d: -f2|tr -d " "`
$ echo $BUCKET_NAME
obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e

Let's do the same thing for the bucket host information.  Again we will assign it to a variable and then echo the variable to confirm it was set correctly:

$ BUCKET_HOST=`oc describe objectbucket obc-default-obc-schmaustech|grep 'Bucket Host'|cut -d: -f2|tr -d " "`
$ echo $BUCKET_HOST
s3.openshift-storage.svc
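
As an alternative to parsing the describe output with grep and cut, the same values can typically be pulled straight out of the resource with jsonpath, which is a bit less fragile.  A sketch against the same object bucket:

$ BUCKET_NAME=`oc get objectbucket obc-default-obc-schmaustech -o jsonpath='{.spec.endpoint.bucketName}'`
$ BUCKET_HOST=`oc get objectbucket obc-default-obc-schmaustech -o jsonpath='{.spec.endpoint.bucketHost}'`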

After gathering the bucket name and bucket host we also need to get the access and secret keys for our bucket.  These are stored in a secret which has the same name as the metadata name defined in the original object bucket resource file we created above.  In our example the metadata name was obc-schmaustech.  Let's show that secret below:

$ oc get secret obc-schmaustech
NAME              TYPE     DATA   AGE
obc-schmaustech   Opaque   2      117s

The access and secret keys are contained in the secret resource, and we can see them by getting the secret and asking for the yaml version of the output as we have done below:

$ oc get secret obc-schmaustech -o yaml
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: V3M2TmpGdWVLd3Vjb2VoTHZVTUo=
  AWS_SECRET_ACCESS_KEY: ck4vOTBaM2NkZWJvOVJLQStaYlBsK3VveWZOYmFpN0s0OU5KRFVKag==
kind: Secret
metadata:
  creationTimestamp: "2021-05-01T00:12:54Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
    app: noobaa
    bucket-provisioner: openshift-storage.noobaa.io-obc
    noobaa-domain: openshift-storage.noobaa.io
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:AWS_ACCESS_KEY_ID: {}
        f:AWS_SECRET_ACCESS_KEY: {}
      f:metadata:
        f:finalizers:
          .: {}
          v:"objectbucket.io/finalizer": {}
        f:labels:
          .: {}
          f:app: {}
          f:bucket-provisioner: {}
          f:noobaa-domain: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"e123d2c8-2f9d-4f39-9a83-ede316b8a5fe"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:type: {}
    manager: noobaa-operator
    operation: Update
    time: "2021-05-01T00:12:54Z"
  name: obc-schmaustech
  namespace: default
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ObjectBucketClaim
    name: obc-schmaustech
    uid: e123d2c8-2f9d-4f39-9a83-ede316b8a5fe
  resourceVersion: "4864261"
  selfLink: /api/v1/namespaces/default/secrets/obc-schmaustech
  uid: eda5cd99-dc57-4c7b-acf3-377343d6fef8
type: Opaque

The access and secret keys are base64 encoded, so we need to decode them as we gather them.  As we did with the bucket name and bucket host, we will assign them to variables.  First let's pull the access key out of the yaml, decode it, assign it to a variable and confirm the variable holds the access key content:

$ AWS_ACCESS_KEY_ID=`oc get secret obc-schmaustech -o yaml|grep -m1 AWS_ACCESS_KEY_ID|cut -d: -f2|tr -d " "| base64 -d`
$ echo $AWS_ACCESS_KEY_ID
Ws6NjFueKwucoehLvUMJ

We will do the same for the secret key and verify again:

$ AWS_SECRET_ACCESS_KEY=`oc get secret obc-schmaustech -o yaml|grep -m1 AWS_SECRET_ACCESS_KEY|cut -d: -f2|tr -d " "| base64 -d`
$ echo $AWS_SECRET_ACCESS_KEY
rN/90Z3cdebo9RKA+ZbPl+uoyfNbai7K49NJDUJj
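
The same jsonpath approach works for the secret data, pulling each key directly and piping it through base64 for decoding.  A sketch using the same secret:

$ AWS_ACCESS_KEY_ID=`oc get secret obc-schmaustech -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d`
$ AWS_SECRET_ACCESS_KEY=`oc get secret obc-schmaustech -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d`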

Now that we have our four variables containing the values for the bucket name, bucket host, access key and secret key, we are ready to create the thanos-object-storage resource yaml file, which we need in order to start the configuration and deployment of the Red Hat Advanced Cluster Management Observability component.  This file provides the observability service with the information about the S3 object storage.   Below is how we can create the file; note that the shell will substitute the variable values into the resource definition:

$ cat << EOF > ~/thanos-object-storage.yaml
apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: $BUCKET_NAME
      endpoint: $BUCKET_HOST
      insecure: false
      access_key: $AWS_ACCESS_KEY_ID
      secret_key: $AWS_SECRET_ACCESS_KEY
      trace:
        enable: true
      http_config:
        insecure_skip_verify: true
EOF
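
Because the heredoc expands the shell variables at creation time, an unset variable would silently leave an empty field in the yaml.  A quick grep of the generated file is a cheap way to confirm the values made it in (output shown with the values from this example):

$ grep -E 'bucket:|endpoint:|access_key:|secret_key:' ~/thanos-object-storage.yaml
      bucket: obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e
      endpoint: s3.openshift-storage.svc
      access_key: Ws6NjFueKwucoehLvUMJ
      secret_key: rN/90Z3cdebo9RKA+ZbPl+uoyfNbai7K49NJDUJj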

Once we have created the definition, we can go ahead and create the open-cluster-management-observability namespace:

$ oc create namespace open-cluster-management-observability
namespace/open-cluster-management-observability created

Next we want to assign the cluster's pull-secret to the DOCKER_CONFIG_JSON variable:

$ DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-`
# .dockerconfigjson
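
Note that the multiclusterobservability resource we define below references an image pull secret named multiclusterhub-operator-pull-secret.  Per the Red Hat Advanced Cluster Management documentation, the extracted pull-secret contents are used to create that secret in the observability namespace, along the lines of the following:

$ oc create secret generic multiclusterhub-operator-pull-secret -n open-cluster-management-observability --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" --type=kubernetes.io/dockerconfigjson
secret/multiclusterhub-operator-pull-secret created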

At this point we can go ahead and create the thanos-object-storage resource from the yaml file we created:

$ oc create -f thanos-object-storage.yaml -n open-cluster-management-observability
secret/thanos-object-storage created
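
A quick get confirms the secret landed in the expected namespace (the age will of course vary):

$ oc get secret thanos-object-storage -n open-cluster-management-observability
NAME                    TYPE     DATA   AGE
thanos-object-storage   Opaque   1      10s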

Once the thanos-object-storage resource is created we can create a multiclusterobservability resource yaml file like the example below.  Notice that it references the thanos-object-storage resource we created above:

$ cat << EOF > ~/multiclusterobservability_cr.yaml
apiVersion: observability.open-cluster-management.io/v1beta1
kind: MultiClusterObservability
metadata:
  name: observability #Your customized name of MulticlusterObservability CR
spec:
  availabilityConfig: High             # Available values are High or Basic
  imagePullPolicy: Always
  imagePullSecret: multiclusterhub-operator-pull-secret
  observabilityAddonSpec:              # The ObservabilityAddonSpec is the global settings for all managed clusters
    enableMetrics: true                # EnableMetrics indicates the observability addon push metrics to hub server
    interval: 60                       # Interval for the observability addon push metrics to hub server
  retentionResolution1h: 5d            # How long to retain samples of 1 hour in bucket
  retentionResolution5m: 3d
  retentionResolutionRaw: 1d
  storageConfigObject:                 # Specifies the storage to be used by Observability
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml
EOF

We can cat out the file to confirm it looks correct:

$ cat multiclusterobservability_cr.yaml
apiVersion: observability.open-cluster-management.io/v1beta1
kind: MultiClusterObservability
metadata:
  name: observability #Your customized name of MulticlusterObservability CR
spec:
  availabilityConfig: High             # Available values are High or Basic
  imagePullPolicy: Always
  imagePullSecret: multiclusterhub-operator-pull-secret
  observabilityAddonSpec:              # The ObservabilityAddonSpec is the global settings for all managed clusters
    enableMetrics: true                # EnableMetrics indicates the observability addon push metrics to hub server
    interval: 60                       # Interval for the observability addon push metrics to hub server
  retentionResolution1h: 5d            # How long to retain samples of 1 hour in bucket
  retentionResolution5m: 3d
  retentionResolutionRaw: 1d
  storageConfigObject:                 # Specifies the storage to be used by Observability
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml

At this point we can double check that nothing is running under the open-cluster-management-observability namespace:

$ oc get pods -n open-cluster-management-observability
No resources found in open-cluster-management-observability namespace.


Once we have confirmed there are no resources running, we can apply the multiclusterobservability resource file we created to start the deployment of the observability components:

$ oc apply -f multiclusterobservability_cr.yaml
multiclusterobservability.observability.open-cluster-management.io/observability created
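
While the operator rolls everything out, progress can be followed by watching the pods in the namespace, and the custom resource itself can be queried to check on its state; something like the following should work:

$ oc get pods -n open-cluster-management-observability -w
$ oc get multiclusterobservability observability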

It will take a few minutes for the associated pods to come up, but once they do, listing the pods under the open-cluster-management-observability namespace should show something like the following:

$  oc get pods -n open-cluster-management-observability
NAME                                                              READY   STATUS    RESTARTS   AGE
alertmanager-0                                                    2/2     Running   0          97s
alertmanager-1                                                    2/2     Running   0          73s
alertmanager-2                                                    2/2     Running   0          57s
grafana-546fb568b4-bqn22                                          2/2     Running   0          97s
grafana-546fb568b4-hxpcz                                          2/2     Running   0          97s
observability-observatorium-observatorium-api-85cf58bd8d-nlpxf    1/1     Running   0          74s
observability-observatorium-observatorium-api-85cf58bd8d-qtm98    1/1     Running   0          74s
observability-observatorium-thanos-compact-0                      1/1     Running   0          74s
observability-observatorium-thanos-query-58dc8c8ccb-4p6l8         1/1     Running   0          74s
observability-observatorium-thanos-query-58dc8c8ccb-6tmvd         1/1     Running   0          74s
observability-observatorium-thanos-query-frontend-f8869cdf66c2c   1/1     Running   0          74s
observability-observatorium-thanos-query-frontend-f8869cdfstwrg   1/1     Running   0          75s
observability-observatorium-thanos-receive-controller-56c9x6tt5   1/1     Running   0          74s
observability-observatorium-thanos-receive-default-0              1/1     Running   0          74s
observability-observatorium-thanos-receive-default-1              1/1     Running   0          56s
observability-observatorium-thanos-receive-default-2              1/1     Running   0          37s
observability-observatorium-thanos-rule-0                         2/2     Running   0          74s
observability-observatorium-thanos-rule-1                         2/2     Running   0          49s
observability-observatorium-thanos-rule-2                         2/2     Running   0          32s
observability-observatorium-thanos-store-memcached-0              2/2     Running   0          74s
observability-observatorium-thanos-store-memcached-1              2/2     Running   0          70s
observability-observatorium-thanos-store-memcached-2              2/2     Running   0          66s
observability-observatorium-thanos-store-shard-0-0                1/1     Running   0          75s
observability-observatorium-thanos-store-shard-1-0                1/1     Running   0          74s
observability-observatorium-thanos-store-shard-2-0                1/1     Running   0          75s
observatorium-operator-797ddbd9d-kqpm6                            1/1     Running   0          98s
rbac-query-proxy-769b5dbcc5-qprrr                                 1/1     Running   0          85s
rbac-query-proxy-769b5dbcc5-s5rbm                                 1/1     Running   0          91s

With the pods running under the open-cluster-management-observability namespace, we can now confirm that the observability service is running by logging into the Red Hat Advanced Cluster Management console and going to Observe environments.  In the upper right hand corner of the screen a Grafana link should now be present, as in the screenshot below:


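If the command line is preferred, the Grafana console hostname can typically also be pulled from the route the deployment created, assuming the route is named grafana in this namespace:

$ oc get route grafana -n open-cluster-management-observability -o jsonpath='{.spec.host}'
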
Once you click on the Grafana link, the following observability dashboard will appear, and it may even already be showing metrics from the cluster collections:


If we click on the CPU metric we can even see the breakdown of what is using the CPU of the local-cluster:


At this point we can conclude that the Red Hat Advanced Cluster Management Observability component is installed successfully and using the Noobaa S3 object bucket we created.

Friday, April 30, 2021

Using Red Hat Advanced Cluster Management to Deploy OpenShift Baremetal IPI Cluster

 


The following video is a demonstration of how to use Red Hat Advanced Cluster Management to deploy an OpenShift cluster via the baremetal IPI deployment process.   The video explains the various required fields and shows the process end to end.