Saturday, May 01, 2021

Configuring Noobaa S3 Storage for Red Hat Advanced Cluster Management Observability

Never mind other storage vendors: Noobaa in OpenShift Container Storage (OCS) can provide all the object storage Red Hat Advanced Cluster Management Observability will ever need.  In the following blog I will demonstrate how to configure the Noobaa backend in OCS for use by Red Hat Advanced Cluster Management Observability.

Red Hat Advanced Cluster Management consists of several multicluster components, which are used to access and manage a fleet of OpenShift clusters.  With the observability service enabled, you can use Red Hat Advanced Cluster Management to gain insight into and optimize your fleet of managed clusters.

First, let's review the assumptions I make about the setup:

-This is an OpenShift cluster with 3 masters and 3 (or more) workers

-OCP 4.6.19 (or higher) with OCS 4.6.4 in a hyperconverged configuration

-RHACM 2.2.2 is installed on the same cluster

With those assumptions stated, let's move on to configuring a Noobaa object bucket.   The first thing we need to do is create a resource yaml file containing the ObjectBucketClaim that will provision our bucket.  Below is an example:

$ cat << EOF > ~/noobaa-object-storage.yaml
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: obc-schmaustech
spec:
  generateBucketName: obc-schmaustech-bucket
  storageClassName: openshift-storage.noobaa.io
EOF

Once we have created our object bucket resource yaml, we need to go ahead and create it in our OpenShift cluster with the following command:

$ oc create -f ~/noobaa-object-storage.yaml
objectbucketclaim.objectbucket.io/obc-schmaustech created
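
Before moving on, we can also confirm the claim itself reached the Bound phase; the following jsonpath query should return Bound once provisioning completes:

$ oc get objectbucketclaim obc-schmaustech -o jsonpath='{.status.phase}'
Bound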


Once the object bucket resource is created, we can see it by listing the current object buckets:

$ oc get objectbucket
NAME                          STORAGE-CLASS                 CLAIM-NAMESPACE   CLAIM-NAME        RECLAIM-POLICY   PHASE   AGE
obc-default-obc-schmaustech   openshift-storage.noobaa.io   default           obc-schmaustech   Delete           Bound   30s

There are a few bits of information we need to gather from the object bucket we created, which we will need for the thanos-object-storage resource yaml required by our Observability configuration.  Those bits are found by describing the object bucket and by inspecting the object bucket's secret.   First let's look at the object bucket itself:

$ oc describe objectbucket obc-default-obc-schmaustech
Name:         obc-default-obc-schmaustech
Namespace:    
Labels:       app=noobaa
              bucket-provisioner=openshift-storage.noobaa.io-obc
              noobaa-domain=openshift-storage.noobaa.io
Annotations:  <none>
API Version:  objectbucket.io/v1alpha1
Kind:         ObjectBucket
Metadata:
  Creation Timestamp:  2021-05-01T00:12:54Z
  Finalizers:
    objectbucket.io/finalizer
  Generation:  1
  Managed Fields:
    API Version:  objectbucket.io/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"objectbucket.io/finalizer":
        f:labels:
          .:
          f:app:
          f:bucket-provisioner:
          f:noobaa-domain:
      f:spec:
        .:
        f:additionalState:
          .:
          f:account:
          f:bucketclass:
          f:bucketclassgeneration:
        f:claimRef:
          .:
          f:apiVersion:
          f:kind:
          f:name:
          f:namespace:
          f:uid:
        f:endpoint:
          .:
          f:additionalConfig:
          f:bucketHost:
          f:bucketName:
          f:bucketPort:
          f:region:
          f:subRegion:
        f:reclaimPolicy:
        f:storageClassName:
      f:status:
        .:
        f:phase:
    Manager:         noobaa-operator
    Operation:       Update
    Time:            2021-05-01T00:12:54Z
  Resource Version:  4864265
  Self Link:         /apis/objectbucket.io/v1alpha1/objectbuckets/obc-default-obc-schmaustech
  UID:               9c7eddae-4453-439b-826f-f226513d78f4
Spec:
  Additional State:
    Account:                obc-account.obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e.608c9d05@noobaa.io
    Bucketclass:            noobaa-default-bucket-class
    Bucketclassgeneration:  1
  Claim Ref:
    API Version:  objectbucket.io/v1alpha1
    Kind:         ObjectBucketClaim
    Name:         obc-schmaustech
    Namespace:    default
    UID:          e123d2c8-2f9d-4f39-9a83-ede316b8a5fe
  Endpoint:
    Additional Config:
    Bucket Host:       s3.openshift-storage.svc
    Bucket Name:       obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e
    Bucket Port:       443
    Region:            
    Sub Region:        
  Reclaim Policy:      Delete
  Storage Class Name:  openshift-storage.noobaa.io
Status:
  Phase:  Bound
Events:   <none>

In the object bucket describe output we are specifically interested in the bucket name and the bucket host.  Below, let's capture the bucket name, assign it to a variable and then echo it out to confirm the variable was set correctly:

$ BUCKET_NAME=`oc describe objectbucket obc-default-obc-schmaustech|grep 'Bucket Name'|cut -d: -f2|tr -d " "`
$ echo $BUCKET_NAME
obc-schmaustech-bucket-f6508472-4ba6-405d-9e39-881b45a7344e

Let's do the same thing for the bucket host information.  Again, we will assign it to a variable and then echo the variable to confirm it was set correctly:

$ BUCKET_HOST=`oc describe objectbucket obc-default-obc-schmaustech|grep 'Bucket Host'|cut -d: -f2|tr -d " "`
$ echo $BUCKET_HOST
s3.openshift-storage.svc
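
As an aside, both values can also be pulled directly out of the resource with jsonpath rather than grep and cut.  A minimal equivalent, using the same object bucket name and the endpoint fields we saw in the describe output:

$ BUCKET_NAME=$(oc get objectbucket obc-default-obc-schmaustech -o jsonpath='{.spec.endpoint.bucketName}')
$ BUCKET_HOST=$(oc get objectbucket obc-default-obc-schmaustech -o jsonpath='{.spec.endpoint.bucketHost}')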

After we have gathered the bucket name and bucket host, we also need the access and secret keys for our bucket.  These are stored in a secret which has the same name as the metadata name defined in the original object bucket claim resource file we created above.  In our example the metadata name was obc-schmaustech.  Let's show that secret below:

$ oc get secret obc-schmaustech
NAME              TYPE     DATA   AGE
obc-schmaustech   Opaque   2      117s

The access and secret keys are contained in the secret resource, and we can see them if we get the secret and ask for the yaml version of the output, as we have done below:

$ oc get secret obc-schmaustech -o yaml
apiVersion: v1
data:
  AWS_ACCESS_KEY_ID: V3M2TmpGdWVLd3Vjb2VoTHZVTUo=
  AWS_SECRET_ACCESS_KEY: ck4vOTBaM2NkZWJvOVJLQStaYlBsK3VveWZOYmFpN0s0OU5KRFVKag==
kind: Secret
metadata:
  creationTimestamp: "2021-05-01T00:12:54Z"
  finalizers:
  - objectbucket.io/finalizer
  labels:
    app: noobaa
    bucket-provisioner: openshift-storage.noobaa.io-obc
    noobaa-domain: openshift-storage.noobaa.io
  managedFields:
  - apiVersion: v1
    fieldsType: FieldsV1
    fieldsV1:
      f:data:
        .: {}
        f:AWS_ACCESS_KEY_ID: {}
        f:AWS_SECRET_ACCESS_KEY: {}
      f:metadata:
        f:finalizers:
          .: {}
          v:"objectbucket.io/finalizer": {}
        f:labels:
          .: {}
          f:app: {}
          f:bucket-provisioner: {}
          f:noobaa-domain: {}
        f:ownerReferences:
          .: {}
          k:{"uid":"e123d2c8-2f9d-4f39-9a83-ede316b8a5fe"}:
            .: {}
            f:apiVersion: {}
            f:blockOwnerDeletion: {}
            f:controller: {}
            f:kind: {}
            f:name: {}
            f:uid: {}
      f:type: {}
    manager: noobaa-operator
    operation: Update
    time: "2021-05-01T00:12:54Z"
  name: obc-schmaustech
  namespace: default
  ownerReferences:
  - apiVersion: objectbucket.io/v1alpha1
    blockOwnerDeletion: true
    controller: true
    kind: ObjectBucketClaim
    name: obc-schmaustech
    uid: e123d2c8-2f9d-4f39-9a83-ede316b8a5fe
  resourceVersion: "4864261"
  selfLink: /api/v1/namespaces/default/secrets/obc-schmaustech
  uid: eda5cd99-dc57-4c7b-acf3-377343d6fef8
type: Opaque

The access and secret keys are base64 encoded, so we need to decode them as we gather them.  As we did with the bucket name and bucket host, we will assign them to variables.  First, let's pull the access key out of the yaml, decode it, assign it to a variable and confirm the variable holds the access key content:

$ AWS_ACCESS_KEY_ID=`oc get secret obc-schmaustech -o yaml|grep -m1 AWS_ACCESS_KEY_ID|cut -d: -f2|tr -d " "| base64 -d`
$ echo $AWS_ACCESS_KEY_ID
Ws6NjFueKwucoehLvUMJ

We will do the same for the secret key and verify again:

$ AWS_SECRET_ACCESS_KEY=`oc get secret obc-schmaustech -o yaml|grep -m1 AWS_SECRET_ACCESS_KEY|cut -d: -f2|tr -d " "| base64 -d`
$ echo $AWS_SECRET_ACCESS_KEY
rN/90Z3cdebo9RKA+ZbPl+uoyfNbai7K49NJDUJj
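
The same jsonpath shortcut works for the secret as well; an equivalent that pulls and decodes each key in one step:

$ AWS_ACCESS_KEY_ID=$(oc get secret obc-schmaustech -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
$ AWS_SECRET_ACCESS_KEY=$(oc get secret obc-schmaustech -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)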

Now that we have our four variables containing the bucket name, bucket host, access key and secret key, we are ready to create the thanos-object-storage resource yaml file needed to start the configuration and deployment of the Red Hat Advanced Cluster Management Observability component.  This file gives the observability service the details of our S3 object storage.   Below is how we can create the file, noting that the shell will substitute the variable values into the resource definition:

$ cat << EOF > ~/thanos-object-storage.yaml
apiVersion: v1
kind: Secret
metadata:
  name: thanos-object-storage
type: Opaque
stringData:
  thanos.yaml: |
    type: s3
    config:
      bucket: $BUCKET_NAME
      endpoint: $BUCKET_HOST
      insecure: false
      access_key: $AWS_ACCESS_KEY_ID
      secret_key: $AWS_SECRET_ACCESS_KEY
      trace:
        enable: true
      http_config:
        insecure_skip_verify: true
EOF
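
Because the heredoc above is unquoted, the shell substitutes our four variables as the file is written.  A quick sanity check that none of the fields came through empty:

$ grep -E 'bucket:|endpoint:|access_key:|secret_key:' ~/thanos-object-storage.yaml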

Once we have the definition created, we can go ahead and create the open-cluster-management-observability namespace:

$ oc create namespace open-cluster-management-observability
namespace/open-cluster-management-observability created

Next we want to capture the cluster's pull-secret into a DOCKER_CONFIG_JSON variable:

$ DOCKER_CONFIG_JSON=`oc extract secret/pull-secret -n openshift-config --to=-`
# .dockerconfigjson
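
That variable holds the contents of the cluster pull-secret, which we use to create the multiclusterhub-operator-pull-secret that our MultiClusterObservability resource will reference later via imagePullSecret.  Per the RHACM documentation:

$ oc create secret generic multiclusterhub-operator-pull-secret \
    -n open-cluster-management-observability \
    --from-literal=.dockerconfigjson="$DOCKER_CONFIG_JSON" \
    --type=kubernetes.io/dockerconfigjson
secret/multiclusterhub-operator-pull-secret created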

At this point we can go ahead and create the thanos-object-storage resource from the yaml file we created:

$ oc create -f thanos-object-storage.yaml -n open-cluster-management-observability
secret/thanos-object-storage created
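
We can quickly verify the secret landed in the correct namespace before proceeding:

$ oc get secret thanos-object-storage -n open-cluster-management-observability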

Once the thanos-object-storage secret is created, we can create a multiclusterobservability resource yaml file like the example below.  Notice that it references the thanos-object-storage secret we created above:

$ cat << EOF > ~/multiclusterobservability_cr.yaml
apiVersion: observability.open-cluster-management.io/v1beta1
kind: MultiClusterObservability
metadata:
  name: observability #Your customized name of MulticlusterObservability CR
spec:
  availabilityConfig: High             # Available values are High or Basic
  imagePullPolicy: Always
  imagePullSecret: multiclusterhub-operator-pull-secret
  observabilityAddonSpec:              # The ObservabilityAddonSpec is the global settings for all managed clusters
    enableMetrics: true                # EnableMetrics indicates the observability addon push metrics to hub server
    interval: 60                       # Interval for the observability addon push metrics to hub server
  retentionResolution1h: 5d            # How long to retain samples of 1 hour in bucket
  retentionResolution5m: 3d
  retentionResolutionRaw: 1d
  storageConfigObject:                 # Specifies the storage to be used by Observability
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml
EOF

We can cat out the file to confirm it looks correct:

$ cat multiclusterobservability_cr.yaml
apiVersion: observability.open-cluster-management.io/v1beta1
kind: MultiClusterObservability
metadata:
  name: observability #Your customized name of MulticlusterObservability CR
spec:
  availabilityConfig: High             # Available values are High or Basic
  imagePullPolicy: Always
  imagePullSecret: multiclusterhub-operator-pull-secret
  observabilityAddonSpec:              # The ObservabilityAddonSpec is the global settings for all managed clusters
    enableMetrics: true                # EnableMetrics indicates the observability addon push metrics to hub server
    interval: 60                       # Interval for the observability addon push metrics to hub server
  retentionResolution1h: 5d            # How long to retain samples of 1 hour in bucket
  retentionResolution5m: 3d
  retentionResolutionRaw: 1d
  storageConfigObject:                 # Specifies the storage to be used by Observability
    metricObjectStorage:
      name: thanos-object-storage
      key: thanos.yaml

At this point we can double check that nothing is running under the open-cluster-management-observability namespace:

$ oc get pods -n open-cluster-management-observability
No resources found in open-cluster-management-observability namespace.


Once we have confirmed there are no resources running, we can apply the multiclusterobservability resource file we created to start the deployment of the observability components:

$ oc apply -f multiclusterobservability_cr.yaml
multiclusterobservability.observability.open-cluster-management.io/observability created
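
While the operator works through the deployment, we can watch the pods come up in real time rather than polling by hand (Ctrl-C exits the watch):

$ oc get pods -n open-cluster-management-observability -w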

It will take a few minutes for the associated pods to come up, but once they do, listing the pods under the open-cluster-management-observability namespace should show something like the following:

$  oc get pods -n open-cluster-management-observability
NAME                                                              READY   STATUS    RESTARTS   AGE
alertmanager-0                                                    2/2     Running   0          97s
alertmanager-1                                                    2/2     Running   0          73s
alertmanager-2                                                    2/2     Running   0          57s
grafana-546fb568b4-bqn22                                          2/2     Running   0          97s
grafana-546fb568b4-hxpcz                                          2/2     Running   0          97s
observability-observatorium-observatorium-api-85cf58bd8d-nlpxf    1/1     Running   0          74s
observability-observatorium-observatorium-api-85cf58bd8d-qtm98    1/1     Running   0          74s
observability-observatorium-thanos-compact-0                      1/1     Running   0          74s
observability-observatorium-thanos-query-58dc8c8ccb-4p6l8         1/1     Running   0          74s
observability-observatorium-thanos-query-58dc8c8ccb-6tmvd         1/1     Running   0          74s
observability-observatorium-thanos-query-frontend-f8869cdf66c2c   1/1     Running   0          74s
observability-observatorium-thanos-query-frontend-f8869cdfstwrg   1/1     Running   0          75s
observability-observatorium-thanos-receive-controller-56c9x6tt5   1/1     Running   0          74s
observability-observatorium-thanos-receive-default-0              1/1     Running   0          74s
observability-observatorium-thanos-receive-default-1              1/1     Running   0          56s
observability-observatorium-thanos-receive-default-2              1/1     Running   0          37s
observability-observatorium-thanos-rule-0                         2/2     Running   0          74s
observability-observatorium-thanos-rule-1                         2/2     Running   0          49s
observability-observatorium-thanos-rule-2                         2/2     Running   0          32s
observability-observatorium-thanos-store-memcached-0              2/2     Running   0          74s
observability-observatorium-thanos-store-memcached-1              2/2     Running   0          70s
observability-observatorium-thanos-store-memcached-2              2/2     Running   0          66s
observability-observatorium-thanos-store-shard-0-0                1/1     Running   0          75s
observability-observatorium-thanos-store-shard-1-0                1/1     Running   0          74s
observability-observatorium-thanos-store-shard-2-0                1/1     Running   0          75s
observatorium-operator-797ddbd9d-kqpm6                            1/1     Running   0          98s
rbac-query-proxy-769b5dbcc5-qprrr                                 1/1     Running   0          85s
rbac-query-proxy-769b5dbcc5-s5rbm                                 1/1     Running   0          91s
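
Before heading to the console, note that the Grafana endpoint can also be retrieved from the CLI via the route the deployment creates; a quick sketch, assuming the default route name of grafana:

$ oc get route grafana -n open-cluster-management-observability -o jsonpath='{.spec.host}'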

With the pods running under the open-cluster-management-observability namespace, we can confirm that the observability service is running by logging into the Red Hat Advanced Cluster Management console and navigating to Observe environments.  In the upper right-hand corner of the screen a Grafana link should now be present, as in the screenshot below:


Once you click on the Grafana link, the following observability dashboard will appear, and it may even already be showing metrics from the cluster collections:


If we click on the CPU metric we can even see a breakdown of what is using the CPU on the local-cluster:


At this point we can conclude that the Red Hat Advanced Cluster Management Observability component is installed successfully and is using the Noobaa S3 object bucket we created.