Enabling logging in Cloud Pak for Integration 2021.1

By Julian Clinton posted Wed April 14, 2021 02:00 PM

In this blog, we will describe how to enable logging for IBM Cloud Pak for Integration (CP4I) 2021.1. In the example, we are using a Red Hat OpenShift Kubernetes Service (ROKS) cluster; however, these steps are applicable to other OpenShift environments.

Prerequisites

  • You need the OpenShift CLI (the oc command) installed on your local machine. See Getting started with the OpenShift CLI.
  • You are logged into your OpenShift cluster as a user with cluster administration privileges, i.e. the cluster-admin role.

We will also assume you have installed Cloud Pak for Integration (CP4I).
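As a quick sanity check, the prerequisites above can be verified from the command line. This is just a sketch, assuming oc is on your PATH and you are already logged in:

```shell
# Confirm the oc CLI is installed and which user you are logged in as
oc version --client
oc whoami

# Check for cluster-admin-level privileges (prints "yes" if you have them)
oc auth can-i '*' '*' --all-namespaces
```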

Finally, we will use an IBM MQ instance to test the logging. You should:

  • install the MQ operator as described here
  • create an MQ instance as described either here or here

Once the MQ instance has been created, you should be able to see it on the CP4I home page:
CP4I homepage showing an MQ instance
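The MQ operator represents queue managers with a QueueManager custom resource, so you can also confirm the instance exists from the CLI. The cp4i namespace below is an assumption; substitute the namespace where you created your instance:

```shell
# List MQ queue managers; replace "cp4i" with your instance's namespace
oc get queuemanagers -n cp4i
```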


Click on the menu button in the top-left corner and then browse to Integration runtimes:
CP4I homepage navigation menu showing Integration runtimes

You should then see a resource table showing your MQ instance:
Integration runtimes resource table showing MQ instance


You should see that the "Logging" menu option in the resource table is disabled:

The following section describes how to install logging, which in turn enables this menu option.

Installing Logging

To install and enable logging, we will be following the OpenShift 4.6 instructions. This can be done either through the OpenShift Web Console or the OpenShift CLI.

The installation process in the instructions has three steps. We won't repeat the instructions here, but we will verify each step once you have completed it.
Step 1: Installing the Elasticsearch Operator
Important: make sure you check the box to Enable operator recommended cluster monitoring on this namespace:

Once you have followed step 1, verify the correct openshift.io/cluster-monitoring=true label has been applied to the openshift-operators-redhat namespace. You can do this via the OpenShift Console sidebar Home -> Projects and selecting the openshift-operators-redhat namespace:

You can also do this via the CLI with:
oc describe namespace openshift-operators-redhat
and checking the "Labels":
Name:    openshift-operators-redhat
Labels:  olm.operatorgroup.uid/7434e6f2-264c-40dc-a89c-f35f80467856=
         openshift.io/cluster-monitoring=true
...
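If the label turns out to be missing (for example, because the checkbox was not ticked during installation), it can be read and applied directly from the CLI. The jsonpath query is simply an alternative to scanning the describe output:

```shell
# Print just the cluster-monitoring label (empty output means it is not set)
oc get namespace openshift-operators-redhat \
  -o jsonpath='{.metadata.labels.openshift\.io/cluster-monitoring}'

# Apply the label if it is missing
oc label namespace openshift-operators-redhat openshift.io/cluster-monitoring=true
```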

Step 2: Installing the Cluster Logging Operator
Important: as with the previous step, make sure you check the box to Enable operator recommended cluster monitoring on this namespace

Once you have followed step 2, verify the correct openshift.io/cluster-monitoring=true label has been applied to the openshift-logging namespace. As before, you can do this via the OpenShift Console sidebar Home -> Projects and selecting the openshift-logging namespace:

or via the CLI with:
oc describe namespace openshift-logging

and checking the "Labels":
Name:    openshift-logging
Labels:  olm.operatorgroup.uid/7434e6f2-264c-40dc-a89c-f35f80467856=
         olm.operatorgroup.uid/80111da1-08cf-4043-b9b2-e71488852822=
         openshift.io/cluster-monitoring=true
...

Step 3: Creating a Cluster Logging instance
Step 3 in the OpenShift logging instructions contains an example custom resource (CR) YAML file. The CR YAML we're using to create our logging instance is based on that and is shown below:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
  name: "instance"
  namespace: "openshift-logging"
spec:
  managementState: "Managed"
  logStore:
    type: "elasticsearch"
    retentionPolicy:
      application:
        maxAge: 1d
      infra:
        maxAge: 7d
      audit:
        maxAge: 7d
    elasticsearch:
      nodeCount: 2
      storage:
        storageClassName: "ibmc-block-gold"
        size: 20G
      resources:
        requests:
          memory: "8Gi"
      proxy:
        resources:
          limits:
            memory: 256Mi
          requests:
            memory: 256Mi
      redundancyPolicy: "SingleRedundancy"
  visualization:
    type: "kibana"
    kibana:
      replicas: 1
  curation:
    type: "curator"
    curator:
      schedule: "30 3 * * *"
  collection:
    logs:
      type: "fluentd"
      fluentd: {}

Note that we have made the following changes to the example in the OpenShift documentation:
  • we have chosen ibmc-block-gold as the storageClassName because it is available on our ROKS cluster. If needed, replace this with an appropriate block storage class for your cluster; for example, on AWS you may want to use gp2
  • we have reduced the storage size from 200G to 20G to limit resource usage in this demo
  • we have reduced nodeCount from 3 to 2, again to limit resource usage
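If you prefer the CLI to the web console for this step, the CR can be saved to a file and applied directly. The filename cluster-logging.yaml is our choice here, not a required name:

```shell
# Save the ClusterLogging CR shown above as cluster-logging.yaml, then:
oc apply -f cluster-logging.yaml

# Confirm the instance was created
oc get clusterlogging instance -n openshift-logging
```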

Confirming Logging Availability

It can take several minutes for the logging instance to come up. You can check on progress with:
oc get deployment -n openshift-logging
You may see:
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
cluster-logging-operator       1/1     1            1           53m
elasticsearch-cdm-w3tqa1bo-1   0/1     1            0           83s
elasticsearch-cdm-w3tqa1bo-2   0/1     1            0           81s
kibana                         1/1     1            1           79s

This shows that neither of the two Elasticsearch deployments we requested is currently available.

After a few minutes, you should eventually see that they are all ready:
NAME                           READY   UP-TO-DATE   AVAILABLE   AGE
cluster-logging-operator       1/1     1            1           56m
elasticsearch-cdm-w3tqa1bo-1   1/1     1            1           3m52s
elasticsearch-cdm-w3tqa1bo-2   1/1     1            1           3m50s
kibana                         1/1     1            1           3m48s
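You can also watch the rollout from the CLI, and find the Kibana UI's URL once everything is up. This assumes the default route name created by the logging operator:

```shell
# Watch the deployments come up (Ctrl+C to stop)
oc get deployment -n openshift-logging -w

# The Kibana UI is exposed as a route named "kibana" in openshift-logging
oc get route kibana -n openshift-logging -o jsonpath='{.spec.host}'
```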

Once these are ready, you should see that the "Logging" menu option in the Integration runtimes resource table is now enabled:


When you click the "Logging" menu option, you will be able to view the MQ instance's logs:

Note that it can take a few minutes for logs to come through.

#IBMCloudPakforIntegration(ICP4I)
#Openshift
#logging