Authored by: Ayana Rukasar, Jitendra Singh, Rishika Kedia, Holger Wolf, Niha Tahoor M
Introduction
This guide provides a step-by-step walkthrough for deploying LokiStack for logging in Red Hat® OpenShift® clusters on IBM Z® and IBM® LinuxONE, specifically for users working with logging versions greater than 6.0. It covers the setup of the required components, including operators, storage classes, secrets, and log forwarding. Follow these instructions carefully to successfully deploy and configure LokiStack in your Red Hat OpenShift environment on IBM Z and IBM LinuxONE.
Prerequisites:
Before starting, ensure the following components are installed:
1. Loki Operator – Installed through the Red Hat OpenShift OperatorHub.
2. Logging Operator – For managing the Red Hat OpenShift logging resources.
3. Local Storage Operator – For configuring local storage classes.
4. Cluster Observability Operator (COO) – For managing log and metrics collection and visualization.
5. Access to a Loki-compatible storage backend like MinIO or Amazon S3.
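Before starting the installation, it can also help to confirm that your oc CLI session is logged in to the target cluster with sufficient privileges. A minimal sanity check, assuming the oc client is already configured against your cluster:
oc whoami
oc get clusterversion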
1 Install Loki and Logging Operators on Red Hat OpenShift:
To begin the installation, go to the Red Hat OpenShift web console and navigate to Operators > OperatorHub.

1.1 Install Loki Operator:
· Search for Loki Operator.
· Click Install.
· Go to Installed Operators and verify Loki Operator is listed.
1.2 Install Logging Operator:
· Search for Logging Operator.
· Click Install.
· Go to Installed Operators and verify Logging Operator is listed.
Note: In the Installed Operators section, you will see the Cluster Log Forwarder (CLF) option. We will use this later in the guide to configure log forwarding to Loki.
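If you prefer to verify the installations from the CLI instead of the console, one option is to list the installed ClusterServiceVersions across all namespaces (the exact CSV names and namespaces depend on the installation options you chose):
oc get csv -A | grep -iE 'loki|logging'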
1.3 Types of logs:
In Red Hat OpenShift, logs are categorized into three main types:
1. Application Logs: These are generated by the user applications running in the cluster, but they exclude logs from infrastructure container applications.
2. Infrastructure Logs: These logs come from infrastructure namespaces, that is, namespaces whose names begin with openshift or kube, plus the default namespace. They also include journald messages from nodes.
3. Audit Logs: These logs are created by the auditd system, which tracks activity on the node. They also include logs from services such as kube-apiserver, openshift-apiserver, and the OVN project, if enabled. An example of viewing these logs directly on a node follows this list.
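The API server audit logs can be viewed directly from the control plane nodes with oc adm node-logs; <node-name> below is a placeholder for one of your master nodes:
oc adm node-logs --role=master --path=kube-apiserver/
oc adm node-logs <node-name> --path=kube-apiserver/audit.log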
2 Set Up Storage Class:
The Local Storage Operator (LSO) handles local storage in Red Hat OpenShift by automatically creating and managing Persistent Volumes (PVs) from disks attached directly to worker nodes; a minimal example of the LocalVolume resource it uses follows the installation steps below.
2.1 Install Local Storage Operator:
· Search for LocalStorage and install it with default settings.
· Click Install.
· Go to Installed Operators and verify Local Storage Operator is listed.
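For reference, this is a minimal sketch of the LocalVolume resource that the LSO consumes to turn local disks into PVs; the storage class name (local-sc) and device path (/dev/vdb) are illustrative assumptions and must be adjusted to your environment:
apiVersion: local.storage.openshift.io/v1
kind: LocalVolume
metadata:
  name: local-disks
  namespace: openshift-local-storage
spec:
  storageClassDevices:
    - storageClassName: local-sc
      volumeMode: Filesystem
      fsType: ext4
      devicePaths:
        - /dev/vdb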
2.2 Deploy an NFS Provisioner for LokiStack in Red Hat OpenShift:
1. Set up an NFS provisioner in Red Hat OpenShift: here
2. Run the script: ./nfs.sh
2.3 Verify Persistent Volumes (PVs):
· Check if the PVs are created:
oc get pv
· Verify that the PVs are in the Available state and associated with the NFS Storage Class.
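You can also confirm that the NFS storage class exists; the LokiStack configuration later in this guide assumes it is named nfs:
oc get storageclass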
3 Create the Loki Secret:
Automated MinIO Setup: here
1. Now it's time to set up the object storage for Loki, which could be Amazon S3, MinIO, GCP, or another similar service. To do this, you need to create a secret. You can either do this through the Red Hat OpenShift web console or by directly applying the YAML configuration (a declarative YAML sketch is shown after these steps).
2. Make sure to update the access_key_id and access_key_secret fields with your credentials. Also, customize the bucketnames, endpoint, and region fields to match your object storage location.
3. Open the `create-loki-secret.sh` file and update it with your MinIO credentials:
oc create secret generic credsecret -n openshift-logging \
--from-literal=endpoint="xxx.xxx.xxx.xxx:xxxx" \
--from-literal=region="us-east-1" \
--from-literal=bucketnames="loki" \
--from-literal=access_key_id="XXXXXXXXXXXXXXXXXXXXXXXX" \
--from-literal=access_key_secret="XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
4. Verify the secret:
oc get secret credsecret
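If you prefer the declarative route mentioned above, the same secret can be written as YAML and applied with oc apply -f. A minimal sketch using the same placeholder values as the command above (stringData lets you supply the values in plain text; Red Hat OpenShift encodes them on creation):
apiVersion: v1
kind: Secret
metadata:
  name: credsecret
  namespace: openshift-logging
stringData:
  endpoint: "xxx.xxx.xxx.xxx:xxxx"
  region: "us-east-1"
  bucketnames: "loki"
  access_key_id: "XXXXXXXXXXXXXXXXXXXXXXXX"
  access_key_secret: "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"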
4 Deploy LokiStack
LokiStack is the logging system that collects, stores, and manages logs in Red Hat OpenShift. These steps will set up LokiStack and connect it to MinIO for storage.
4.1 Set Up LokiStack:
1. The loki-instance.yaml file sets up and deploys LokiStack. It also configures Loki to store logs in MinIO using the previously created secret (credsecret).
apiVersion: loki.grafana.com/v1
kind: LokiStack
metadata:
  name: lokistack-sample
  namespace: openshift-logging
spec:
  managementState: Managed
  replicationFactor: 1
  size: 1x.demo
  storage:
    secret:
      name: credsecret
      type: s3
  storageClassName: nfs
  tenants:
    mode: openshift-logging
2. Apply the configuration:
oc apply -f loki-instance.yaml
3. Verify the LokiStack pods (if any are stuck in the Pending state, edit the configuration):
oc get pod
Note: Editing the LokiStack configuration is only required if modifications are needed (e.g., for adjusting storage settings or log retention). If no changes are necessary, this step can be skipped.
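Beyond the pod list, the LokiStack resource itself reports readiness through its status conditions, which can help explain Pending pods (for example, an unbound PVC or an unreadable storage secret):
oc get lokistack lokistack-sample -n openshift-logging
oc describe lokistack lokistack-sample -n openshift-logging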
5 Set Up ServiceAccount and Permissions
The log collector needs permission to gather logs from the Red Hat OpenShift cluster and write them to Loki. These steps create a ServiceAccount and give it the necessary access for log collection.
5.1 Create a ServiceAccount:
· Creates a ServiceAccount named logcollector inside the openshift-logging project.
· This account is used by the log collector to write logs to LokiStack.
oc project openshift-logging
oc create sa logcollector
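To confirm that the ServiceAccount was created before granting permissions:
oc get sa logcollector -n openshift-logging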
5.2 Grant Permissions:
Assigns cluster roles to logcollector so it can collect logs:
· Application logs
· Infrastructure logs
· Audit logs
· Write permissions for the logging collector
oc adm policy add-cluster-role-to-user collect-application-logs -z logcollector
oc adm policy add-cluster-role-to-user collect-infrastructure-logs -z logcollector
oc adm policy add-cluster-role-to-user collect-audit-logs -z logcollector
oc adm policy add-cluster-role-to-user logging-collector-logs-writer -z logcollector
Note: Without these permissions, the collector cannot gather logs from Red Hat OpenShift or write them to Loki. This step ensures that the collector has proper access to gather and store logs efficiently.
6 Create a Cluster Log Forwarder
The Cluster Log Forwarder (CLF) tells Red Hat OpenShift where to send logs, which in this case is Loki. This step sets up Red Hat OpenShift to forward logs to Loki.
6.1 Configure Cluster Log Forwarder:
· Go to the Red Hat OpenShift console → Operators > Installed Operators.
· Click on Logging Operator and open the Cluster Log Forwarder tab.
· Click Create ClusterLogForwarder.
· Switch to YAML View and paste the given YAML.
apiVersion: observability.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: collector
  namespace: openshift-logging
spec:
  collector:
    resources: {}
  outputs:
    - lokiStack:
        authentication:
          token:
            from: serviceAccount
        target:
          name: lokistack-sample
          namespace: openshift-logging
      name: lokistack
      tls:
        ca:
          configMapName: openshift-service-ca.crt
          key: service-ca.crt
      type: lokiStack
  pipelines:
    - inputRefs:
        - application
        - audit
        - infrastructure
      name: default-lokistack
      outputRefs:
        - lokistack
  serviceAccount:
    name: logcollector
· Click on Create
6.2 Check Pods (if any are in the Pending state, wait until they reach the Running state):
oc get pod
Note: Without a Cluster Log Forwarder, Red Hat OpenShift won't send logs to Loki. This step ensures logs are automatically collected and stored in LokiStack for monitoring and analysis.
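You can also inspect the ClusterLogForwarder resource itself; its status conditions indicate whether the configuration was validated and the collector was deployed:
oc get clusterlogforwarder collector -n openshift-logging -o yaml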
7 Deploy Cluster Observability Operator (COO)
7.1 Install COO:
· Install Cluster Observability Operator from the OperatorHub with default settings.
· Click Install.
· Go to Installed Operators and verify the Cluster Observability Operator is listed.
7.2 Create the COO Logging UI Plugin:
· Enables a Logging UI plugin that allows viewing and managing logs from the Red Hat OpenShift web console.
· This allows you to access logs from LokiStack directly within the Red Hat OpenShift console.
How to enable:
· Apply the provided YAML to enable the Logging UI plugin.
oc apply -f - <<EOF
apiVersion: observability.openshift.io/v1alpha1
kind: UIPlugin
metadata:
  name: logging
spec:
  type: Logging
  logging:
    logsLimit: 10
    lokiStack:
      name: lokistack-sample
    timeout: 5m
EOF
· After applying, click on Web Console Refresh when prompted.
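To confirm that the plugin resource was created and registered with the console, you can check the UIPlugin and the console operator's plugin list (the logging plugin should appear there once the console refreshes):
oc get uiplugin logging
oc get consoles.operator.openshift.io cluster -o jsonpath='{.spec.plugins}'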
8 Verify Log Visualization
This step ensures that logs are properly being collected and displayed in the Red Hat OpenShift console.
8.1 Check Logs in Red Hat OpenShift Console:
· Open the Logs tab under Observe in the Red Hat OpenShift console.
· Verify that the logs collected by Loki are visible and can be queried in the web interface, for example with a query like the one sketched below.
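In the query field of the Logs tab you can run LogQL queries against the collected streams. As a starting point, assuming the collector applies the usual log_type stream label to distinguish the three log categories described earlier:
{ log_type="application" } | json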
With these steps, you should be able to successfully transition to Loki for logging in your Red Hat OpenShift cluster, enabling better log aggregation and visualization with the Cluster Observability Operator (COO).
Conclusion:
By transitioning to Loki for logging in Red Hat OpenShift on IBM Z and IBM® LinuxONE, you can achieve a more efficient and scalable log management solution. This guide walked you through the installation and configuration of Loki, Logging Operator, Local Storage Operator, and Cluster Observability Operator. With Loki connected to MinIO or Amazon S3 for storage, and logs forwarded via the Cluster Log Forwarder, you now have a robust logging system in place. The COO further enhances log visualization, making it easier to monitor and analyze logs in your Red Hat OpenShift environment on IBM Z and IBM® LinuxONE.