Open Source for IBM Z and LinuxONE

Your one-stop destination to share, collaborate, and learn about the latest innovations in open source software for the s390x processor architecture (IBM Z and LinuxONE)


What is New in Red Hat OpenShift 4.8 on Z and LinuxONE

By Holger Wolf posted Wed August 04, 2021 09:03 AM

  
With the GA of Red Hat OpenShift Container Platform 4.8, the following additional functions are supported on IBM Z and LinuxONE.

What is New for Z on 4.8:

Three-node cluster

The basic step to enable this feature is to ensure that the number of compute replicas is set to 0 in the install-config.yaml file:

compute:
- name: worker
  platform: {}
  replicas: 0   # no dedicated compute nodes; the control plane nodes become schedulable
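For orientation, a minimal install-config.yaml excerpt for a three-node cluster could look like the sketch below. The controlPlane stanza mirrors the standard installer layout; the base domain and cluster name are illustrative placeholders, not values from this post.

apiVersion: v1
baseDomain: example.com            # placeholder
metadata:
  name: three-node-cluster         # placeholder
compute:
- name: worker
  platform: {}
  replicas: 0                      # no dedicated workers
controlPlane:
  name: master
  platform: {}
  replicas: 3                      # the three nodes run both control plane and workloads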



Log Forwarding


  • By default, RHOCP uses the internal Elasticsearch log store for its logs if you deploy the cluster logging stack. However, audit data is not stored by default. To forward the audit logs along with application and infrastructure logs to the internal Elasticsearch instance, create a ClusterLogForwarder pipeline with outputRefs: default.
  • You can also forward logs to an external third-party system running either on s390x or on x86. Possible (tested) options are Elasticsearch, syslog, Kafka, and Fluentd. 
  • You can set up a single log store for all your clusters.
  • Apart from storing all cluster logs in a single log store, you can also drop or skip logs according to their type. For example, you can: 
    • Forward only application logs (user-workload logs) to the external log store; the remaining logs are dropped.
    • Forward logs of a specific project/namespace to the external log store; the remaining logs are dropped.
    • Forward audit logs (generated by the Kube and OpenShift APIs) to the external log store; the remaining logs are dropped.
    • Forward infrastructure logs (cluster-specific only) to the external log store; the remaining logs are dropped.
  • You also have the option to forward logs to different external log stores using a single configuration file (see the sketch after this list). For example, a single configuration can send: 
    • audit logs to syslog, infrastructure logs to Kafka, application logs to Fluentd, and a specific project's logs to Elasticsearch
  • You can define labels according to the source of logs.
    • Logs can be filtered according to labels, for example: 
      • prod application logs - "app : prod"
      • dev application logs - "app : dev"
      • infra logs - "cluster-name : infra"
  • By forwarding logs to external tools, you can use any log visualization tool that supports log-based data, such as Grafana. As of now, RHOCP comes with Kibana (the default) only. 
  • Documentation: https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-external.html
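As a sketch of such a configuration, the ClusterLogForwarder instance below keeps audit logs in the internal (default) store, labels application logs and sends them to an external Elasticsearch, and routes infrastructure logs to Kafka. The endpoint URLs and the label value are placeholder assumptions:

apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: external-es
    type: elasticsearch
    url: https://elasticsearch.example.com:9200    # placeholder endpoint
  - name: infra-kafka
    type: kafka
    url: tls://kafka.example.com:9093/infra-topic  # placeholder endpoint and topic
  pipelines:
  - name: audit-to-internal
    inputRefs: [ audit ]
    outputRefs: [ default ]        # internal Elasticsearch log store
  - name: apps-to-es
    inputRefs: [ application ]
    outputRefs: [ external-es ]
    labels:
      app: prod                    # label used for later filtering
  - name: infra-to-kafka
    inputRefs: [ infrastructure ]
    outputRefs: [ infra-kafka ]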

Encrypted ETCD


When you enable etcd encryption, a specific set of the data stored in etcd is encrypted: Secrets, Config Maps, Routes, OAuth access tokens, and OAuth authorized tokens. On IBM Z this feature was verified using the CPACF crypto facility, so the encryption is accelerated by a hardware implementation.
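Enabling it is a small change to the cluster's APIServer resource (for example, via oc edit apiserver). A minimal sketch of the relevant stanza, with aescbc as the encryption type supported in 4.8:

apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc    # AES-CBC encryption for the resource types listed above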

Documentation: https://docs.openshift.com/container-platform/4.8/security/encrypting-etcd.html 

4K FCP support


With the updated version of Red Hat CoreOS, you can now use 4K FCP block devices as root filesystems, which allows the native 4K block size of flash storage servers to be used. 

Updated Documentation 


RHOCP installation based on KVM, covering static IP configuration and disconnected installs: 

Hypervisor

Passthrough for DASD on KVM (Tech Preview)


With RHEL 8.4, DASD passthrough for KVM was included but not yet supported. Verification has shown that DASD passthrough is working as expected. For RHOCP, the disk appears in the KVM guest as a pure ECKD device and the installation proceeds accordingly. 

Fixes

Multipath 


  • Fixed systemd unit ordering/dependencies. 
  • Fixed installation on DM devices (mpatha, dm-N).

NFD


  • Fixed an issue with finding the kernel config on s390x.

CoreOS


  • Due to a problem in the initrd, boot on z15 was impacted and dfltcc=off was needed in the parmline. This is no longer required, and the hardware compression can be used to unpack the initrd.
  • Installation on KVM virtual disks backed by DASD was previously not possible; this is now fixed.