With the GA of Red Hat OpenShift Container Platform 4.8, the following additional functions are supported on IBM Z and LinuxONE.
What is new for IBM Z on 4.8:
- You can now set up a cluster with only three nodes.
- These nodes are control plane nodes that also run the compute pods, combining both node types.
- To accommodate the additional load of the pods that would run on compute nodes in a 5+ node scenario, the memory footprint of each control plane node is increased to 21 GB, together with 6 vCPUs. This results in the same overall memory footprint as a five-node cluster.
- We successfully tested setting up a three-node cluster with 3 vCPUs and 10 GB of memory per node. This should be seen as a bare minimum; you will need to adjust the resources to your needs.
- Overall, a minimum of 3 IFLs with SMT-2 enabled is required. For a production setup, 6 IFLs with SMT-2 might be more appropriate to provide enough capacity for the workload. As with the minimum, the resources need to be adjusted to the workload's needs.
- Supporting this feature on IBM Z allows a more simplified setup for our customers. Three-node clusters are suited for test/dev environments that run a limited number of containers but require a full cluster. They are also suitable for specialized clusters in edge scenarios. See also this Red Hat blog post: https://www.openshift.com/blog/delivering-a-three-node-architecture-for-edge-deployments
The basic step to use this feature is to ensure that the number of compute replicas is set to 0 in the install-config.yaml file:
- name: worker
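For illustration, the compute section of install-config.yaml could then look like the following sketch (the surrounding fields of your install-config.yaml stay as generated):

```yaml
compute:
- name: worker
  replicas: 0   # no dedicated compute nodes; pods run on the control plane nodes
controlPlane:
  name: master
  replicas: 3
```

With replicas set to 0, the installer marks the control plane nodes as schedulable so that application workloads can run on them.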
- By default, RHOCP uses the internal Elasticsearch log store for its logs if you deploy the cluster logging stack. However, audit data is not stored by default. To forward the audit logs along with application and infrastructure logs to the internal Elasticsearch instance, create a ClusterLogForwarder pipeline with outputRefs: default.
- You can also forward logs to an external third-party system running either on s390x or on x86. Possible (tested) options are Elasticsearch, syslog, Kafka, and Fluentd.
- You can set up a single log store for all your clusters.
- Apart from storing all cluster logs in a single log store, you can also drop or skip logs according to their type. For example, you can:
- Forward only application logs (user-workload logs) to the external log store; the rest of the logs are automatically dropped.
- Forward logs of a specific project/namespace to the external log store; the rest of the logs are automatically dropped.
- Forward audit logs (generated by the Kube and OpenShift APIs) to the external log store; the rest of the logs are automatically dropped.
- Forward infrastructure logs (cluster-specific only) to the external log store; the rest of the logs are automatically dropped.
- You also have the option to forward logs to different external log stores using a single configuration file. For example, one configuration can send:
- audit logs to syslog, infrastructure logs to Kafka, application logs to Fluentd, and a specific project's logs to Elasticsearch
- You can define labels according to the source of logs.
- Logs can be filtered according to labels, for example:
- prod application logs - "app : prod"
- dev application logs - "app : dev"
- infra logs - "cluster-name : infra"
- By forwarding logs to external tools, you can use any log visualization tool that supports log-based data, such as Grafana. As of now, RHOCP comes with Kibana (the default) only.
- Documentation: https://docs.openshift.com/container-platform/4.8/logging/cluster-logging-external.html
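As a sketch of the split described above (audit logs to syslog, infrastructure logs to Kafka, application logs to Fluentd), a ClusterLogForwarder instance could look like the following; the endpoint hostnames, ports, and topic name are placeholders you would replace with your own:

```yaml
apiVersion: logging.openshift.io/v1
kind: ClusterLogForwarder
metadata:
  name: instance
  namespace: openshift-logging
spec:
  outputs:
  - name: rsyslog-audit          # external syslog receiver (placeholder URL)
    type: syslog
    url: tls://syslog.example.com:514
  - name: kafka-infra            # external Kafka broker and topic (placeholder URL)
    type: kafka
    url: tls://kafka.example.com:9093/infra-topic
  - name: fluentd-apps           # external Fluentd via forward protocol (placeholder URL)
    type: fluentdForward
    url: tls://fluentd.example.com:24224
  pipelines:
  - name: audit-logs
    inputRefs: [ audit ]
    outputRefs: [ rsyslog-audit ]
  - name: infra-logs
    inputRefs: [ infrastructure ]
    outputRefs: [ kafka-infra ]
  - name: app-logs
    inputRefs: [ application ]
    outputRefs: [ fluentd-apps ]
    labels:
      app: prod                  # example label, as in the filtering examples above
```

Each pipeline selects one log type via inputRefs and sends it to one or more named outputs; anything not matched by a pipeline is dropped.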
Encrypted etcd
When you enable etcd encryption, a specific set of the data stored in etcd is encrypted: Secrets, ConfigMaps, Routes, OAuth access tokens, and OAuth authorize tokens. On IBM Z, this feature was verified using the CPACF crypto facility, so the encryption is accelerated by the hardware implementation.
Link Documentation: https://docs.openshift.com/container-platform/4.8/security/encrypting-etcd.html
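Following the linked documentation, encryption is enabled by setting the encryption type on the cluster's APIServer resource (for example via oc edit apiserver); a minimal sketch of the relevant part of the resource:

```yaml
apiVersion: config.openshift.io/v1
kind: APIServer
metadata:
  name: cluster
spec:
  encryption:
    type: aescbc   # AES-CBC encryption; on IBM Z this benefits from CPACF acceleration
```

After the change, the API server operators re-encrypt the affected resources in the background; the linked documentation describes how to verify that encryption has completed.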
4K FCP support
With the updated version of Red Hat Enterprise Linux CoreOS (RHCOS), you can now use 4K FCP block devices as the root file system, which allows you to use the native 4K block size of flash storage servers.
RHOCP installation based on KVM with static IPs and disconnected installs:
- Fully connected installation:
- Disconnected installation, including fast-track and full installation:
Passthrough for DASD on KVM (Tech Preview)
With RHEL 8.4, DASD passthrough for KVM was included but not yet supported. Verification has shown that DASD passthrough works as expected. For RHOCP, the disk appears in the KVM guest as a pure ECKD device and installs accordingly.
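On KVM, DASD passthrough is typically configured through a vfio-ccw mediated device attached to the guest. A hedged sketch of the corresponding libvirt domain XML fragment follows; the mdev UUID and the guest device number are placeholders for values created on your host:

```xml
<hostdev mode='subsystem' type='mdev' model='vfio-ccw'>
  <source>
    <!-- UUID of the vfio-ccw mediated device created on the host (placeholder) -->
    <address uuid='11111111-2222-3333-4444-555555555555'/>
  </source>
  <!-- Device number under which the DASD appears inside the guest (placeholder) -->
  <address type='ccw' cssid='0xfe' ssid='0x0' devno='0x0009'/>
</hostdev>
```

Inside the guest, the passed-through device then shows up as a regular ECKD DASD at the given device number.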
- Fixed systemd unit ordering/dependencies.
- Fixed installation on DM devices (mpatha, dm-N).
- Fixed an issue with finding the kernel config on s390x.
- Due to a problem in the initrd, booting on z15 was impacted and dfltcc=off was needed in the parm line. This is no longer required, and the hardware compression can be used to unpack the initrd.
- KVM installs on virtual disks backed by DASD are now supported. This was previously not possible and has been fixed.