
Using z/OS NFS Server as shared persistent storage with zCX for Red Hat OpenShift

By DEEPTI NAPHADE posted Mon October 14, 2024 12:18 PM

  

Introduction: 

The Red Hat OpenShift Container Platform supports Network File System (NFS) for remote shared persistent storage volumes. This capability allows remote hosts to mount file systems over a network and interact with them as though they were mounted locally. Containerized applications running inside a zCX for OpenShift cluster can now leverage file systems exported by the z/OS NFS server to store and share stateful application data. 

By leveraging the z/OS NFS server to persist data, containerized applications deployed inside a zCX for Red Hat OpenShift cluster can take advantage of z/OS security controls and operational benefits to securely store, access, back up, and restore application data. 

Additionally, clients can extend z/OS qualities of service to the persisted data by integrating its management into existing z/OS operational procedures. Using this method, existing z/OS data can also be exported and shared with containerized applications running inside the zCX for Red Hat OpenShift cluster without duplicating the data.

For additional information, please see https://www.ibm.com/docs/en/zos/2.5.0?topic=pz-using-zos-nfs-server-as-persistent-storage-zcx 

Overview: 

Figure 1 shows an Application Transparent Transport Layer Security (AT-TLS) aware z/OS NFS Server (zNFSS) setup on a z/OS LPAR. The z/OS NFS server is made AT-TLS aware by applying APAR OA62357 and using the NFSv4 protocol. Additionally, configure the AT-TLS policy on the z/OS NFS server host to allow traffic from the DVIPA IP addresses of all the zCX for OpenShift cluster compute nodes. 

The HAPROXY container image is used in this example to enable the TLS configuration on the client side. Obtain the s390x/haproxy container image from the IBM Z and LinuxONE Container Registry (https://ibm.github.io/ibm-z-oss-hub/main/main.html), then configure and deploy it on the zCX for OpenShift cluster where you plan to run containerized workloads that need access to data sets exported by the z/OS NFS server.

The HAPROXY, which routes the encrypted traffic to the z/OS NFS server, runs as a container on the OpenShift Container Platform in its own project. The HAPROXY pods expose all the ports (2043-2049) used by the z/OS NFS server. An OpenShift 'Service' of type ClusterIP is set up for the haproxy container (marked as 'nfs-server' in Figure 1). This 'Service' is not exposed externally (no route is set up) and is therefore only accessible from within the cluster for internal network communication. The service acts as a load balancer for the HAPROXY pods and maps the service ports (2043-2049) to the haproxy container ports (2043-2049). 

The haproxy configuration file specifies the backend as the z/OS NFS server host IP address (the z/OS IP address). It also specifies that the z/OS NFS server's certificate is verified against the certificate-authority (CA) certificate, and that the client certificate is presented for authentication. The certificates are added as Red Hat OpenShift cluster secrets inside the haproxy project so that they are accessible to the haproxy container. See the sample haproxy.cfg file in Fig 5 below. 

The OpenShift administrator creates Persistent Volumes (PVs) of type 'nfs', with the server set to the ClusterIP of the haproxy service and with a storage class name. 

The application workload using the external NFS persistent storage runs in its own project. The Persistent Volume Claim (PVC) specifies the storage class to match the PV. When the application container is started, the mount request to the haproxy service ClusterIP is routed by the haproxy service to a haproxy instance, which then routes it to the NFS server IP based on haproxy.cfg. 

For containerized OpenShift workloads to securely access NFS resources on z/OS, the z/OS NFS server exploits the AT-TLS support provided by z/OS Communications Server. With the NFS server being AT-TLS aware, the AT-TLS policy on the NFS server host requires that the NFS client (the zCX for OpenShift compute node) present a certificate owned by a z/OS user ID. That certificate is then used to perform an implicit mvslogin for each container user ID trying to access NFS data from each zCX for OpenShift compute node. HAProxy is used to enable TLS on the client side of the connection (the zCX for OpenShift compute node). 

This solution not only simplifies authentication of multiple container user IDs accessing NFS data, but also provides the additional benefit of an encrypted data connection between the zCX for OpenShift compute nodes and the NFS server host.

Fig 1: OCP workload using z/OS NFS external storage, running on the same node as the haproxy, with an AT-TLS aware z/OS NFS server. 

Steps:

Obtain the icr.io access token 

Log in to IBM Container Registry (icr.io) so that you can pull the s390x HAProxy image (see the sketch after the following list). 

  • Log in to cloud.ibm.com
  • Navigate through Manage > Access (IAM) and view “My IBM Cloud API Keys” in the bottom-center of the page to obtain your API key
  • Issue $ docker login icr.io -u iamapikey -p your_api_key
  • Verify Login Succeeded is returned from the docker login command 
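A minimal sketch of the login and image pull (the HAProxy image name and tag match the Dockerfile shown later in this blog; substitute your own API key):

# Log in to the IBM Container Registry with your IBM Cloud API key
docker login icr.io -u iamapikey -p your_api_key
# Pull the s390x HAProxy base image used later in the custom image build
docker pull icr.io/ibmz/haproxy:3.0.0-bookworm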

z/OS NFS Server requirements: 

  • This support requires APAR OA62357 to make the z/OS NFS server an AT-TLS-aware application. The AT-TLS support for the z/OS NFS server supports only the NFSv4 protocol. 

  • Ensure the z/OS NFS server has a security attribute setting of safexp. IBM has not tested and does not recommend other NFS security attributes. 

  • Export all directories to be mounted inside the container. 

z/OS NFS server host system configuration: 

The TCP/IP stack on z/OS must have AT-TLS enabled. Specify the TTLS parameter on the TCPCONFIG statement in PROFILE.TCPIP. 

z/OS NFS CERTAUTH and Server Certificate generation: 

Do the following: 

1.     Create or obtain a CERTAUTH certificate with KEYUSAGE(CERTSIGN).  

2.     Create a personal certificate owned by the z/OS NFS server ID with KEYUSAGE(HANDSHAKE DATAENCRYPT). Sign this certificate with the CERTAUTH from step 1. 

3.     Create a keyring owned by the z/OS NFS server ID. 

4.     Connect the CERTAUTH certificate from step 1 to the keyring from step 3. 

5.     Connect the personal certificate from step 2 to the keyring from step 3 with a value of DEFAULT. 

6.     Export the CERTAUTH certificate from step 1 to a sequential data set with FORMAT(CERTB64). This data set will be made available, through an OpenShift secret, inside the HAPROXY container and is used to enable the TLS connection to the z/OS NFS server. 

7.     Create a personal certificate for the user ID that will be authenticating to the z/OS NFS server as a containerized workload running on cluster built using zCX for OpenShift. This ID should only have access to the datasets and directories exported by the z/OS NFS server that are intended for access within containerized workloads.  

8.     Export the personal certificate created in the previous step to a sequential data set with FORMAT(PKCS12DER). This certificate will be made available, through an OpenShift secret, inside the HAPROXY container that accesses the z/OS NFS server exported file system, and is used for both authentication and authorization. 

9.     Refresh the DIGTCERT and DIGTRING classes. 

 

RACDCERT CERTAUTH GENCERT + 
           SUBJECTSDN( + 
             CN('CA Cert for zCX HAPROXY') ) + 
           SIZE(2048) + 
           WITHLABEL('CA Cert for zCX HAPROXY') + 
           KEYUSAGE(CERTSIGN) 
 
  RACDCERT ID(MVSNFS8) GENCERT + 
           SUBJECTSDN( + 
             CN('NFS Server Cert') ) + 
           SIZE(2048) + 
           WITHLABEL('NFS Server Cert') + 
           KEYUSAGE(HANDSHAKE DATAENCRYPT) + 
     SIGNWITH(CERTAUTH LABEL('CA Cert for zCX HAPROXY') ) 
 
  RACDCERT CERTAUTH + 
           EXPORT(LABEL('CA Cert for zCX HAPROXY')) + 
           DSN('DNAPH.ZCX.HAPROX6.CERTAUTH') + 
           FORMAT(CERTB64) 
 
  RACDCERT ID(MVSNFS8) ADDRING(ZCX-HAPROXY-RING) 
 
  RACDCERT ID(MVSNFS8) + 
     CONNECT(CERTAUTH LABEL('CA Cert for zCX HAPROXY') + 
           RING(ZCX-HAPROXY-RING) ) 
 
  RACDCERT ID(MVSNFS8) + 
           CONNECT(LABEL('NFS Server Cert') + 
           RING(ZCX-HAPROXY-RING) + 
           DEFAULT ) 
  SETROPTS RACLIST(DIGTCERT DIGTRING) REFRESH 
/* 

 

Figure 2 shows sample JCL to set up the CERTAUTH certificate and the z/OS NFS server certificate. It assumes that the z/OS NFS server runs under the user ID MVSNFS8. Use the same keyring and certificate labels when setting up the AT-TLS policy. 

 

RACDCERT ID(NFSTST) GENCERT +                                     
           SUBJECTSDN( +                                          
             CN('NFSTST Private Cert') ) +                         
           SIZE(2048) +                                           
           RSA +                                                  
           WITHLABEL('NFSTST Private Cert') +                      
      SIGNWITH(CERTAUTH LABEL('CA Cert for zCX HAPROXY') )        
                                                                  
  RACDCERT ID(NFSTST) +                                            
           EXPORT(LABEL('NFSTST Private Cert') ) +                 
           DSN('DNAPH.ZCX.DNAP4.P12DER') +                        
           FORMAT(PKCS12DER) +                                    
           PASSWORD('test12')                                   
                                                                  
  SETROPTS RACLIST(DIGTCERT DIGTRING) REFRESH                     
/*  

 

Figure 3 shows sample JCL to generate a client certificate signed by the CERTAUTH certificate created in Figure 2. 

 

Using the above JCL, the certificates were exported to the following data sets: 

 

 
DNAPH.ZCX.HAPROX6.CERTAUTH 
DNAPH.ZCX.DNAP4.P12DER 


 

From the z/OS NFS Server system: 

DNAPH@NP8:/home/dnaph/certs>sftp dnaph@localhost 
Connected to localhost. 
 
sftp> !cp "//'DNAPH.ZCX.DNAP4.P12DER'" client-cert.der 
sftp> !cp "//'DNAPH.ZCX.HAPROX6.CERTAUTH'" cert-auth.der 

 

From your machine, sftp to the NFS server host: 

sftp dnaph@10.1.1.9 

cd certs 

get client-cert.der (the .der file is in binary format) 

 

The above files need to be converted to .pem format: 

openssl pkcs12 -in client-cert.der -out client-cert.pem -clcerts -nodes 
Enter Import Password: 
(Enter the password used in the JCL above) 
chmod 664 client-cert.pem 

 

Also convert the cert-auth certificate to .pem format. 

(Because the CERTAUTH certificate was exported with FORMAT(CERTB64), you can simply copy and paste its contents into cert-auth.pem.) 

chmod 664 cert-auth.pem 

Note that both client-cert and cert-auth files need to be in pem format with .pem file extensions. 
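If you prefer a command to copy-and-paste, a minimal sketch (assuming you also transferred cert-auth.der from the NFS server host as plain text, so the file already contains the Base64/PEM data produced by FORMAT(CERTB64)):

# The CERTB64 export is already Base64 (PEM) encoded; openssl simply rewraps it as cert-auth.pem
openssl x509 -in cert-auth.der -out cert-auth.pem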

 

Copy these files to the zcxocp-cli container, or to wherever you are running the oc commands. 
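For example, if you run the oc commands from a zcxocp-cli container on your workstation, something like the following copies the certificates into it (the container name zcxocp-cli and the /tmp/certs target directory are illustrative assumptions):

# Create a working directory inside the CLI container and copy both certificates into it
docker exec zcxocp-cli mkdir -p /tmp/certs
docker cp client-cert.pem zcxocp-cli:/tmp/certs/
docker cp cert-auth.pem zcxocp-cli:/tmp/certs/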

 

AT-TLS Policy TLS 1.2 setup: 

To enable AT-TLS for NFS network communications between the zCX for Red Hat OpenShift Container Platform (OCP) nodes running the HAPROXY and the z/OS NFS server, you need to add a rule to the Policy Agent (PAGENT) for the LPAR on which the z/OS NFS server resides. The rule should act on inbound traffic to the local port range 2043-2049 with a remote IP address that correlates to the DVIPA (or range of DVIPAs) of your OCP compute nodes. The AT-TLS policy can be dynamically modified and activated, so a client can potentially fall in or out of the scope of the AT-TLS rules. 

If your z/OS NFS server is configured to run on multiple LPARs for high availability, be sure to have the same policies applied to any LPAR on which the z/OS NFS server might be running. 

 

For details, see Getting started with AT-TLS in z/OS Communications Server: IP Configuration Guide

Figure 4 is a sample AT-TLS policy for the z/OS NFS server that requires both server and client authentication through SAF. It assumes a z/OS NFS server job name of MVSNFS8 and zCX for OpenShift node IP addresses of 10.1.1.1, 10.1.1.2, and 10.1.1.3. 

 

TTLSRule                          NFSServerEncryption          
{                                                              
  Jobname                         MVSNFS8                      
  Direction                       Inbound                      
  LocalPortRange                  2043:2049                    
  RemoteAddrGroupRef              haproxy_addrs                
  TTLSGroupActionRef              zCX_action1                  
  TTLSEnvironmentActionRef        NFSServerEnvironmentWithAuth 
}                                                
IpAddrGroup                       haproxy_addrs  
{                                                
  IPAddr                                         
  {                                              
    Addr 10.1.1.1 
  }                                              
  IPAddr                                         
  {                                              
    Addr 10.1.1.2                              
  }    
  IPAddr                                         
  {                                              
    Addr 10.1.1.3                              
  }                                          
}                                                       
TTLSGroupAction                   zCX_action1           
{                                                       
   TTLSEnabled                     On                    
   Trace                           1                     
}                                                       
TTLSEnvironmentAction NFSServerEnvironmentWithAuth      
{                                                       
   HandshakeRole ServerWithClientAuth                    
   TTLSKeyringParmsRef ServerKeyring                     
   TTLSEnvironmentAdvancedParms                          
   {                                                     
     ApplicationControlled Off                          
     HandshakeTimeout      60                                  
     ClientAuthType        SAFCheck                            
     TLSv1.2               On                                  
   }                                                           
}                                                             
TTLSConnectionAdvancedParms       TLSParms                    
{                                                             
   SSLv3                           Off                         
   TLSv1.2                         On                          
   SecondaryMap                    Off                         
}                                                             
TTLSKeyringParms                  ServerKeyring               
{                                                             
   Keyring                         MVSNFS8/ZCX-HAPROXY-RING
}

Figure 4. Sample AT-TLS policy for the z/OS NFS server 

 

Make an NFS client fall out of scope of AT-TLS: 

 

To make an NFS client fall out of scope of AT-TLS, you can do the following: 

1.     Remove the NFS client’s IP address specified with IPAddr 

2.     Change the Jobname in the TTLSRule to something other than the current z/OS NFS server job name (for example, MVSNFS8), so that the rule no longer matches any NFS client IP. 

This bypasses AT-TLS for all NFS traffic. 

If either of the above is performed, refresh the NFS server AT-TLS authentication by stopping the NFS server, refreshing the AT-TLS policy, and restarting the NFS server. 

Consider the following when defining fields in the policy: 

  • HandshakeRole = ServerWithClientAuth indicates that both the server and the clients need to be authenticated. 

  • ClientAuthType = SAFCheck indicates that the client needs to present a digital certificate that the z/OS NFS server will use to create a security context (ACEE) that represents the client for implicit MVS login. 

  • IpAddrGroup specifies all the OCP compute node DVIPAs that the AT-TLS policy applies to. 

  • TTLSKeyringParms specifies the keyring that holds the CERTAUTH certificate and the server certificate.

 

Build the HAPROXY container image: 

1.      Create the haproxy.cfg file 

We used the following haproxy.cfg file: 

defaults 
    mode tcp 
    log global 
    option tcplog 
    option logasap 
    timeout connect 5000ms 
    timeout client 24h 
    timeout server 24h 
 
frontend nfsclient 
    mode tcp 
    option tcplog 
    option logasap 
    timeout connect 5000ms 
    timeout client 24h 
    timeout server 24h 
    bind *:2043-2049 
    default_backend nfsServer 
 
backend nfsServer 
    mode tcp 
    option tcplog 
    option logasap 
    timeout connect 5000ms 
    timeout client 24h 
    timeout server 24h 
    server nfsServer 10.1.1.9 maxconn 32 ssl verify required ca-file /var/lib/haproxy/cert-auth.pem crt /var/lib/haproxy/client-cert.pem 

Fig 5. Sample haproxy.cfg file 
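Optionally, you can validate the configuration syntax before building the image (a quick check, assuming a local haproxy binary is available; -c only checks the configuration and does not start the proxy):

haproxy -c -f haproxy.cfg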

 

Using a Dockerfile such as the following, build a custom HAPROXY image and push it to a repository: 

Disclaimer: At the time of writing this blog, this was tested with the haproxy:3.0.0-bookworm image from icr.io. For later image versions, the Dockerfile may need updates.

FROM icr.io/ibmz/haproxy:3.0.0-bookworm
COPY haproxy.cfg /usr/local/etc/haproxy/haproxy.cfg 
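A minimal build-and-push sketch, using the image name that the deployment YAML in Fig 6 references (substitute your own registry path and tag, and build on an s390x host or with s390x emulation so the image matches the zCX for OpenShift architecture):

# Build the custom HAProxy image with haproxy.cfg baked in, then push it to your registry
docker build -t my-repo.com/sys-z/my-haproxy-image:tag .
docker push my-repo.com/sys-z/my-haproxy-image:tag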

 

Deploy haproxy containers on your OpenShift cluster. 

Log in to your cluster from the oc command line, then create a new project: 

oc new-project haproxy1 

Transfer (for example, with sftp) the generated client-cert.pem and cert-auth.pem files to the machine from which you run the oc commands. 

chmod 644 client-cert.pem 

chmod 644 cert-auth.pem 

oc create secret generic my-client --from-file=./client-cert.pem 

oc create secret generic ca --from-file=./cert-auth.pem 
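A quick check that both secrets exist in the project (assuming the names above):

oc get secret my-client ca -n haproxy1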

Create haproxy-deployment.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: haproxy
  namespace: haproxy1
spec:
  selector:
    matchLabels:
      app: haproxy
  replicas: 1
  template:
    metadata:
      labels:
        app: haproxy
        security: safexp
    spec:
      containers:
        - name: haproxy
          image: "my-repo.com/sys-z/my-haproxy-image:tag"
          volumeMounts:
            - mountPath: /var/lib/haproxy
              name: all
              readOnly: true
          ports:
            - containerPort: 2049
              protocol: TCP
            - containerPort: 2043
              protocol: TCP
            - containerPort: 2044
              protocol: TCP
            - containerPort: 2045
              protocol: TCP
            - containerPort: 2046
              protocol: TCP
            - containerPort: 2047
              protocol: TCP
            - containerPort: 2048
              protocol: TCP
          securityContext:
            capabilities:
              drop:
                - MKNOD
            allowPrivilegeEscalation: false
          imagePullPolicy: Always
      imagePullSecrets:
        - name: sys-ztest-artifactory
      volumes:
        - name: all
          projected:
            sources:
              - secret:
                  name: my-client
              - secret:
                  name: ca

Fig 6. haproxy-deployment.yaml 

 

oc apply -f haproxy-deployment.yaml 

Note that the mount directory is /var/lib/haproxy (this directory is owned by the container user haproxy:haproxy). 

Also note the additional pod label added to this haproxy container: security=safexp. We will use it for pod affinity when deploying the workload container.  

You can check the haproxy container logs to confirm that the deployment was successful. 
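For example (assuming the deployment and namespace names from Fig 6):

# Confirm the haproxy pod is running and check its logs
oc get pods -n haproxy1 -o wide
oc logs deployment/haproxy -n haproxy1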

Create a service to talk to haproxy container: 

 

Create a service named nfs-server in front of the haproxy container. 

The service is only accessible from within the cluster, so there is no need for an ACL statement in haproxy.cfg: 

Create a haproxy-service.yaml 

apiVersion: v1
kind: Service
metadata:
  name: nfs-server
  namespace: haproxy1
spec:
  ports:
    - name: 2043-tcp
      port: 2043
      protocol: TCP
      targetPort: 2043
    - name: 2044-tcp
      port: 2044
      protocol: TCP
      targetPort: 2044
    - name: 2045-tcp
      port: 2045
      protocol: TCP
      targetPort: 2045
    - name: 2046-tcp
      port: 2046
      protocol: TCP
      targetPort: 2046
    - name: 2047-tcp
      port: 2047
      protocol: TCP
      targetPort: 2047
    - name: 2048-tcp
      port: 2048
      protocol: TCP
      targetPort: 2048
    - name: 2049-tcp
      port: 2049
      protocol: TCP
      targetPort: 2049
  selector:
    app: haproxy

Fig 7. haproxy-service.yaml 

 

Note that the service ports are mapped to the pod ports, and the service is assigned a static ClusterIP. 

Fig 8: Service details 
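A quick way to confirm the assigned ClusterIP (you will need it later in the PV definition):

oc get svc nfs-server -n haproxy1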

 

Create Storage Class (SC): 

oc apply -f sc.yaml 

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-saf
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete

Fig 9: sc.yaml 

 

Create Persistent Volume (PV)  

oc apply -f pv.yaml 

Note that the IP address in the server field below is the ClusterIP of the haproxy service. 

kind: PersistentVolume
apiVersion: v1
metadata:
  name: fios-pv
spec:
  capacity:
    storage: 2Gi
  nfs:
    server: '172.30.226.71,rw,nfsvers=4'
    path: '/hfs/oc4z/nfs/OCPTSTB/fios-test,mvsmnt'
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Delete
  storageClassName: nfs-saf
  volumeMode: Filesystem

Fig 10: pv.yaml 
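After applying the PV, a quick check that it was created with the expected storage class (it should show a STATUS of Available until a claim binds to it):

oc get pv fios-pv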

 

Run NFS workload: 

Create a separate project to run the workload. 

oc new-project fios 

Create a pvc.yaml that references the above StorageClass. 

oc apply -f pvc.yaml 

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fio-pvc1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: nfs-saf
  volumeMode: Filesystem

Fig 11: pvc.yaml 
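A quick check that the claim bound to the PV created earlier (assuming the names above; STATUS should show Bound):

oc get pvc fio-pvc1 -n fios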

 

Create a secret for docker pull: 

oc create secret docker-registry my-registry --docker-server=my-repo.com --docker-username=xyz@ibm.com --docker-password=<> 

 

Deploy the application. 

oc apply -f fios-deployment.yaml 

apiVersion: apps/v1
kind: Deployment
metadata:
  name: fios
  namespace: fios
spec:
  replicas: 1
  selector:
    matchLabels:
      name: fios
  template:
    metadata:
      labels:
        name: fios
    spec:
      affinity:
        podAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchExpressions:
                  - key: security
                    operator: In
                    values:
                      - safexp
              topologyKey: node-role.kubernetes.io/worker
              namespaces:
                - haproxy1
      volumes:
        - name: fio-test-volume
          persistentVolumeClaim:
            claimName: fio-pvc1
      containers:
        - name: fios
          image: 'my-repo.com/sys-z/pv-test:latest'
          env:
            - name: numFiles
              value: '5'
            - name: totalSize
              value: '1'
          resources: {}
          volumeMounts:
            - name: fio-test-volume
              mountPath: /pv-test-fs
          imagePullPolicy: Always
      restartPolicy: Always
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: my-registry
      schedulerName: default-scheduler

Fig 12: fios-deployment.yaml 

 

Note that, owing to pod affinity, this pod is scheduled on the same zCX for OpenShift cluster node as the pod with the label security=safexp in the haproxy container's namespace. This ensures that the NFS workload pods always run on the same node as the haproxy container, so there is no unencrypted cross-compute-node network traffic. 

For high availability, if the zCX for OpenShift compute node running these containers fails, the haproxy and, subsequently, the workload container will come up together on a different zCX for OpenShift compute node because of the pod affinity. 
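To confirm the co-location, you can compare the NODE column of the two pods (a quick check, assuming the namespace names used above):

# Both commands should report the same zCX for OpenShift compute node
oc get pods -n haproxy1 -o wide
oc get pods -n fios -o wide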

Note that the mounted directory has permissions 700: 

drwx------   2 NFSTST   SVTGRP      8192 Aug 28 21:20 fios-test 

Because client-cert.pem is associated with the NFSTST user ID, the directory permission bits are checked against that user ID, and the container can read and write to this directory. 

 

The NFS workload container can run as any container user ID managed by Red Hat OpenShift: 

Fig 13: Fios workload container user id. 

In this way, containerized applications running inside the zCX for OpenShift cluster can securely access data exported under the 'safexp' security setting of the z/OS NFS server.
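A quick way to check the container user ID and the mounted NFS directory from inside the workload pod (assuming the deployment name from Fig 12 and that the image provides standard shell utilities):

# Show the user ID the container runs as, and verify the NFS mount is visible and accessible
oc exec -n fios deployment/fios -- id
oc exec -n fios deployment/fios -- ls -ld /pv-test-fs
oc exec -n fios deployment/fios -- df -h /pv-test-fs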

References:  

1.      https://www.ibm.com/docs/en/zos/2.5.0?topic=zcx-setting-up-zos-nfs-server 

2.      https://www.ibm.com/docs/en/zos/2.5.0?topic=customization-configuring-zos-nfs-server 

3.      https://www.ibm.com/docs/en/zos/2.5.0?topic=reference-application-transparent-transport-layer-security-tls 
