zPET - IBM Z and z/OS Platform Evaluation and Test

Experiences and tips from a team of system programmers and testers who run a Parallel Sysplex on which they perform the final verification of z/OS releases, IBM Z hardware, and System Storage before they become generally available to clients.

zPET Parallel Sysplex Environment – 2024

By Alex Diss posted Mon July 01, 2024 01:44 PM

Here we describe our Parallel Sysplex computing environment, including information about our hardware and software configurations.

Note: In our publications, when you see the term sysplex, understand it to mean a sysplex with a coupling facility, which is a Parallel Sysplex.

Overview of our Parallel Sysplex Environment

We run two Parallel Sysplexes, one with 15 members and the other with 4 members. Although our configuration changes constantly due to the various projects we are involved in, our standing business-as-usual configuration is as follows:

CPC Type   Plex 1 z/OS LPARs           Plex 2 z/OS LPARs    Plex 1 CFs         Plex 2 CFs
z16        6                           2                    1 ICF              1 ICF
z15        5                           2                    1 ICF              1 ICF
z15*       -                           -                    1 ICF              1 ICF
z14        4                           -                    1 ICF              1 ICF
TOTAL      15-way production sysplex   4-way test sysplex   4 production CFs   4 test CFs

*Standalone Coupling Facility

LPARs use various numbers of shared CPs, ranging from approximately 8 to 32, as well as various numbers of shared zIIPs, ranging from approximately 2 to 20. LPAR storage allocations range from 64 GB to 6 TB.

Outside of the Parallel Sysplex itself, we also have multiple LPARs that run Linux natively, as well as LPARs that run z/VM images hosting multiple Linux guests in virtual machines.

For CTC communications, we have FICON CTC connections to and from a subset of our images and use coupling facility structures for signaling between all systems in each sysplex.

Coupling Facility LPARs in our test sysplex are typically configured with one dedicated ICF processor, while Coupling Facility LPARs in our production sysplex are typically configured with 6 to 8 dedicated ICF processors. At times, we have also configured CFs on the same CPCs to share ICFs in various dynamic dispatching modes (that is, DYNDISP = OFF, ON, or THIN).

For Coupling Facility channels, we use a combination of ICP, CS5, and CL5 coupling facility channels in peer mode. Our z16 uses both CS5 and CL5 coupling connectivity to our z15 and z14 processors. We use MIF to logically share coupling facility channels among the logical partitions on a CPC. We define at least two physical paths from every system image to each coupling facility, and from every coupling facility to each of the other coupling facilities.
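
One way to verify this connectivity on a running system is with standard z/OS DISPLAY commands; a minimal sketch (the ALL filters are illustrative rather than our actual resource names, and output details vary by release):

    D CF                        (coupling facility channel path and subchannel status)
    D XCF,CF,CFNAME=ALL         (the sysplex view of each coupling facility)
    D XCF,PATHIN,STRNAME=ALL    (inbound signaling paths through CF structures)
    D XCF,PATHOUT,DEVICE=ALL    (outbound signaling paths through CTC devices)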

We have FICON native (FC) mode channels from all our CPCs to our Enterprise Storage Servers and our 3590 tape drives through native FICON switches (see FICON Native Implementation and Reference Guide, SG24-6266, for information about how to set up this and other native FICON configurations). Additionally, we have zHyperLink connectivity from our z14, z15, and z16 to DS8950 DASD (see Getting Started with IBM zHyperLink for z/OS, REDP-5493, for information on how to set this up).
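
On the z/OS side, a quick way to confirm whether zHyperLink I/O is in use is the IOS display command; a minimal sketch (output details vary by release, and enablement itself is controlled through the IECIOSxx parmlib member or the equivalent SETIOS settings):

    D IOS,ZHYPERLINK            (reports whether zHyperLink reads and writes are enabled)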

For DASD, we use IBM System Storage DS8950 and DS8886 series devices.

For tape, we use a 3494 Model L10 Automated Tape Library (ATL) with 32 FICON-attached 3592 tape drives. We also exploit a standalone TS7700 Virtual Tape Server (VTS) with 32 virtual 3490E tape drives.

Our Parallel Sysplex Software Configuration

We run the z/OS operating system (V3R1) along with the following software products:

  • CICS Transaction Server (CICS TS) Version 5.4 + Tools
  • CICS Transaction Server (CICS TS) Version 5.6 + Tools
  • CICS Transaction Server (CICS TS) Version 6.1 + Tools
  • IMS Version 15.4 (and its associated IRLM) + Tools
  • DB2 for z/OS V13 (and its associated IRLM) + Tools
  • WebSphere Application Server for z/OS V9.5
  • WebSphere Application Server for z/OS Liberty
    • V18.0.0.x
    • V19.0.0.6
    • V20.0.0.3
    • V21.0.0.7
  • IBM Integration Bus Version V10
  • IBM MQ V9.3.0.16 Long Term Support 
  • IBM MQ V9.3.5 Continuous Delivery 
  • IBM Multi-Factor Authentication for z/OS V2R2
  • IBM Infosphere Data Replication
    • Q Replication V11.4.0
    • SQL Replication V11.4
  • IBM Open Data Analytics for z/OS V1.1.0
  • IBM Watson Machine Learning for z/OS 2.3.0
  • IBM Tivoli zSecure Suite V2.5 Admin
  • IBM Tivoli zSecure Suite V2.5 Audit
  • IBM Tivoli NetView for z/OS V6.3
  • IBM Tivoli OMEGAMON XE for Storage V5.5.0
  • IBM Tivoli OMEGAMON XE for z/OS V5.5.0
  • IBM Tivoli OMEGAMON XE for z/OS V5.6.0
  • IBM Tivoli OMEGAMON XE for z/OS V6.1.0
  • IBM Tivoli OMEGAMON XE for IMS V5.5.0
  • IBM Tivoli OMEGAMON XE for CICS V5.5.0
  • IBM Tivoli OMEGAMON XE for Messaging V7.5.0
  • IBM Tivoli OMEGAMON XE for Messaging - Integration Bus V7.5.0
  • IBM Tivoli OMEGAMON XE for DB2 PE V5.5.0
  • IBM Tivoli OMEGAMON XE for JVM V5.5.0
  • IBM Tivoli OMEGAMON XE for JVM V6.1.0
  • IBM Tivoli OMEGAMON XE for Networks V5.5.0
  • IBM Tivoli OMEGAMON XE for Networks V5.6.0
  • IBM Tivoli OMEGAMON XE for Networks V6.1.0
  • IBM Z Digital Integration Hub (zDIH) V2.1.1
  • IBM z/OSMF V3.1
  • IBM z/OS Connect V3
  • GDPS Continuous Availability Solution V4.6.0 
  • TPNS V5.3 and WSim V1.1.0.1

Overview of our Software Configuration

The figure below shows a high-level view of our sysplex software configuration.

Figure 1. Our sysplex software configuration

We run five separate application groups in one sysplex and each application group spans multiple systems in the sysplex. Table 1 provides an overview of the types of transaction management, data management, and serialization management that each application group uses.

Application Groups   Transaction Management   Data Management   Serialization Management
Group 1              CICS and IMS TM          IMS DB            IRLM
Group 2              CICS                     VSAM              VSAM RLS
Groups 3, 5, and 6   CICS and IMS TM          DB2               IRLM

Table 1. Our production OLTP application groups

About our naming conventions

We designed the naming convention for our CICS regions so that the names relate to the application groups and system names that the regions belong to. This is important because:

  • Relating a CICS region name to its application groups means we can use wildcards to retrieve information about, or perform other tasks in relation to, a particular application group.
  • Relating CICS region names to their respective z/OS system names means that subsystem job names also relate to the system names, which makes operations easier. This also makes using automatic restart management easier for us — we can direct where we want a restart to occur, and we know how to recover when the failed system is back online.

Our CICS regions have names of the form CICSgrsi where:

  • g = the application group; either 1, 2, 3, 5 or 6
  • r = the CICS region type; either A for AORs, F for FORs, T for TORs, or W for WORs (Web regions)
  • s = the system name; can be 0 for system Z0, 8 for J80, 9 for J90, A for JA0, and so on
  • i = the instance of the region; can be A, B, C, and so on (we may have 3 AORs per system)

For example, the CICS region named CICS2A0A would be the first group 2 AOR on system Z0.
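
To make the convention concrete, here is a minimal sketch in Python that decomposes a region name into its fields; the helper function is purely illustrative and not something we run in our environment:

    # Illustrative decoder for the CICSgrsi naming convention described above.
    REGION_TYPES = {"A": "AOR", "F": "FOR", "T": "TOR", "W": "WOR (Web region)"}

    def parse_region_name(name: str) -> dict:
        """Split a CICSgrsi region name into its convention fields."""
        if len(name) != 8 or not name.startswith("CICS"):
            raise ValueError(f"{name!r} does not follow the CICSgrsi convention")
        g, r, s, i = name[4], name[5], name[6], name[7]
        return {
            "application_group": g,                 # 1, 2, 3, 5, or 6
            "region_type": REGION_TYPES.get(r, r),  # A, F, T, or W
            "system": s,                            # 0 -> Z0, 8 -> J80, 9 -> J90, A -> JA0, ...
            "instance": i,                          # A, B, C, ...
        }

    print(parse_region_name("CICS2A0A"))
    # {'application_group': '2', 'region_type': 'AOR', 'system': '0', 'instance': 'A'}

This positional structure is also what makes wildcards useful: a filter such as CICS2* matches every group 2 region across the sysplex, and CICS?A0* matches every AOR on system Z0.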

Our IMS subsystem job names also correspond to their z/OS system name. They take the form IMSs where s represents the system name, as explained above for the CICS regions.

Overview of our Security Environment

We run the following security products and solutions in our environment:

  • IBM z/OS Security Server RACF
  • IBM z/OS Integrated Cryptographic Service Facility (ICSF)
  • IBM Tivoli Directory Server (LDAP)
  • Encryption Facility for z/OS and OpenPGP
  • PKI Services for z/OS
  • IBM Security zSecure Admin, Audit and Alert
  • IBM Z Multi-Factor Authentication
  • RSA Authentication Manager
  • SafeNet Authentication Service
  • IBM Z Pervasive Encryption
  • IBM z/OS Authorized Code Scanner (zACS)
  • IBM z/OS Authorized Code Monitor (zACM)
  • Enterprise Key Management Foundation (EKMF)
  • Enterprise Key Management Foundation Web Edition (EKMF Web)
  • IBM Security Guardium Key Lifecycle Manager
  • Validated Boot for z/OS

z/OS Integrated Cryptographic Service Facility (ICSF) is a software element of z/OS that works with the hardware cryptographic features and the Security Server (RACF) to provide secure, high-speed cryptographic services in the z/OS environment.

ICSF interacts with various cryptographic hardware features installed on servers.  We currently have the following cryptographic hardware features in our environment:

  • Crypto Express8 Enterprise PKCS #11 Coprocessor (CEX8P)
  • Crypto Express8 Accelerator (CEX8A)
  • Crypto Express8 Coprocessor (CEX8C)
    • Compliance Mode
    • Normal Mode
  • Crypto Express7 Enterprise PKCS #11 Coprocessor (CEX7P)
  • Crypto Express7 Accelerator (CEX7A)
  • Crypto Express7 Coprocessor (CEX7C)
    • Compliance Mode
    • Normal Mode
  • Crypto Express6 Enterprise PKCS #11 Coprocessor (CEX6P)
  • Crypto Express6 Accelerator (CEX6A)
  • Crypto Express6 Coprocessor (CEX6C)
    • Compliance Mode
    • Normal Mode
  • Crypto Express5 Enterprise PKCS #11 Coprocessor (CEX5P)
  • Crypto Express5 Accelerator (CEX5A)
  • Crypto Express5 Coprocessor (CEX5C)
  • CP Assist for Cryptographic Functions (CPACF)
  • CP Assist for Cryptographic Functions DES/TDES Enablement (CPACF, feature 3863)
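
To see which of these features are online to a given system, ICSF provides a display command; a minimal sketch (output format varies by ICSF level):

    D ICSF,CARDS                (status of the installed Crypto Express coprocessors and accelerators)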

On each sysplex within our environment, we share the CKDS, PKDS, and TKDS data sets in the common record (KDSR) format among the systems.
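
This sharing is controlled through the ICSF installation options data set (CSFPRMxx). The following is a minimal sketch of the relevant options, with illustrative data set names rather than our real ones; the SYSPLEX* options keep key data set updates consistent across the members of a sysplex:

    CKDSN(CSF.SHARED.SCSFCKDS)
    PKDSN(CSF.SHARED.SCSFPKDS)
    TKDSN(CSF.SHARED.SCSFTKDS)
    SYSPLEXCKDS(YES,FAIL(NO))
    SYSPLEXPKDS(YES,FAIL(NO))
    SYSPLEXTKDS(YES,FAIL(NO))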

Since our goal is to run a customer-like environment, we have various workloads that take advantage of the products that interface with ICSF. These products include the following:

  • DB2
  • CICS
  • IMS
  • LDAP
  • PKI Services
  • SSL (through WebSphere Application Server, FTP, HTTP, LDAP and CICS)
  • IBM Security Key Lifecycle Manager for z/OS
  • WebSphere Application Server for z/OS
  • IBM MQ
  • IBM IIDR Q Replication

Our TDS servers are configured with the TDBM backend, which connects LDAP to the DB2 database directory; the SDBM backend, which connects to the RACF directory on our sysplex; and the LDBM backend, which connects to a z/OS UNIX file system on our sysplex. Some servers are configured with special functions enabled, such as referrals, replication, and persistent search. We exercise the environment through various workloads and exploiters that drive the following transactions and functions:

  • LDAP Referral, alias, timing function
  • Master-replica, peer-to-peer and advanced gateway replication within our sysplex
  • Persistent Search function
  • TLS/SSL support
  • RACF access through SDBM backend
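
As an example of the last item, RACF data can be queried through the SDBM backend with a standard LDAP search. A minimal sketch, where the host name, bind ID, password, and target user ID are placeholders and cn=racf is the typical (configurable) SDBM suffix:

    ldapsearch -h ldap.example.com -p 389 \
        -D "racfid=LDAPADM,profiletype=user,cn=racf" -w secret \
        -b "profiletype=user,cn=racf" "racfid=USER1"

A successful search returns the RACF user profile for USER1 as a set of LDAP attributes.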

Conclusion

Our environment is constantly evolving to include the latest hardware and software features. As it evolves, we will continue to document the changes and updates, as well as how the components complement one another.

Should you have any questions or comments about any of our environment details listed above, please do not hesitate to reach out to our team.
