
zPET Parallel Sysplex Environment – 2022

By Brittany Ross posted Wed July 27, 2022 10:53 AM


Here we describe our Parallel Sysplex computing environment, including information about our hardware and software configurations.

Note: In our publications, when you see the term sysplex, understand it to mean a sysplex with a coupling facility, which is a Parallel Sysplex.

Overview of Our Parallel Sysplex Environment

We run two Parallel Sysplexes, one with 15 members and the other with 4 members. Although the configuration changes constantly due to the various projects we are involved in, our standing business-as-usual configuration is as follows:

CPC Type | Plex1 z/OS LPARs | Plex2 z/OS LPARs | Plex1 CFs | Plex2 CFs
(per-CPC rows not preserved in this copy)
Totals | 15-way production sysplex | 4-way test sysplex | 5 production CFs | 4 test CFs

*Stand-Alone Coupling Facility


LPARs use varying numbers of shared CPs (approximately 8 to 36) and shared zIIPs (approximately 2 to 20), and LPAR storage allocations range from 30 GB to 1 TB.

Outside of the Parallel Sysplex itself, we also have multiple LPARs that run native Linux as well as LPARs that run z/VM images that host multiple Linux guest images running in virtual machines.

For CTC communications, we have FICON CTC connections to and from a subset of our images and use coupling facility structures for signaling between all systems in each sysplex.

Coupling Facility LPARs in our test sysplex are typically configured with 1 dedicated ICF processor. Coupling Facility LPARs in our production sysplex are typically configured with 4 to 6 dedicated ICF processors. At times, we have also configured CFs on the same CPCs to share ICFs in various dynamic dispatching modes (that is, DYNDISP=OFF, ON, or THIN).

For Coupling Facility channels, we use a combination of IC, CS5, and CL5 coupling facility channels in peer mode. Our z16 uses both CS5 and CL5 coupling connectivity to our z15 and z14 processors. We use MIF to logically share coupling facility channels among the logical partitions on a CPC. We define at least two physical paths from every system image to each coupling facility, and from every coupling facility to each of the other coupling facilities.

We have FICON native (FC) mode channels from all our CPCs to our Enterprise Storage Servers and our 3590 tape drives through native FICON switches (see FICON Native Implementation and Reference Guide, SG24-6266, for information about how to set up this and other native FICON configurations). Additionally, we have zHyperLink connectivity from our z15 and z16 to DS8950 DASD (see Getting Started with IBM zHyperLink for z/OS, REDP-5493, for information about how to set it up).

For DASD, we use IBM System Storage DS8950 and DS8886 series devices.

For tape, we use an Automated Tape Library (ATL), a 3494 Model L10 with 32 FICON-attached 3592 tape drives. We also exploit a Virtual Tape Server (VTS), a stand-alone TS7700 with 32 virtual 3490E tape drives.

Our Parallel Sysplex Software Configuration

We run the z/OS operating system (V2R5) along with the following software products:

  • CICS Transaction Server (CICS TS) Version 5.4 + Tools
  • CICS Transaction Server (CICS TS) Version 5.6 + Tools
  • IMS Version 15.2 (and its associated IRLM) + Tools
  • DB2 for z/OS V12 (and its associated IRLM) + Tools
  • WebSphere Application Server for z/OS V9.5
  • WebSphere Application Server for z/OS Liberty
    • 0.0.x
    • 0.0.6
    • 0.0.3
    • 0.0.7
  • IBM Integration Bus Version V10
  • IBM MQ V9.2.0.5 Long Term Support + Tools
  • IBM MQ V9.2.5 Continuous Delivery + Tools
  • IBM Operational Decision Manager V8.9
  • IBM Multi-Factor Authentication for z/OS V2R2
  • IBM InfoSphere Data Replication
    • Q Replication V11.4.0
    • SQL Replication V11.4
  • IBM Open Data Analytics for z/OS V1.1.0
  • IBM Watson Machine Learning for z/OS 2.3.0
  • IBM Tivoli zSecure Suite V2.5 Admin
  • IBM Tivoli zSecure Suite V2.5 Audit
  • IBM Tivoli NetView for z/OS V6.3
  • IBM Tivoli OMEGAMON XE for z/OS V5.5.0
  • IBM Tivoli OMEGAMON XE for z/OS V5.6.0
  • IBM Tivoli OMEGAMON XE for DB2 PE V5.4.0
  • IBM z/OSMF V2.5
  • IBM z/OS Connect V3
  • GDPS Continuous Availability Solution V2.3
  • TPNS V5.3


Overview of Our Software Configuration

The figure below shows a high-level view of our sysplex software configuration.


Figure 1. Our sysplex software configuration


We run five separate application groups in one sysplex and each application group spans multiple systems in the sysplex. Table 1 provides an overview of the types of transaction management, data management, and serialization management that each application group uses.

Application Groups | Transaction Management | Data Management
Group 1 | |
Group 2 | |
Groups 3, 5 and 6 | |
(cell contents not preserved in this copy)

Table 1. Our production OLTP application groups

About our naming conventions

We designed the naming convention for our CICS regions so that the names relate to the application groups and system names that the regions belong to. This is important because:

  • Relating a CICS region name to its application groups means we can use wildcards to retrieve information about, or perform other tasks in relation to, a particular application group.
  • Relating CICS region names to their respective z/OS system names means that subsystem job names also relate to the system names, which makes operations easier. This also makes using automatic restart management easier for us: we can direct where we want a restart to occur, and we know how to recover when the failed system is back online.

Our CICS regions have names of the form CICSgrsi where:

  • g represents the application group, and can be 1, 2, 3, 5, or 6
  • r represents the CICS region type, and can be A for AORs, F for FORs, T for TORs, or W for WORs (Web server regions)
  • s represents the system name, and can be 0 for system Z0, 8 for J80, 9 for J90, A for JA0, and so on
  • i represents the instance of the region, and can be A, B, C, and so on (we may have 3 AORs per application group per system)

For example, the CICS region named CICS2A0A would be the first group 2 AOR on system Z0.

Our IMS subsystem job names also correspond to their z/OS system name. They take the form IMSs where s represents the system name, as explained above for the CICS regions.
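The naming convention above can be sketched as a small decoder. This is a hypothetical illustration, not a tool from our environment; the mappings include only the systems named in the text.

```python
# Sketch of the CICSgrsi naming convention described above.
# Mappings are illustrative and cover only the values named in the text.

GROUPS = {"1", "2", "3", "5", "6"}
REGION_TYPES = {"A": "AOR", "F": "FOR", "T": "TOR", "W": "WOR (Web server region)"}
SYSTEMS = {"0": "Z0", "8": "J80", "9": "J90", "A": "JA0"}  # "...and so on"

def parse_cics_region(name: str) -> dict:
    """Decode a CICSgrsi region name into its components."""
    if len(name) != 8 or not name.startswith("CICS"):
        raise ValueError(f"not a CICSgrsi name: {name}")
    g, r, s, i = name[4], name[5], name[6], name[7]
    if g not in GROUPS or r not in REGION_TYPES or s not in SYSTEMS:
        raise ValueError(f"unrecognized component in: {name}")
    return {
        "group": g,
        "region_type": REGION_TYPES[r],
        "system": SYSTEMS[s],
        "instance": i,
    }

# The example from the text: CICS2A0A is the first group 2 AOR on system Z0.
print(parse_cics_region("CICS2A0A"))
# {'group': '2', 'region_type': 'AOR', 'system': 'Z0', 'instance': 'A'}
```

Because the group appears in a fixed position, a wildcard such as CICS2* selects every group 2 region across the sysplex, which is what makes the convention useful for operations.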

Overview of our Security environment

We run the following security products and solutions in our environment:

  • IBM z/OS Security Server RACF
  • IBM z/OS Integrated Cryptographic Service Facility (ICSF)
  • IBM Tivoli Directory Server (LDAP)
  • Encryption Facility for z/OS and OpenPGP
  • PKI Services for z/OS
  • IBM Security zSecure Admin, Audit and Alert
  • IBM Z Multi-Factor Authentication
  • RSA Authentication Manager
  • SafeNet Authentication Service
  • IBM Z Pervasive Encryption
  • IBM z/OS Authorized Code Scanner (zACS)
  • External Key Management Facility (EKMF)
  • IBM Security Guardium Key Lifecycle Manager

z/OS Integrated Cryptographic Service Facility (ICSF) is a software element of z/OS that works with the hardware cryptographic features and the Security Server (RACF) to provide secure, high-speed cryptographic services in the z/OS environment.


ICSF interacts with various cryptographic hardware features installed on servers.  We currently have the following cryptographic hardware features in our environment:

  • Crypto Express8 Enterprise PKCS #11 coprocessor (CEX8P)
  • Crypto Express8 Accelerator (CEX8A)
  • Crypto Express8 Coprocessor (CEX8C)
    • Compliance Mode
    • Normal Mode
  • Crypto Express7 Enterprise PKCS #11 coprocessor (CEX7P)
  • Crypto Express7 Accelerator (CEX7A)
  • Crypto Express7 Coprocessor (CEX7C)
    • Compliance Mode
    • Normal Mode
  • Crypto Express6 Enterprise PKCS #11 coprocessor (CEX6P)
  • Crypto Express6 Accelerator (CEX6A)
  • Crypto Express6 Coprocessor (CEX6C)
    • Compliance Mode
    • Normal Mode
  • Crypto Express5 Enterprise PKCS #11 coprocessor (CEX5P)
  • Crypto Express5 Accelerator (CEX5A)
  • Crypto Express5 Coprocessor (CEX5C)
  • CP Assist for Cryptographic Functions (CPACF)
  • CP Assist for Cryptographic Functions DES/TDES Enablement (CPACF, feature 3863)


On each sysplex within our environment, we share the CKDS, PKDS, and TKDS data sets among the systems.
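Sharing of these key data sets is governed by the ICSF installation options data set. The fragment below is a minimal sketch with hypothetical data set names (the actual names in our environment differ); the SYSPLEXCKDS, SYSPLEXPKDS, and SYSPLEXTKDS options enable sysplex-wide consistency for the shared CKDS, PKDS, and TKDS.

```
CKDSN(SYS1.SHARED.CKDS)
PKDSN(SYS1.SHARED.PKDS)
TKDSN(SYS1.SHARED.TKDS)
SYSPLEXCKDS(YES,FAIL(NO))
SYSPLEXPKDS(YES,FAIL(NO))
SYSPLEXTKDS(YES,FAIL(NO))
```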


Since our goal is to run a customer-like environment, we have various workloads that take advantage of the products that interface with ICSF. These products include the following:

  • DB2
  • CICS
  • IMS
  • LDAP
  • PKI Services
  • SSL (through WebSphere Application Server, FTP, HTTP, LDAP and CICS)
  • IBM Security Key Lifecycle Manager for z/OS
  • WebSphere Application Server for z/OS
  • IBM MQ

We have a multiplatform LDAP configuration for the IBM Tivoli Directory Server (IBM TDS) environment. In addition to exploiting LDAP for z/OS, we perform cross-platform testing with this environment. Tivoli Directory Server on distributed platforms is a cross-platform exploiter of z/OS LDAP.

Our TDS servers are configured with the TDBM backend, which connects LDAP to the DB2 database directory; the SDBM backend, which connects to the RACF directory on our sysplex; and the LDBM backend, which connects to a z/OS UNIX file system on our sysplex. Some servers are configured with special functions enabled, such as referral, replication, and persistent search. The environment is exercised through different workloads and exploiters for the following functions:

  • LDAP Referral function
  • Master-replica, peer-to-peer and advanced gateway replication within our sysplex
  • Master-replica, peer-to-peer replication between z/OS and distributed platforms
  • Persistent Search function
  • RACF access through SDBM backend
  • IBM Z Security and Compliance Center



Our environment is constantly evolving to include the latest hardware and software features. As it evolves, we will continue to document the changes and updates, as well as how the components complement each other.


Should you have any questions or comments about any of our environment details listed above, please do not hesitate to reach out to our team.