z/OS Platform Evaluation and Test (zPET) runs customer-like workloads in a Parallel Sysplex environment to perform the final verification of new IBM Z hardware and software.
Introduction
Should I read this blog?
If you don’t already know what IBM Z Digital Integration Hub (zDIH) is, or what capabilities it offers, we strongly recommend reading our previous blog, zPET Experiences with Z Digital Integration Hub (zDIH), as a prerequisite to reading this one. In our previous blog, we give a brief introduction to zDIH, provide links to official zDIH guides & documentation, and provide some insights that could be helpful to beginners on zDIH.
Why should I read this blog?
In our previous zDIH blog post we gave a crash course in configuring an installation: how we structured our USS file system, how to configure logging, clusters, and cluster members, plus hints, tips, and do’s and don’ts. In this blog, we will expand beyond basic setup into some of the more advanced features of zDIH. This includes:
- Installing & running the Management Center (MC) to manage your zDIH clusters
- Connecting to a cluster within the Management Center
- Loading data to a cluster using zDIH-provided sample applications
- Viewing maps & statistics via the Management Center
- Querying your data via the Management Center
For the rest of this blog we will assume you have a basic understanding of the product. This blog is not meant to be a replacement for the product documentation, but rather a cookbook to fast-track your learning of IBM zDIH. See the IBM Z Digital Integration Hub documentation at https://www.ibm.com/docs/en/zdih for complete installation and customization instructions.
Management Center - Installing & Running
The zDIH Management Center (MC) is the primary interface that you use to monitor and manage zDIH clusters, members, caches, and zDIH application jobs. The MC runs on Linux, Unix, and Windows environments, but is not supported on z/OS. The MC executables are included in the `zdih-client-2.1.x.zip` file that comes in the zFS installation directory.
The following steps describe how we deployed and ran the MC in zPET:
- Note: Before you start - The IBM zDIH Management Center requires Java 11 or later from IBM Semeru Runtime, any OpenJDK distribution, or Oracle JDK. Ensure the machine on which you will install the MC meets this requirement!
- On z/OS USS, navigate to the zDIH installation directory, which contains `zdih-client-2.1.x.zip`
- Following the zDIH documentation for Configuring z/OS UNIX System Services environment variables for IBM zDIH, your zDIH installation may exist in the default location: `/usr/lpp/IBM/zdih/v2r1`
- On our zPET system, we have customized this to something like `/zdih/levels/<current zDIH release dir>/zdih210`
- Alongside `zdih-client-2.1.x.zip`, you’ll see `zdih-samples-2.1.x.zip` and folders like `bin`, `config`, and `etc`
- Transfer `zdih-client-2.1.x.zip` to your local workstation
- In zPET, we’ve used tools such as Secure Copy (SCP) or SFTP
- Extract the zip file
- Optionally, customize your MC installation
- Since this step is highly dependent on your individual environment and needs, we’ll refer you to the official zDIH documentation for customizing the Management Center
- Navigate to `<zdih_client_extracted_path>/management-center/bin` and execute the command `zdih-mc start`
- On Windows machines, this command is `zdih-mc.cmd start`
- Once you see the message Hazelcast Management Center successfully started at `http://localhost:8080/`, your MC instance has been started!
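Condensed into commands, the sequence above looks like this (the host name and installation path are examples from our environment, so adjust them for yours):

```shell
# On your workstation: pull the client bundle down from z/OS USS
# (user, host, and path below are examples - use your own)
scp user@zoshost:/usr/lpp/IBM/zdih/v2r1/zdih-client-2.1.x.zip .
unzip zdih-client-2.1.x.zip

# Start the Management Center from the extracted folder
cd management-center/bin
./zdih-mc start     # on Windows: zdih-mc.cmd start
```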
If you navigate to http://localhost:8080/ and this is the first time you are connecting to the MC, you will see the Security Configuration page:

Without additional customization of its configuration, the MC starts in Dev mode. This allows you to use it without credentials for logging in or for the REST API, and you are automatically logged into the MC as an admin. Delving into the security features of the MC is out of scope for this blog post, so we’ll move forward assuming that Dev mode is enabled.
Helpful tip about the MC: Once you’ve logged into the MC, it should automatically create a file in your home directory, similar to `C:\Users\<userid>\zdih-mc\sql\mc.mv.db` (the directories are automatically created as well). This file is important to remember, as it must be deleted in order to change the security configuration. This is useful if:
- You have already configured the MC in dev mode but wish to enable security
- You are an advanced user and have enabled security, but wish to change the type of security (i.e. change from Local security to JAAS security)
Management Center - Connecting to a Cluster
Now that your MC instance is up and running, you should be greeted with a screen that looks similar to the following:

From here, you can start connecting to your zDIH clusters. There are two ways to accomplish this: connecting directly, or uploading a client config file.
To connect directly, simply enter the information for your cluster on the Connect Directly tab. You will need the cluster name and member address & port, all of which can be found in the cluster member’s configuration file, `zdih.xml`, which is located in the zDIH work directory or installation path.

Note: For a multi-node zDIH cluster, you can specify node addresses comma separated (e.g. `host1.com:5701, host2.com:5702`)
To upload a client config, you will need to prepare an XML file. Luckily, zDIH provides an example file in your MC installation directory: `<your unzipped folder>/management-center/config/zdih-client.xml`. To go this route, simply open the file in an editor of your choice (we’d recommend making a backup copy of it first), customize it for your zDIH cluster (namely the cluster name & address), then drag and drop it into the MC in your browser. Please note that this option is required when you want to enable security or additional customization on the cluster connection.
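Since zDIH is built on Hazelcast (as the client logs later in this post show), the shipped `zdih-client.xml` follows the Hazelcast client-config schema. A minimal sketch of the two values you typically change, using an example cluster name and address (refer to the shipped file for the full set of options):

```xml
<hazelcast-client xmlns="http://www.hazelcast.com/schema/client-config">
    <!-- must match the cluster name in the member's zdih.xml -->
    <cluster-name>zdih4share</cluster-name>
    <network>
        <cluster-members>
            <!-- one address per member; add more for a multi-node cluster -->
            <address>host1.com:5701</address>
        </cluster-members>
    </network>
</hazelcast-client>
```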

Once you’ve followed one of the above steps, your cluster should automatically connect assuming that you’ve entered all the information properly and your cluster member has started without errors.

By clicking View Cluster, you will be taken to the Dashboard tab, which shows resource utilization metrics

You can also go to the Members tab to view members in the cluster. In the image below we blacked out the member IP & port information for security reasons, but you can expect to see your host information there when you try this on your own.

There are additional features to the MC beyond these two tabs, and we will highlight some of these in the later sections of this blog.
Loading Data via Sample Application
In this section, we will show how the sample applications that are provided with the zDIH installation can be used to create and prime zDIH caches on z/OS. We will use a pre-packaged version of one of these applications to load sample data-at-rest from a data set into the zDIH cache.
First, we need to create the zDIH application, but luckily zDIH provides a sample zDIH application for you to work with. The following steps will take you through creating and configuring the sample application. These steps are also documented in the zDIH official documentation for starting an application.
- On z/OS USS, navigate to the directory containing your `zdih-samples-2.1.x.zip`
- This should be in the same directory where you found `zdih-client-2.1.x.zip` in the previous section
- Transfer `zdih-samples-2.1.x.zip` to your local workstation
- Extract the zip file
- This will result in a new directory, `zdih-samples`, which contains the `dataset` and `logstream` directories. Each of these is a separate sample application, for loading data from data sets or a log stream, respectively.
- Build the desired sample application via Maven
- For the purposes of this blog post, we will refer only to the `dataset` sample application
- Each of the sample applications is provided with a `pom.xml`, so that you can import and build them as standard Maven projects. You can import each folder as a project into the IDE of your choice, but we will not cover this in any further detail here. The `readme.md` file in each of the application folders can be used as a guide for this step
- Transfer the application JARs resulting from the build to z/OS USS; they are written to the `target` sub-folder where you did the build
- Transfer the application configuration file `src/main/resources/yaml/sample_process_config.yaml` to z/OS USS as well
- Once the files are transferred, the remaining steps will be conducted on z/OS USS
- Ensure your files are arranged appropriately
- `sample-zdih-dataset-1.0.0.jar` & `sample-zdih-dataset-1.0.0-jar-with-dependencies.jar` should be uploaded to an application directory. These will be referenced in the script we will create in a moment
- `sample-zdih-models.jar` may be located elsewhere, but it will be referenced by your zDIH server directly. Once you’ve chosen a location for the JAR, update `ZDIH_USER_CLASSPATH=<zDIH_user_classpath>` in `zdihserv.env`
- `sample_process_config.yaml` can be placed anywhere, but again it will be referenced by the script we will create
- At this point, your directory structure should look something like this
- Customize `sample_process_config.yaml`
- This YAML file tells the application where to get its data and where to put it, so you need to customize it for the task at hand. Here you will use sample data that is available as part of your zDIH installation.
- You will need to edit the following fields in the sample file:
- `server_address` – update with your zDIH server address & port, colon separated
- `cluster_name` – update with your cluster name
- The `datasets` YAML node will contain two entries, each with a `dataset_name` and a `cache_name`:
- `dataset_name` `ZDIH.HVL.SHVLSDAT(HVLDACCT)` – this data set should exist on your system as part of your zDIH installation; its `cache_name` defaults to SAMPLE_ACCOUNT, which you may update if you wish
- `dataset_name` `ZDIH.HVL.SHVLSDAT(HVLDTRAN)` – this data set should exist on your system as part of your zDIH installation; its `cache_name` defaults to SAMPLE_TRANSACTION, which you may update if you wish
- Please note that what you define for `cache_name` is what you will observe in the MC in further steps
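Putting the fields above together, a customized `sample_process_config.yaml` might look roughly like the following. Treat this as an illustrative sketch, not the authoritative schema; the exact layout is defined by the shipped sample file:

```yaml
# Illustrative sketch only - field names come from the steps above,
# the address and cluster name are examples from this post
server_address: host1.com:5701      # zDIH server address:port
cluster_name: zdih4share
datasets:
  - dataset_name: ZDIH.HVL.SHVLSDAT(HVLDACCT)
    cache_name: SAMPLE_ACCOUNT      # this name is what you will see in the MC
  - dataset_name: ZDIH.HVL.SHVLSDAT(HVLDTRAN)
    cache_name: SAMPLE_TRANSACTION
```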
- Create a script to run the sample dataset application
- Finally, we are ready to run the script and actually load some data! Assuming that everything has gone correctly up to this point, you should see output that looks something like this:
```
Feb 09, 2025 5:44:51 PM com.hazelcast.client.impl.spi.ClientInvocationService
INFO: sample [zdih4share] [5.3.2] Running with 2 response threads, dynamic=true
Feb 09, 2025 5:44:51 PM com.hazelcast.core.LifecycleService
INFO: sample [zdih4share] [5.3.2] HazelcastClient 5.3.2 (20230821 - 8d62ceb, b6cb479) is STARTING
Feb 09, 2025 5:44:51 PM com.hazelcast.core.LifecycleService
INFO: sample [zdih4share] [5.3.2] HazelcastClient 5.3.2 (20230821 - 8d62ceb, b6cb479) is STARTED
Feb 09, 2025 5:44:51 PM com.hazelcast.client.impl.connection.ClientConnectionManager
INFO: sample [zdih4share] [5.3.2] Trying to connect to cluster: zdih4share
Feb 09, 2025 5:44:51 PM com.hazelcast.client.impl.connection.ClientConnectionManager
INFO: sample [zdih4share] [5.3.2] Trying to connect to [xxx.xxx.x.xxx]:5701
Feb 09, 2025 5:44:51 PM com.hazelcast.core.LifecycleService
INFO: sample [zdih4share] [5.3.2] HazelcastClient 5.3.2 (20230821 - 8d62ceb, b6cb479) is CLIENT_CONNECTED
Feb 09, 2025 5:44:51 PM com.hazelcast.client.impl.connection.ClientConnectionManager
INFO: sample [zdih4share] [5.3.2] Authenticated with server [xxx.xxx.x.xxx]:5701:3ec6bca6-71d9-4f9e-af59-2e7461083874, server version: 5.3.2, local address: /xxx.xxx.x.xxx:61238
Feb 09, 2025 5:44:51 PM com.hazelcast.internal.diagnostics.Diagnostics
INFO: sample [zdih4share] [5.3.2] Diagnostics disabled. To enable add -Dhazelcast.diagnostics.enabled=true to the JVM arguments.
Feb 09, 2025 5:44:51 PM com.hazelcast.client.impl.spi.ClientClusterService
INFO: sample [zdih4share] [5.3.2]
Members [1] {
    Member [xxx.xxx.x.xxx]:5701 - 3ec6bca6-71d9-4f9e-af59-2e7461083874
}
Feb 09, 2025 5:44:51 PM com.hazelcast.client.impl.statistics.ClientStatisticsService
INFO: Client statistics is enabled with period 5 seconds.
[INFO ] 2025-02-12 08:35:37.168 [pool-2-thread-2] DatasetReader - datasetType = 2401 datasetName = //'ZDIH.HVL.SHVLSDAT(HVLDTRAN)' datasetLrecl = 4096 datasetOptions = rb,type=record
[INFO ] 2025-02-12 08:35:37.168 [pool-2-thread-1] DatasetReader - datasetType = 51228 datasetName = //'ZDIH.HVL.SHVLSDAT(HVLDACCT)' datasetLrecl = 4096 datasetOptions = rb,type=record
[DEBUG] 2025-02-12 08:35:37.317 [zDIHThread0] TransactionRecordProcessor - TransactionRecordProcessor: Entry time: 2025-02-12 08:35:37,317
[DEBUG] 2025-02-12 08:35:37.317 [zDIHThread0] AccountRecordProcessor - AccountRecordProcessor: Entry time: 2025-02-12 08:35:37,317
[DEBUG] 2025-02-12 08:35:37.322 [zDIHThread1] AccountRecordProcessor - AccountRecordProcessor: Entry time: 2025-02-12 08:35:37,322
[DEBUG] 2025-02-12 08:35:37.323 [zDIHThread1] TransactionRecordProcessor - TransactionRecordProcessor: Entry time: 2025-02-12 08:35:37,323
[INFO ] 2025-02-12 08:35:37.407 [pool-2-thread-1] DatasetReader - SAMPLE_ACCOUNT Data set Record Count = 1000
[INFO ] 2025-02-12 08:35:37.420 [pool-2-thread-1] DatasetReader - Total Record count = 1000 | 51228 Record Count = 1000 | 2401 Record Count = 0 | Unknown Record Count = 0
[INFO ] 2025-02-12 08:35:37.420 [pool-2-thread-2] DatasetReader - SAMPLE_TRANSACTION Data set Record Count = 1000
[INFO ] 2025-02-12 08:35:37.445 [pool-2-thread-2] DatasetReader - SAMPLE_TRANSACTION Data set Record Count = 2000
[INFO ] 2025-02-12 08:35:37.470 [pool-2-thread-2] DatasetReader - SAMPLE_TRANSACTION Data set Record Count = 3000
[INFO ] 2025-02-12 08:35:37.481 [pool-2-thread-2] DatasetReader - SAMPLE_TRANSACTION Data set Record Count = 4000
[INFO ] 2025-02-12 08:35:37.507 [pool-2-thread-2] DatasetReader - Total Record count = 4645 | 51228 Record Count = 0 | 2401 Record Count = 4645 | Unknown Record Count = 0
[INFO ] 2025-02-12 08:35:38.435 [zDIHThread0] AccountRecordProcessor - Total Sample_account Record Count: 491 | Sample_account Processed Record Count: 491
[INFO ] 2025-02-12 08:35:38.436 [zDIHThread1] AccountRecordProcessor - Total Sample_account Record Count: 509 | Sample_account Processed Record Count: 509
[INFO ] 2025-02-12 08:35:38.441 [zDIHThread0] TransactionRecordProcessor - Total Sample_transaction Record Count: 2327 | Sample_transaction Processed Record Count: 2327
[INFO ] 2025-02-12 08:35:38.441 [zDIHThread1] TransactionRecordProcessor - Total Sample_transaction Record Count: 2318 | Sample_transaction Processed Record Count: 2318
```
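For reference, the run script boils down to launching the with-dependencies JAR against your YAML file. The main class and paths below are hypothetical placeholders, not the actual zDIH names; check the sample’s `readme.md` for the real invocation:

```shell
# run_dataset_sample.sh - illustrative sketch only
# MAIN_CLASS is a hypothetical placeholder; the real entry point is
# documented in the sample application's readme.md
APP_DIR=/u/myuser/zdih/apps                 # example application directory
MAIN_CLASS=com.example.DatasetSampleMain    # hypothetical - replace

java -cp "$APP_DIR/sample-zdih-dataset-1.0.0-jar-with-dependencies.jar" \
     "$MAIN_CLASS" \
     "$APP_DIR/sample_process_config.yaml"
```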
Management Center - Viewing Data
Now that the data has been loaded, we can go back to the MC to view and query it. To verify that your data has made it into your cluster, navigate in the left-side panel of the MC to the Storage section > Maps. Here we can see that two maps (tables) have been created and loaded with data. Note that their names differ from the default values SAMPLE_ACCOUNT and SAMPLE_TRANSACTION because we gave them custom names for this example:

From here, you can additionally select either of the maps (by simply clicking on their name) to see additional details about them, including entry count and memory sizes, as well as map statistics like throughput and latency of inserting data (PUTs) and reading data (GETs).


If you’d like to go the extra mile to see the statistics change in real-time, you can clear the data in each of the maps by clicking the red CLEAR DATA button (shown in the previous screenshot) and then re-run the script while observing the map details page.

Management Center - Querying Data
The IBM zDIH Management Center includes a SQL Browser feature that allows you to query the zDIH caches, and is located in the top-right of the toolbar menu.

Once you're in the SQL Browser, you should see a screen that looks like the screenshot below:

The Queryable objects list (left side menu) should include the cache names that you have already created. If the caches are not appearing, you will need to use the Connector Wizard to add them. Details on how to use the Connector Wizard can be found in the SQL Browser in IBM® zDIH Management Center section of the zDIH documentation website.
In this case, the caches were already available, so we can start writing our query by clicking either the Compose New Query button (highlighted in image above) or the + button near the home button at the top. This will bring us to a vanilla SQL form:

From here, if you’re familiar with SQL you can easily start writing your query manually, or you can add the cache name from the Queryable objects list. The easiest way to do this is to place your cursor in the editor, hover over the name of the cache you want to query, click the 3 dots next to it, then select Add.


Next, just click on the EXECUTE QUERY button to run the SQL query. The results will be listed in tabular form in the Query Results table at the bottom of the page:

Please note that results are limited to 1000 rows by default to reduce the time for returning the results to the browser. They can be displayed on the Query Results table in groups of 25, 50 or 100. Additionally, you can export the results, for example as CSV or JSON files.
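As a concrete starting point, a first query against one of the sample caches might look like the following (we use the default cache name from earlier; substitute the name you chose):

```sql
-- Browse a handful of rows from the account cache
SELECT * FROM SAMPLE_ACCOUNT LIMIT 10;
```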

Conclusion
In this blog we’ve covered a wide range of topics, giving you a complete view of how to load, view, and query data in your zDIH clusters using the zDIH Management Center. If you’ve made it this far without any questions or issues, congratulations! If you have issues with any of the steps discussed here, or if you want more information on what else zDIH can do, always remember to check the official zDIH documentation. Thank you for taking the time to read this blog!
References
- zDIH Official Documentation
- Previous zPET blogs on zDIH
Authors
Trent Balta (Trent.Balta@ibm.com)
Kieron Hinds (kdhinds@us.ibm.com)