Db2 Tools for z/OS


Installing and upgrading IBM Db2 Query Monitor CAE web server in z/OS Unix

By Jørn Thyssen posted Thu August 31, 2023 05:26 AM

  


Introduction

The IBM Db2 Query Monitor CAE server offers many advantages and features, such as a web-based UI, consolidation of performance data from Db2 data sharing group members, alerting, automated actions, graphing, and much more.

The CAE server is a Java-based server that can run under z/OS Unix or on a Windows server. Because it is Java-based, it will exploit zIIP processors if available.

Many customers prefer to install the CAE server under Unix System Services, because installing on a Windows server requires provisioning the server, maintaining it (patching, upgrades, etc.), and additional network setup (firewalls). Typically, such work is carried out by separate groups in the enterprise, making installation and maintenance more complicated.

This series contains several blogs:

  • Installing and upgrading the Query Monitor CAE server in z/OS Unix (this blog)
  • Configuring the Query Monitor CAE server (future blog)
  • Query Monitor CAE server operations (future blog)
  • Using certificates with Query Monitor CAE server in z/OS Unix (future blog)
  • Using AT-TLS for internal Query Monitor CAE communication (future blog)

This blog assumes that you are installing in z/OS Unix. For Windows installation please see the manual.

The diagram below shows the CAE-related components:

 

CAE-related components

There will be a CAE agent on every LPAR in the sysplex where you have a Db2 Query Monitor collector installed and running.

For a basic install, the CQM CAE server is installed on a single LPAR. Some customers instead install on shared zFS behind a DVIPA so the server can be started on any LPAR in the sysplex, or run a second hot-standby CAE server on a different LPAR.

zFS considerations

As mentioned in the introduction, the CAE server is Java-based and hence runs in z/OS Unix.

The components are spread across three different zFS paths. Normally, all three will be on shared zFS, so the CAE server can be started from any LPAR in the sysplex.

Binary file path

Default path name: /usr/lpp/IBM/cqm/v3r3

The files in this directory are maintained by SMP/E during APPLY processing and can be mounted R/O during operation.

Important:

There are three SMP/E DDDEF PATHs: SCQMBIN, SCQMLIB and SCQMCLS.

DO NOT DELETE the trailing “IBM” in the paths, as doing so leads to errors during SMP/E APPLY processing.

ADD DDDEF(SCQMBIN)
  PATH('/usr/lpp/IBM/cqm/v3r3/bin/IBM/') .

ADD DDDEF(SCQMLIB)
  PATH('/usr/lpp/IBM/cqm/v3r3/lib/IBM/') .

ADD DDDEF(SCQMCLS)
  PATH('/usr/lpp/IBM/cqm/v3r3/classes/IBM/') .

CQM_VAR_HOME path

Default: none

Example: /var/cqm

This path contains metadata, config, customizations, user preferences, and much more. 

Important: The directory cannot be shared among multiple servers. Each server must have its own directory.

Important: These files are maintained by two jobs that are generated by IBM Tools Customizer for z/OS (TCz) during the customization process. Sample jobs can also be found in HLQ.SCQMSAMP.

Job #1: TCz job C0CUNPX (or HLQ.SCQMSAMP(CQMCUNPX))

This job installs the GA (General Availability) version of the file system. This job only needs to be run once during initial installation or if you want to start over later. The GA version is stored in HLQ.SCQMTRAN.

Job #2: TCz job C1CUPPT (or HLQ.SCQMSAMP(CQMCUPPT))

This job installs cumulative fixes into the file system. The fix itself is stored in HLQ.SCQMTRAN. 

Run this job after you have applied maintenance: stop the CAE server, run the job, correct any file permission issues (see below), and restart the CAE server.

Important: The directory cannot be moved from environment to environment as it contains files specific to the environment. For example, if you copy the zFS from dev to prod, then you will overwrite your production alert setup, your alert dashboard, and other configuration items.

Hence, the C1CUPPT / CQMCUPPT job must be run in each environment where the CAE server is installed, after you have activated new maintenance.

Important: Check the job outputs for any permission errors or other failures.

Important: The zFS must be mounted R/W during maintenance, and the CQM CAE started task user must have R/W access to all files and directories. If you have run the C1CUPPT / CQMCUPPT job under a different user ID, you must either change ownership of the files, e.g.,

chown -R <cqmstcuser> <CQM_VAR_HOME>

or make the files writable for the STC user, e.g.,

chgrp -R <cqmstcgroup> <CQM_VAR_HOME>

chmod -R g+rwX <CQM_VAR_HOME>

These commands will have to be run as root.
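After fixing ownership or permissions, it can be worth confirming that nothing was missed before restarting the server. The sketch below is an illustration only: the `check_var_home` helper is hypothetical, and the demo runs against a scratch directory; in practice you would point it at your real CQM_VAR_HOME and started task user.

```shell
#!/bin/sh
# Sketch: report files under CQM_VAR_HOME that the CAE started task user
# may not be able to use. "cqmstc" below is a placeholder user name.
check_var_home() {
  user="$1"; dir="$2"
  echo "Files under $dir not owned by $user:"
  find "$dir" ! -user "$user" -exec ls -ld {} +
  echo "Files under $dir that the owner cannot write:"
  find "$dir" -type f ! -perm -u+w -exec ls -ld {} +
}

# In practice: check_var_home cqmstc /var/cqm
# Demo against a scratch directory with one read-only file:
demo=$(mktemp -d)
touch "$demo/ok.cfg" "$demo/locked.cfg"
chmod u-w "$demo/locked.cfg"
check_var_home "$(id -un)" "$demo"
rm -rf "$demo"
```

An empty report for both checks means the STC user should have no trouble with the files that C1CUPPT laid down.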

z/OS Unix Log path

The third and final zFS path is the z/OS Unix log path.

We recommend that you write the CAE server output to SYSOUT:

//STDOUT DD SYSOUT=*    

This allows you to see the server output in the JES2 spool, and it also makes the CAE server run in the foreground, so you can see the started task as active in SDSF (System Display and Search Facility) ST. Otherwise, the server runs in the background, the JCL proc completes, and you can only find the server in SDSF DA or SDSF PS.

But if you prefer, you can have log files written to zFS files:

//STDOUT DD PATH='/var/cqm/logs/cae_server.log',
//            PATHOPTS=(OWRONLY,OCREAT,OAPPEND),     
//            PATHMODE=(SIRWXU,SIRWXG,SIRWXO)        

Some customers use the same zFS as for CQM_VAR_HOME, but others prefer a separate zFS for the log files, so that an out-of-space condition does not affect CQM_VAR_HOME.

As mentioned, if you use SYSOUT=* no log files are written and the output goes to the JES2 spool instead.
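If you do write logs to zFS files, some housekeeping keeps the log file system from filling up. A minimal sketch, assuming the default log path and an arbitrary 30-day retention (both placeholders; the `prune_logs` helper is not part of the product):

```shell
#!/bin/sh
# Sketch: prune CAE server log files older than a retention period.
prune_logs() {
  logdir="$1"; days="$2"
  # -mtime +N matches files last modified more than N*24 hours ago.
  find "$logdir" -type f -name '*.log' -mtime +"$days" -exec rm -f {} +
}

# In practice: prune_logs /var/cqm/logs 30
# Demo against a scratch directory with one artificially old file:
demo=$(mktemp -d)
touch "$demo/new.log"
touch -t 202001010000 "$demo/old.log"
prune_logs "$demo" 30
ls "$demo"      # only new.log remains
rm -rf "$demo"
```

You could schedule something like this from your site's automation, or simply run it manually when the log zFS fills.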

To summarize:

| Path | Example | Maintained by | Mount attributes during operation | CAE server started task permissions |
| --- | --- | --- | --- | --- |
| z/OS Unix binary path | /usr/lpp/IBM/cqm/v3r3 | SMP/E apply | R/O | Read |
| VAR HOME path (CQM_VAR_HOME) | /var/cqm | C0CUNPX (initial install), C1CUPPT (apply cumulative maintenance) | R/W | R/W |
| z/OS Unix log path (optional) | /var/cqm/logs | N/A | R/W | R/W |

Configuration and zFS paths

TCz will generate sample jobs and configuration files for the CAE server. Those samples will be customized according to your input in TCz.

The zFS paths are referenced in the JCL proc and in the CAE server configuration (see the examples below).

z/OS Unix Binary Path

The z/OS Unix binary path is referenced in the BPXBATCH PARM in the JCL proc for starting and stopping the CAE server:

//CQM33SVS PROC
//*MESSAGE STARTING CQM USS CAE SERVER
//SERVER   EXEC PGM=BPXBATCH,REGION=800M,TIME=NOLIMIT,
// PARM='SH /usr/lpp/IBM/cqm/v3r3/bin/start_cae_server'
//STDOUT   DD SYSOUT=*
//STDENV   DD DSN=HLQ.CQMPARM(CAESRVIN),
//            DISP=SHR

VAR HOME path

The CQM_VAR_HOME path is referenced in the STDENV DD data set used in the JCL procedure to start and stop the CAE server.

Configuration of the CAE server and agents will be covered in a future blog, but the sample below shows the reference to the VAR HOME path:

CQM_JDBC_PORT=9446
CQM_HEAP=600                                                       
CQM_VAR_HOME=/var/cqm                                  
CQM_JAVA=/usr/lpp/java/IBM/J8.0_64/                                  
CQM_CAE_AGENT_LISTENER_PORT=3448                                    
CQM_HTTPS_PORT=9444                                                
CQM_CAE_KEYSTORE_TYPE=RACF                                         
CQM_CAE_TRUSTSTORE=safkeyring:///CQMring                           
CQM_CAE_KEYSTORE=safkeyring:///CQMring                              
CQM_WEB_KEY_ALIAS=Plex3certificate
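Before starting the server, the path settings in STDENV can be sanity-checked from the z/OS Unix shell. The sketch below is not a product utility: `check_env` is a hypothetical helper, and the paths mirror the sample values above (note that CQM_JAVA in the sample ends with a slash, which the helper relies on).

```shell
#!/bin/sh
# Sketch: sanity-check key STDENV path settings before starting the CAE
# server. Paths are placeholders taken from the sample STDENV member.
check_env() {
  var_home="$1"; java_home="$2"; rc=0
  [ -d "$var_home" ] && [ -w "$var_home" ] \
    || { echo "CQM_VAR_HOME $var_home missing or not writable"; rc=1; }
  [ -x "${java_home}bin/java" ] \
    || { echo "no java executable under CQM_JAVA $java_home"; rc=1; }
  return $rc
}

# In practice: check_env /var/cqm /usr/lpp/java/IBM/J8.0_64/
# Demo with a scratch layout standing in for the real paths:
demo=$(mktemp -d)
mkdir -p "$demo/java/bin"
: > "$demo/java/bin/java" && chmod +x "$demo/java/bin/java"
check_env "$demo" "$demo/java/" && echo "environment looks OK"
rm -rf "$demo"
```

Checks like these catch the most common startup failures (wrong Java path, VAR HOME not writable by the STC user) before they surface as errors in the server output.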
        

z/OS Unix log path

If you prefer log files in z/OS Unix, then the z/OS Unix log path is referenced in the JCL procedure for starting and stopping the CAE server. See the discussion in the “z/OS Unix log path” section earlier in this blog.

Typical install process

For a new install of the CQM CAE server, the high-level process is:

  1. z/OS Unix Binary Path
    1. Run SMP/E RECEIVE and APPLY to lay down the z/OS Unix binary path and to update SCQMTRAN, which contains the GA version and the cumulative fix for VAR HOME
    2. Make sure you clone both the zFS and SCQMTRAN from your SMP/E environment to the target systems
  2. VAR Home Path
    1. Run the TCz job C0CUNPX to install GA version
    2. Run the TCz job C1CUPPT to install cumulative fix
    3. Fix z/OS Unix permissions so that the CQM CAE started task user has R/W
    4. For an initial install you may clone this zFS to each target system, or you may run both jobs on each target system.
      Do not clone this zFS for upgrades, as you will overwrite alert configurations, stored alerts, user preferences, and much more. 

Typical upgrade process

To upgrade an already running CAE server: 

  1. Make sure the CAE server is stopped while upgrading the z/OS Unix files
  2. z/OS Unix Binary Path
    1. Run SMP/E RECEIVE and APPLY to update the z/OS Unix binary path and to update SCQMTRAN with the latest cumulative fix for VAR HOME
    2. Make sure you clone both the zFS and SCQMTRAN from your SMP/E environment to the target systems
  3. VAR Home Path
    1. Run the TCz job C1CUPPT to install cumulative fix
    2. Fix z/OS Unix permissions so that the CQM CAE started task user has R/W
    3. Run C1CUPPT on each system where the CAE server is installed.
      Do not clone this zFS from another system, as you will overwrite alert configurations, stored alerts, user preferences, and much more. 
