Setting up High Availability for IBM Data Management Console with Pacemaker

By Thuan Bui posted Wed June 08, 2022 03:38 AM

  

High Availability (HA) is normally set up for production systems with multiple server nodes to mitigate system downtime via failover. This blog describes how to set up an IBM Db2 Data Management Console (DMC) HA environment with Pacemaker that provides detection of and recovery from DMC server failure.

Architecture

Hostname                External IP     Setup role

prudery1.fyre.ibm.com   9.30.217.20     Primary
prudery2.fyre.ibm.com   9.30.217.21     Standby
-                       9.30.217.255    Virtual IP


Environment

The following products/technologies are used:

  • Red Hat Enterprise Linux
  • Data Management Console
  • Pacemaker & Corosync
  • pcs (pacemaker configuration system)
  • inotify/inotifywait/rsync


Overview

  1. In this simple HA environment, two nodes are configured as DMC servers along with a Virtual IP (VIP) that is used as a floating IP.  All IPs should be in the same subnet to avoid additional port routing.
  2. DMC and Pacemaker are installed on both nodes.
  3. Both DMC servers share the same repository DB, but only one of the DMC servers should be up (online) at a time, i.e., active/passive mode.  The repository should be hosted on another, independent node.
  4. The VIP is used as part of the URL to access the console, no matter which DMC server is online.
  5. Pacemaker periodically monitors the status of the online DMC server (via the status.sh script), and when it detects that the DMC server is down, it attempts to restart DMC (via restart.sh).  If DMC cannot be started within a specific time, Pacemaker switches (fails over) to the other node and brings up the DMC server there.
  6. All the timings mentioned above are configurable in the dmc resource definition as part of the setup.
  7. DMC configuration data is synchronized in real time from the online server to the offline server (via inotify/inotifywait and rsync) so that when failover occurs, the server continues to work seamlessly with the same set of config data.  Logs are periodically copied from the online server to the offline server (via crontab and rsync) for troubleshooting after failover if needed.  Two scripts (datasync and syncup.sh) are provided to set up and handle the data synchronization.


Notes:

  1. In the following setup steps, replace the data with your environment's info, such as host names, IP addresses, and the path to the installed DMC.
  2. Throughout this doc, the terms online/primary/active and offline/secondary/standby/passive are used interchangeably.


Setup

Set hostnames

  • If the nodes do not already have distinct hostnames, set the hostname on each node, for example:

                hostnamectl --static --transient set-hostname prudery2.fyre.ibm.com

  • On each node, edit /etc/hosts to map the hostnames of all servers, for example:
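
Based on the architecture table above, the entries might look like this (adjust to your own hostnames and IPs):

                9.30.217.20    prudery1.fyre.ibm.com
                9.30.217.21    prudery2.fyre.ibm.com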


Set up synchronization between primary (online) and secondary (offline) nodes

  • On both nodes, install rsync and inotify-tools.

                yum install rsync inotify-tools

  • Confirm that inotifywait exists by running the following commands:

                updatedb

                locate inotifywait

Note down the location of inotifywait (for example, /usr/local/bin/inotifywait), as this path will be specified later in the syncup.sh script.

  • Set up password-less SSH for root between nodes to enable rsync to copy files from one server to another without password prompt.
    • Run the ssh-keygen command on both the nodes.

                            ssh-keygen -t rsa

    • Either manually copy the contents of the output file (~/.ssh/id_rsa.pub) from one node and append it to the ~/.ssh/authorized_keys file on the other node, or run this command to copy the key to the other node:

                           ssh-copy-id -i ~/.ssh/id_rsa.pub <user>@<other-node>
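
To confirm that password-less SSH works, run a remote command from each node; it should complete without a password prompt. For example, from prudery1:

                           ssh root@prudery2.fyre.ibm.com hostname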


For more information on DMC data synchronization, refer to the similar step for the HA setup with TSAMP (Tivoli System Automation for Multiplatforms):

https://www.ibm.com/docs/en/db2-data-mgr-console/3.1.x?topic=availability-synchronizing-primary-secondary-nodes


Set up Pacemaker

Installation
  • On each node in the cluster, install the Red Hat High Availability Add-On software packages along with all available fence agents from the High Availability channel.

                         yum install pcs corosync pacemaker fence-agents-all
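
If yum cannot find these packages, you may need to enable the High Availability repository first. For example, on RHEL 8 (the repository ID depends on your RHEL version and subscription):

                         subscription-manager repos --enable=rhel-8-for-x86_64-highavailability-rpms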

  • Set a password on each node for the user ID hacluster, which is the default pcs administration account created by the pcs installation. It is recommended that the hacluster password be the same on both nodes.

                         passwd hacluster

  • On each node in the cluster, execute the following commands to start the pcsd service (a pcs daemon which operates as a remote server for pcs) and to enable it at system start:

                         systemctl start pcsd.service

                         systemctl enable pcsd.service
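
If firewalld is active on the nodes, allow cluster traffic between them; the predefined high-availability firewalld service covers the pcsd and corosync ports:

                         firewall-cmd --permanent --add-service=high-availability

                         firewall-cmd --reload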

  • On the node(s) from which you will be running pcs commands, authenticate the pcs user hacluster.  Enter username “hacluster” and password when prompted.

                        pcs host auth prudery1.fyre.ibm.com prudery2.fyre.ibm.com


Cluster creation
  • Create the two-node cluster named “my_cluster” that consists of nodes prudery1.fyre.ibm.com and prudery2.fyre.ibm.com. This will propagate the cluster configuration files to both nodes in the cluster.

                   pcs cluster setup --start my_cluster prudery1.fyre.ibm.com prudery2.fyre.ibm.com

  • Enable the cluster services to run on each node in the cluster when the node is booted.

                   pcs cluster enable --all

                   pcs cluster status


Fencing Configuration

A fencing device isn't configured in this example.
To disable fencing and avoid errors/warnings, run this command:
        pcs property set stonith-enabled=false
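
You can verify that the property took effect with:

        pcs property list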

 

Creating resource and resource group 

Create the scripts for file synchronization using the script contents provided at the end of this blog

  • On both nodes, create a script named datasync. The script implements the start, stop, and status actions that enable or disable the data synchronization process. Save the script in a directory of your choice.

 For detailed contents, refer to the datasync script.

  • On both nodes, create a script named syncup.sh and save it to the same directory as the file datasync.

Replace the value that is assigned to destIP with the IP address of the pair server. Specifically, on the primary node, destIP is the IP address of the secondary node; on the secondary node, destIP is the IP address of the primary node.

For detailed contents, refer to the syncup.sh script.

Change the file permissions of both files:
      chmod +x datasync
      chmod +x syncup.sh

 
Create DMC OCF

Create a file named dmc to be used as the resource agent that manages high availability and failover for the DMC server; it implements the start, stop, and monitor functions.
On both nodes, put the file in /usr/lib/ocf/resource.d/heartbeat/
Change the file permission:
      chmod +x dmc

 For detailed contents, refer to file dmc.

 

Create resource
  • Create a virtual IP resource named ‘vip’ using the virtual IP address:

                pcs resource create vip ocf:heartbeat:IPaddr2 ip=9.30.217.255
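
If the interface's default netmask does not match your subnet, the IPaddr2 agent also accepts a cidr_netmask parameter; for example, assuming a /24 subnet:

                pcs resource create vip ocf:heartbeat:IPaddr2 ip=9.30.217.255 cidr_netmask=24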

  • Verify that the virtual IP takes effect:

               Start DMC on any node.
               Verify you can access DMC via the URL: http://9.30.217.255:11080
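
You can also check which node currently holds the VIP by running the following on either node:

               pcs status resources

               ip addr show | grep 9.30.217.255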

  • Create the dmc resource and specify values for the required parameters pinstallpath, sinstallpath, and datasyncpath.  These are the paths where the DMC server is installed on the primary and secondary nodes and where the sync scripts are placed, respectively.  Change these paths according to your environment.

pcs resource create dmc ocf:heartbeat:dmc pinstallpath="<dmc_installed_path>/ibm-datamgmtconsole" sinstallpath="<dmc_installed_path>/ibm-datamgmtconsole" datasyncpath="<synch_scripts_path>"

For example,
pcs resource create dmc ocf:heartbeat:dmc pinstallpath="/usr/local/src/ibm-datamgmtconsole" sinstallpath="/usr/local/src/ibm-datamgmtconsole" datasyncpath="/data/syncup"
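
The monitor interval and start/stop timeouts mentioned in the Overview come from the dmc resource agent's metadata and can be tuned per resource with pcs; for example (the values here are illustrative):

pcs resource update dmc op monitor interval=10s timeout=30s
pcs resource update dmc op start timeout=180s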

 
Create resource group

To ensure these resources all run on the same node, configure them as part of a resource group.

                        pcs resource group add dmcgroup vip dmc
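
Verify that both resources are in the group and started on the same node:

                        pcs resource group list

                        pcs status resources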


For more information, refer to https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-startup-haaa

 

Data types to be synced

Type        Path                                          Frequency

Directory   /Config                                       Real time
Directory   /logs                                         Periodic (every 5 mins)
Directory   /wlp/usr/servers/dsweb/resources/security     Real time
File        /wlp/usr/servers/dsweb/bootstrap.properties   Real time
File        /wlp/usr/servers/dsweb/server.env             Real time
File        /wlp/usr/servers/dsweb/jvm.options            Real time
File        /addons/drs/drs-agent/.env                    Real time
File        /addons/drs/drs-agent/config.yaml             Real time
Directory   /addons/drs/drs-agent/insightdb               Real time
Directory   /addons/drs/drs-agent/logs                    Periodic (every 1 hour)
File        /addons/job-scheduler/config/config.json      Real time
Directory   /addons/job-scheduler/logs                    Periodic (every 1 hour)

Note that you can change the copy period of log directories as needed in the syncup.sh script. For more info on the data types to be synced, refer to this doc:

https://www.ibm.com/docs/en/db2-data-mgr-console/3.1.x?topic=multiplatforms-data-types-be-synced

Testing the HA setup

During the test, you can use either the command line or the PCSD GUI where applicable.  Make sure that DMC is running on the primary node and has been stopped on the standby node.


Data synchronization

  • Real-time update from online to offline: update any file in the /Config directory and verify that it is updated on the offline node in real time (a quick comparison sketch follows this list).  For example,
    • On both nodes, view the contents of admin_log.properties and check the value of MaxFileSizeinM:

 vi /usr/local/src/ibm-datamgmtconsole/Config/admin_log.properties

    • Change MaxFileSizeinM in admin_log.properties on the active node to some value, e.g., 10.
    • Verify that the value is also changed in admin_log.properties on the passive node.

  • Updates on the offline node are not synced to the online node:
    • Reverse the direction of the change by changing the value in the file on the passive node.
    • Verify that the value is not changed in the same file on the active node.
  • Check that logs folders are periodically synced/copied to the offline node.
    • On the active node, run this command:

                            cat /var/log/cron | grep -E 'job-scheduler|drs-agent'

    • In this cron log, at the first minute of every hour you should see two rsync entries that copy the drs-agent and job-scheduler logs from the active node to the passive node.
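
For a quick comparison of a synced file on both nodes, you can run something like the following from the active node (the path and hostname are from this example environment; adjust for yours):

                grep MaxFileSizeinM /usr/local/src/ibm-datamgmtconsole/Config/admin_log.properties

                ssh root@prudery2.fyre.ibm.com grep MaxFileSizeinM /usr/local/src/ibm-datamgmtconsole/Config/admin_log.properties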


Auto recover/restart DMC on active node

  • Manually stop DMC
    • On the online node, run the stop.sh script to stop the server.
    • Run the status.sh script a few times to verify that the server is stopped and then automatically restarted within a couple of minutes.
  • Kill DMC process
    • Look for DMC process:

                            ps ax | grep -v "grep" | grep java

    • kill -9 <process ID>
    • Run the status.sh script a few times to check that the server is stopped and then automatically restarted within a couple of minutes.


Controlled failover to passive node

  • On both nodes, verify which one is online and offline by running:

                pcs status
                status.sh

  • You can also run the following commands:

                crontab -l

                ps ax | grep -v "grep" | grep inotifywait

             There should be three crontab entries and one inotifywait process on the online node, and none of these on the offline node.

  • Manually put the online node into standby mode:

               pcs node standby <online_node_url>
            For example, pcs node standby prudery1.fyre.ibm.com

  • On both nodes, verify that the DMC server has switched from one node to the other, i.e., DMC is stopped on the previously online node and started on the standby node.
  • Run the following command to return the primary node to normal and continue with the next test.

                pcs node unstandby prudery1.fyre.ibm.com 


Auto failover to passive node

  • Via stopping the cluster on the current active node.
    • Run this command on the active node

                            pcs cluster stop <online_node_URL>

    • On both nodes, check the cluster and DMC server status to verify that the DMC server fails over to the standby node.
    • Restart the stopped node to return the cluster to normal and continue with the next test.

                            pcs cluster start <offline_node_URL>

  • Via rebooting the active node.
    • Run this command on the active node:

                            reboot

    • While the active node is rebooting, the DMC server fails over to the standby node.



Further test

Set up events or tasks such as alerts, jobs, and blackouts on the primary server and verify that these continue to work after failover.



Provided scripts


dmc:

#!/bin/sh
#
# dmc
# Resource agent that manages high availability/failover of 
# IBM Db2 Data Management Console
# 
# Copyright (c) 2004 SUSE LINUX AG, Lars Marowsky-Bree
#                    All Rights Reserved.
#
# This program is free software; you can redistribute it and/or modify
# it under the terms of version 2 of the GNU General Public License as
# published by the Free Software Foundation.
#
# This program is distributed in the hope that it would be useful, but
# WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
#
# Further, this software is distributed without any warranty that it is
# free of the rightful claim of any third person regarding infringement
# or the like.  Any license provided herein, whether implied or
# otherwise, applies only to this software file.  Patent licenses, if
# any, provided herein do not apply to combinations of this program with
# other software, or any other product whatsoever.
#
# You should have received a copy of the GNU General Public License
# along with this program; if not, write the Free Software Foundation,
# Inc., 59 Temple Place - Suite 330, Boston MA 02111-1307, USA.
#

#######################################################################
# Initialization:

: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs

#######################################################################

PINSTALLPATH=""
SINSTALLPATH=""
DATASYNCPATH=""

meta_data() {
    cat <<END
<?xml version="1.0"?>
<!DOCTYPE resource-agent SYSTEM "ra-api-1.dtd">
<resource-agent name="dmc">
<version>1.0</version>

<longdesc lang="en">
Resource agent for Db2 Data Management Console to handle high availability and failover.
</longdesc>
<shortdesc lang="en">Resource agent for Db2 Data Management Console</shortdesc>

<parameters>
<parameter name="pinstallpath" unique="1" required="1">
<longdesc lang="en">
Install path of DMC console on Primary
</longdesc>
<shortdesc lang="en">Primary DMC Install path</shortdesc>
<content type="string" default="" />
</parameter>

<parameter name="sinstallpath" unique="1" required="1">
<longdesc lang="en">
Install path of DMC console on Standby
</longdesc>
<shortdesc lang="en">Standby DMC Install path</shortdesc>
<content type="string" default="" />
</parameter>

<parameter name="datasyncpath" unique="1" required="1">
<longdesc lang="en">
Path of scripts to synchronize DMC files
</longdesc>
<shortdesc lang="en">Script path</shortdesc>
<content type="string" default="" />
</parameter>

</parameters>

<actions>
<action name="start"        timeout="180s" />
<action name="stop"         timeout="90s" />
<action name="monitor"      timeout="20s" interval="5s" depth="0" />
<action name="meta-data"    timeout="5s" />
<action name="validate-all"   timeout="20s" />
</actions>
</resource-agent>
END
}

#######################################################################

dmc_usage() {
    cat <<END
usage: $0 {start|stop|monitor|validate-all|meta-data}

action:
  start         start DMC server
  stop          stop DMC server
  monitor       return status of DMC server
  meta-data     show meta data info
  validate-all  validate the instance parameters
END
}

dmc_start() {
    ocf_log info "$(date): starting dmc server ..."
    ${PINSTALLPATH}/bin/startup.sh >/dev/null 2>&1
    ocf_log info "$(date): dmc - server started"
    ${DATASYNCPATH}/datasync start ${PINSTALLPATH} ${SINSTALLPATH} >/dev/null 2>&1
    ocf_log info "$(date): dmc - datasync started"
    return $OCF_SUCCESS
}

dmc_stop() {
    ocf_log info "$(date): stopping dmc server ..."
    ${PINSTALLPATH}/bin/stop.sh >/dev/null 2>&1
    ocf_log info "$(date): dmc - server stopped"
    ${DATASYNCPATH}/datasync stop ${PINSTALLPATH} ${SINSTALLPATH} >/dev/null 2>&1
    ocf_log info "$(date): dmc - datasync stopped"
    return $OCF_SUCCESS
}

dmc_monitor() {
    ${PINSTALLPATH}/bin/status.sh
    if [ $? -eq 1 ]
    then
      ocf_log info "$(date): dmc server is not running.  Return code $OCF_NOT_RUNNING."
      return $OCF_NOT_RUNNING
    else
      ocf_log info "$(date): dmc server is running.  Return code $OCF_SUCCESS."
      return $OCF_SUCCESS
    fi
}

dmc_validate() {
    check_parm_err=0

    # check required dmc installpath parameter
    if [ -z "$OCF_RESKEY_pinstallpath" ]
    then
        ocf_log err "Required Primary DMC parameter pinstallpath is not set!"
        #return $OCF_ERR_CONFIGURED
        check_parm_err=1
    fi

    # check required dmc installpath parameter
    if [ -z "$OCF_RESKEY_sinstallpath" ]
    then
        ocf_log err "Required Secondary DMC parameter sinstallpath is not set!"
        #return $OCF_ERR_CONFIGURED
        check_parm_err=1
    fi

    # check required datasync path parameter
    if [ -z "$OCF_RESKEY_datasyncpath" ]
    then
        ocf_log err "Required Path to datasync scripts parameter datasyncpath is not set!"
        #return $OCF_ERR_CONFIGURED
        check_parm_err=1
    fi

    if [ $check_parm_err -eq 1 ]
    then
        # $$$ Temp - Set paths for testing by calling the script directly
        PINSTALLPATH="/usr/local/src/ibm-datamgmtconsole"
        SINSTALLPATH="/usr/local/src/ibm-datamgmtconsole"
        DATASYNCPATH="/data/syncup"
        #DATASYNCPATH="/usr/local/bin/syncup"

        return $OCF_ERR_CONFIGURED
    fi

    PINSTALLPATH="$OCF_RESKEY_pinstallpath"
    SINSTALLPATH="$OCF_RESKEY_sinstallpath"
    DATASYNCPATH="$OCF_RESKEY_datasyncpath"

    return $OCF_SUCCESS
}

case $__OCF_ACTION in
meta-data)  meta_data
        exit $OCF_SUCCESS
        ;;
start)  dmc_validate
        dmc_start
        ;;
stop)   dmc_validate
        dmc_stop
        ;;
monitor)    dmc_validate
        dmc_monitor
        ;;
validate-all)   dmc_validate
        ;;
*)      dmc_usage
        exit $OCF_ERR_UNIMPLEMENTED
        ;;
esac
rc=$?
ocf_log debug "${OCF_RESOURCE_INSTANCE} $__OCF_ACTION : $rc"
exit $rc


datasync:

#!/bin/bash
#######################################################################
# Initialization:
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
#######################################################################

OPSTATE_ONLINE=1
OPSTATE_OFFLINE=2
DATASYNCPATH="/data/syncup"
#DATASYNCPATH="/usr/local/bin"

Action=${1}
PINSTALLPATH=${2}
SINSTALLPATH=${3}
#DATASYNCPATH=${4}

case ${Action} in
        start)
          # remove crontab entries that scheduled periodical backup of /logs to offline node
          crontab -l | grep -v "ibm-datamgmtconsole" | crontab -
          #echo -e "$(date): datasync - crontab scheduled backup removed" >> /root/syncup/setup.log
          ocf_log info "$(date): datasync - crontab scheduled backup removed"
          # start synchronization of dmc data from primary to secondary node
          #nohup bash /root/syncup/syncup.sh >> /root/syncup/setup.log 2>&1 &
          nohup bash $DATASYNCPATH/syncup.sh $PINSTALLPATH $SINSTALLPATH >> $DATASYNCPATH/syncup.log 2>&1 &
          #echo -e "$(date): datasync - syncup started" >> /root/syncup/setup.log
          ocf_log info "$(date): datasync - syncup started"
          RC=0
          ;;

        stop)
          killall inotifywait
          #echo -e "$(date): datasync - inotify stopped" >> /root/syncup/setup.log
          ocf_log info "$(date): datasync - inotify stopped" 
          # remove crontab entries that scheduled periodical backup of /logs to offline node
          crontab -l | grep -v "ibm-datamgmtconsole" | crontab -
          #echo -e "$(date): datasync - crontab scheduled backup removed" >> /root/syncup/setup.log
          ocf_log info "$(date): datasync - crontab scheduled backup removed"
          RC=0
          ;;

        status)
          ps ax | grep -v "grep" | grep inotifywait > /dev/null
          if [ $? == 0 ]
          then
            # syncup.sh installs 3 rsync cron entries containing the install path
            if [ `crontab -l | grep "ibm-datamgmtconsole" | wc -l` -eq "3" ]
            then
              RC=${OPSTATE_ONLINE}  
            else
              RC=${OPSTATE_OFFLINE}
            fi
          else
            RC=${OPSTATE_OFFLINE}
          fi
          # echo -e "$(date): datasync - status check.  RC="${RC} >> /root/syncup/setup.log
          ;;
esac
exit $RC


syncup.sh:

#!/bin/bash
#######################################################################
# Initialization:
: ${OCF_FUNCTIONS_DIR=${OCF_ROOT}/lib/heartbeat}
. ${OCF_FUNCTIONS_DIR}/ocf-shellfuncs
#######################################################################

# *** Verify the path to inotifywait in your environment
inotifyDir="/usr/local/bin"
scriptDir="/data/syncup"
#scriptDir="/usr/local/bin"

# Source install path
#srcDir="/usr/local/src/ibm-datamgmtconsole"
srcDir=${1}

# *** Change the destination info according to your environment, especially the IP
destIP="9.30.217.21"
#destDir="/usr/local/src/ibm-datamgmtconsole"
destDir=${2}  # Target install path
destlogsDir="logsB"  # backup copy of logs folder

drSDir="/addons/drs/drs-agent"
jobDir="/addons/job-scheduler"

dir=""
action=""
file=""
subDir=""

# purge all logs
truncate -s 0 ${scriptDir}/syncup.log
truncate -s 0 ${scriptDir}/eventsync.log
truncate -s 0 ${scriptDir}/eventnotsync.log
truncate -s 0 ${scriptDir}/logsync.log

# module to sync file changes to other node
rsyncfile()
{
   if [[ $action == DELETE* ]]
     then
       echo -e "$(date) \n Warning: You've tried to delete important file $dir$file. It has been recovered from standby server." >> ${scriptDir}/eventsync.log
       rsync -avzP root@$destIP:$destDir$subDir$file $dir >> ${scriptDir}/eventsync.log
     else
       echo -e "\n $(date) Sync change of $action $dir$file to $destIP:$dir" >> ${scriptDir}/eventsync.log
       rsync -avzP --delete $dir$file root@$destIP:$destDir$subDir >> ${scriptDir}/eventsync.log
   fi    
}   

# Periodically copy/backup the logs file to the other node using crontab
#debug
echo -e "$(date): Setting up crontab for backing up $srcDir/logs/ to root@$destIP:$destDir/$destlogsDir/ and $srcDir$drSDir/logs/ to root@$destIP:$destDir/$drSDir/$destlogsDir/" >> ${scriptDir}/setup.log
# write out current crontab
crontab -l > cronlist 2>/dev/null
# Add new cron into temp cron file to make an exact copy of all files in logs directory inside the logsB directory for every 5 minutes
echo "*/5 * * * * rsync -avzP --delete $srcDir/logs/ root@$destIP:$destDir/$destlogsDir/ >> ${scriptDir}/logsync.log" >> cronlist
# Add another cron to backup logs directory in DrS to logsB directory in the other node for every hour at first minute
echo "1 */1 * * * rsync -avzP $srcDir$drSDir/logs/ root@$destIP:$destDir$drSDir/$destlogsDir/ >> ${scriptDir}/logsync.log" >> cronlist
# Add another cron to backup logs directory in job-scheduler to logsB directory in the other node for every hour at first minute
echo "1 */1 * * * rsync -avzP $srcDir$jobDir/logs/ root@$destIP:$destDir$jobDir/$destlogsDir/ >> ${scriptDir}/logsync.log" >> cronlist
#install new cron file
crontab cronlist
rm cronlist

# Wait for change events to the main directory and its subdirectory except the logs directory; and then process these change events
#debug
echo "$(date): Setting up inotifywait process" >> ${scriptDir}/setup.log
$inotifyDir/inotifywait --exclude 'logs' -rmq -e modify,create,delete,attrib,move ${srcDir}/ | while read event
  do
    # debug
    echo -e "\n $(date) $event" >> ${scriptDir}/syncup.log

    # parse event record which should contain the directory, followed by action and file
    dir=$(echo ${event}|cut -d ' ' -f1)
    action=$(echo ${event}|cut -d ' ' -f2)
    file=$(echo ${event}|cut -d ' ' -f3)
    subDir=${dir#*$srcDir}  # Extract sub directory after source install directory

    #debug
    echo -e "dir:$dir" >> ${scriptDir}/syncup.log  
    echo -e "action:$action" >> ${scriptDir}/syncup.log
    echo -e "file:$file" >> ${scriptDir}/syncup.log
    echo -e "subDir:$subDir" >> ${scriptDir}/syncup.log
    
    case "$subDir" in

       "/Config/"* | "/wlp/usr/servers/dsweb/resources/security/" | "$drSDir/insightdb/")
        if [[ $file == ""  ||  $file == .*  ||  $file == .swp  ||  $file == .swx ]]
        # not valid file to be synced
        then
          echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
        else 
          rsyncfile
        fi 
        ;;

      "/wlp/usr/servers/dsweb/")
        if [[ $file == "bootstrap.properties"  ||  $file == "server.env"  ||  $file == "jvm.options"  ]]
          then
            rsyncfile
          else
            echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
        fi
        ;;

      # File in DrS add-on directory
      "$drSDir/")
      #"/usr/local/src/ibm-datamgmtconsole/addons/drs/drs-agent/")
        if [[ $file == ".env"  ||  $file == "config.yaml"  ]]
          then
            rsyncfile
          else
              echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
        fi
        ;;

    # File in job-scheduler add-on directory
      "$jobDir/config/")
      #"/usr/local/src/ibm-datamgmtconsole/addons/job-scheduler/")
        if [[ $file == ".json" ]]
          then
            rsyncfile
          else
              echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
        fi
        ;;
    
      # Otherwise
      *)
      echo -e "\n *** Event not synced: $event " >> ${scriptDir}/eventnotsync.log 
      ;;

    esac

  done

Conclusion

This HA setup with Pacemaker enables DMC to auto-restart or fail over to the other server node in the cluster, which helps minimize console downtime.  If needed, you can extend the setup with quorum and fencing.  You can also consider using a logical volume or shared storage for DMC data instead of synchronizing data between the two nodes.

 

References:
  1. Configuring and Managing High Availability Clusters:

https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html-single/configuring_and_managing_high_availability_clusters/index

 

  2. Pacemaker Administration:

https://people.redhat.com/kgaillot/pacemaker/doc/2.1/Pacemaker_Administration/pdf/Pacemaker_Administration.pdf

 

  3. Setting up HA for DMC using TSAMP:

https://www.ibm.com/docs/en/db2-data-mgr-console/3.1.x?topic=administering-setting-up-high-availability

 

Please leave comments or suggestions.  You can reach me at tqbui@us.ibm.com.

Special thanks to Pei Pei Liang (liangpp@cn.ibm.com) for helping with additional set up and testing of Pacemaker and DMC.

