AIX

Connect with fellow AIX users and experts to gain knowledge, share insights, and solve problems.

  • 1.  HACMP two node cluster with SAN storage mirror

    Posted Mon April 07, 2014 06:54 AM

    Originally posted by: Odil


    Hi All,

    We have an HACMP two-node cluster with two SAN storages mirrored using LVM. No cluster sites are defined. Both nodes are connected to both SAN storages.

    The first issue is with the heartbeat network. I configured two disk heartbeat networks - one per SAN storage. While performing redundancy tests, once one of the SAN storages goes down, the cluster goes into ERROR state with heartbeat-network-down errors. What are the guidelines for configuring heartbeat networks in such an environment?

    Second issue: while performing redundancy tests and turning off the active server and one of the SAN storages at the same time, the resource group does not come up, with the following error message:

    +MAIN_Rg:cl_sync_vgs[191] lqueryvg -g 00c5e65700004c0000000143a479f069 -L
    +MAIN_Rg:cl_sync_vgs[191] cut -f2- '-d '
    +MAIN_Rg:cl_sync_vgs[192] read lv_name stale_count
    +MAIN_Rg:cl_sync_vgs[193] (( 1 != 3 ))
    +MAIN_Rg:cl_sync_vgs[195] [[ high == high ]]
    +MAIN_Rg:cl_sync_vgs[195] set -x
    +MAIN_Rg:cl_sync_vgs[197] : This logical volume has stale partitions, so sync it.
    +MAIN_Rg:cl_sync_vgs[198] : Doing 4 stale partitions at a time seems to be a
    +MAIN_Rg:cl_sync_vgs[199] : win most of the time. However, we will honor the
    +MAIN_Rg:cl_sync_vgs[200] : NUM_PARALLEL_LPS value in /etc/environment, if set.
    +MAIN_Rg:cl_sync_vgs[202] grep ^NUM_PARALLEL_LPS= /etc/environment
    +MAIN_Rg:cl_sync_vgs[202] NPL_VAR=''
    +MAIN_Rg:cl_sync_vgs[203] [[ 1 == 0 ]]
    +MAIN_Rg:cl_sync_vgs[213] cl_log 999 'Warning: syncvg can take considerable amount of time, depending on data size and network
     bandwidth.'
    +MAIN_Rg:cl_log[+50] version=1.10
    +MAIN_Rg:cl_log[+94] SYSLOG_FILE=/var/hacmp/adm/cluster.log
    ***************************
    Apr 5 2014 19:17:02 !!!!!!!!!! ERROR !!!!!!!!!!
    

    Please help! Thank you in advance!

     



  • 2.  Re: HACMP two node cluster with SAN storage mirror

    Posted Wed April 09, 2014 05:47 AM

    Originally posted by: tech100



    Is the VG fully synced before you take one storage box down?
    Does each LV copy reside on a different storage box? (You should consider configuring storage pools for all disks involved in the VG - e.g. storage pool STPOOL1 with disks from storage box 1 and STPOOL2 with disks from storage box 2.)
    Also check that the allocation policy is set so as to prevent wrong allocation of the copies.

    Check that VG disk quorum is OFF (it should be in a mirrored VG).
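    The quorum setting can be read from the `lsvg` listing. A minimal sketch, assuming a captured sample of `lsvg` output (the VG name `oravg` and the field values shown are illustrative - on the cluster node you would run `lsvg oravg` directly, and disable quorum with `chvg -Qn oravg`):

    ```shell
    # Parse the QUORUM field from (sample) 'lsvg' output; a value of 1
    # means the VG stays online as long as one copy of the VGDA survives.
    lsvg_output='VOLUME GROUP:  oravg    VG IDENTIFIER:  00c5e65700004c0000000143a479f069
    VG STATE:      active   QUORUM:         1 (Disabled)'
    quorum=$(printf '%s\n' "$lsvg_output" | sed -n 's/.*QUORUM: *\([0-9][0-9]*\).*/\1/p')
    echo "quorum=$quorum"
    ```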
     



  • 3.  Re: HACMP two node cluster with SAN storage mirror

    Posted Wed April 09, 2014 08:40 AM

    Originally posted by: Odil


    Hi tech100,

    1) Yes, the VGs were synced and closed before the storage box was taken down.

    2) Each LV copy resides on a different storage box. - When you say storage pool, do you mean using cluster sites?

    3) VG disk quorum is OFF - disabled.

    4) What do you mean by allocation policy? What should it be?

     

    Thanks!



  • 4.  Re: HACMP two node cluster with SAN storage mirror

    Posted Thu April 10, 2014 06:00 AM

    Originally posted by: tech100


    Sorry, I meant the mirror pool feature introduced in AIX 6.1.

    By the way, does a command like the one below show that no LV has copy 1 on a disk which is used for copy 2 of another LV?

    lsvg -l vgname|sed '1,2d'|awk '{print $1}'|xargs -I{} lslv -m {}|grep ^[[:digit:]]|awk '{print $3,$5}'|sort -u
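    To make the intent of that pipeline concrete, here is the same extraction run against a small simulated `lslv -m` listing (the LV and disk names are made up; on the real system the input comes from `lslv -m <lvname>` for every LV in the VG):

    ```shell
    # 'lslv -m' prints one line per LP: LP number, PP1, PV1, PP2, PV2.
    # Keep only the data lines and print the (copy-1 PV, copy-2 PV)
    # pairs; a single unique pair means the two copies of every LP
    # sit on different disks.
    lslv_m='testlv:/data
    LP    PP1  PV1            PP2  PV2
    0001  0100 hdisk3         0200 hdisk4
    0002  0101 hdisk3         0201 hdisk4'
    pairs=$(printf '%s\n' "$lslv_m" | grep '^[[:digit:]]' | awk '{print $3,$5}' | sort -u)
    echo "$pairs"
    ```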
    


    I was thinking of the "s" (super strict) allocation policy:
     

                  s
                       Sets a super strict allocation policy, so that the partitions allocated for one mirror cannot share a physical volume with the partitions from
                       another mirror.
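    If the policy needs changing, the standard LVM commands are `chlv -s s` for an existing LV and `mklv -s s` at creation time. A sketch only, with a hypothetical LV name `datalv01` - the commands need a live AIX LVM to execute, so here they are just assembled as command strings:

    ```shell
    # Hypothetical LV name; '-s s' selects the super strict policy.
    lv=datalv01
    chlv_cmd="chlv -s s $lv"
    mklv_cmd="mklv -y $lv -s s -c 2 oravg 10"   # 2 copies, 10 LPs in oravg
    echo "$chlv_cmd"
    echo "$mklv_cmd"
    ```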
    


    Do you use diskhb communication paths over disks from both storage boxes?

    How about the other communication paths (e.g. ether) - are they OK?

    You can also check some communication statistics using:

    lssrc -ls topsvcs



  • 5.  Re: HACMP two node cluster with SAN storage mirror

    Posted Thu April 10, 2014 08:55 AM

    Originally posted by: Odil


    Hi Tech100,

    1) The command gives me ONLY physical volume names - i.e. ALL LVs are mirrored.
    # lsvg -l oravg|sed '1,2d'|awk '{print $1}'|xargs -I{} lslv -m {}|grep ^[[:digit:]]|awk '{print $3,$5}'|sort -u
    hdisk3 hdisk4
    

    This is probably a good guess regarding the strict allocation policy. First of all, I can see from this that I have to set my forced varyon setting to TRUE - it is currently set to FALSE. But how can I check the current allocation policy settings? Can you please suggest?

    http://publib.boulder.ibm.com/infocenter/aix/v6r1/index.jsp?topic=%2Fcom.ibm.aix.hacmp.geolvm%2Fha_glvm_quorum_forced_varyon.htm
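    For reference, `lslv <lvname>` reports the strictness in its `EACH LP COPY ON A SEPARATE PV ?` field. A minimal sketch parsing a captured sample (the LV name and values shown are illustrative; to change the setting, `chlv -s s <lvname>`):

    ```shell
    # Extract the strictness setting from (sample) 'lslv' output; on the
    # node you would run 'lslv <lvname>' directly.
    lslv_out='LOGICAL VOLUME:     datalv01       VOLUME GROUP:   oravg
    EACH LP COPY ON A SEPARATE PV ?: yes (superstrict)'
    strictness=$(printf '%s\n' "$lslv_out" | sed -n 's/^EACH LP COPY ON A SEPARATE PV ?: *//p')
    echo "$strictness"
    ```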

    As for the other networks: as I mentioned, since I'm running a test case where one active node is fully down and one storage system is down, I have two networks down. This I can clearly see in the statistics from topsvcs.

    Thanks!



  • 6.  Re: HACMP two node cluster with SAN storage mirror

    Posted Thu April 17, 2014 02:02 AM

    Originally posted by: Odil


    Hi Tech100,

    Logging in to say thanks and to provide the resolution details for reference.

    1) You were right - I had a problem with one of the volume groups: its logical volumes were not fully mirrored.

    2) In general, HACMP expects a mirrored volume group that is being varied on to have its Mirror Pool Strictness set to Superstrict.

    3) If it is not set that way, you need to set forced varyon of volume groups on resource group start.

    Thanks!
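    The underlying LVM commands corresponding to those resolution steps, as a sketch only - the forced-varyon attribute itself is an HACMP resource group setting changed through smit, the VG name `oravg` is the one from this thread, and the commands need a live AIX system, so they are only assembled as strings here:

    ```shell
    vg=oravg
    # Step 2: set mirror pool strictness to super strict on the VG.
    strict_cmd="chvg -M s $vg"
    # Step 3: what HACMP effectively does at varyon time when forced
    # varyon is enabled and quorum cannot be met.
    varyon_cmd="varyonvg -f $vg"
    echo "$strict_cmd"
    echo "$varyon_cmd"
    ```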