Automation with Power

Power Business Continuity and Automation

Connect, learn, and share your experiences using the business continuity and automation technologies and practices designed to ensure uninterrupted operations and rapid recovery for workloads running on IBM Power systems. 


#Power
#TechXchangeConferenceLab

  • 1.  HACMP failover .... rpc.lockd

    Posted Fri November 19, 2010 05:57 AM

    Originally posted by: Bonzodog


    We are having an issue with a newly installed HACMP system which, after being perfectly happy during testing, is now misbehaving during failover.

    There are a number of weird things going on which I will work through:
    1) During resource release from Machine A, as resources fail over to Machine B, the rpc.lockd daemon is not dying. Looking at hacmp.out I can see repeated entries such as

    +PRIMES3_RG:rg_move_complete+271 [ 24 -gt 0 ]
    +PRIMES3_RG:rg_move_complete+267 lssrc -s rpc.lockd
    +PRIMES3_RG:rg_move_complete+267 LC_ALL=C
    +PRIMES3_RG:rg_move_complete+267 grep stopping
    rpc.lockd nfs 294926 stopping
    +PRIMES3_RG:rg_move_complete+267 [ 0 = 0 ]
    +PRIMES3_RG:rg_move_complete+270 +PRIMES3_RG:rg_move_complete+270 expr 24 - 1
    COUNT=23
    +PRIMES3_RG:rg_move_complete+271 sleep 1

    being repeated until the counter inexorably gets to zero and the node goes into error.
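    The trace boils down to a simple countdown loop. Here is a minimal, self-contained sketch of that logic (the AIX-only "lssrc -s rpc.lockd | grep stopping" check is stubbed out so the sketch runs anywhere; the stub's 5-poll delay is made up for illustration):

    ```shell
#!/bin/sh
# Sketch of the wait loop seen in the hacmp.out trace above.
# The real event script polls "lssrc -s rpc.lockd | LC_ALL=C grep stopping";
# that AIX-only check is stubbed here so the sketch is runnable anywhere.

STOP_AFTER=5   # pretend rpc.lockd leaves "stopping" after 5 polls (invented)
polls=0

lockd_still_stopping() {
    polls=$((polls + 1))
    [ "$polls" -le "$STOP_AFTER" ]
}

COUNT=24       # rg_move_complete starts its counter at 24
while [ "$COUNT" -gt 0 ]; do
    lockd_still_stopping || break   # daemon no longer "stopping": carry on
    COUNT=$((COUNT - 1))
    # the real script does "sleep 1" here, once per iteration
done

if [ "$COUNT" -eq 0 ]; then
    echo "rpc.lockd never stopped; the event script fails and the node errors"
else
    echo "rpc.lockd stopped, COUNT=$COUNT"
fi
    ```

    In the failing cluster the stubbed check never goes false, so COUNT counts down from 24 to 0 and the event script fails, which is exactly the symptom in the trace.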

    Running the "Recover From HACMP Script Failure" command from smit causes a pretty much immediate failover!

    2) During cluster start on Machine A as the active node (this doesn't happen on B), only those file systems which have been marked for NFS export (within the cluster resources) are actually mounted!

    This all points to NFS issues.

    What has changed?

    Tivoli has been installed since we performed all the tests!

    Anyone got any idea (apart from removing Tivoli!) what on earth to do?
    #PowerHAforAIX
    #PowerHA-(Formerly-known-as-HACMP)-Technical-Forum


  • 2.  Re: HACMP failover .... rpc.lockd

    Posted Sat November 20, 2010 03:01 AM

    Originally posted by: SystemAdmin


    Explain more of your NFS setup. Why do you want to stop rpc.lockd (or other NFS daemons)?
    #PowerHAforAIX
    #PowerHA-(Formerly-known-as-HACMP)-Technical-Forum


  • 3.  Re: HACMP failover .... rpc.lockd

    Posted Sun November 21, 2010 08:05 AM

    Originally posted by: Bonzodog


    Not being at work, this is from memory...

    We have a large number of NFS shares that are mounted on a series of Solaris 8 systems. The idea behind stopping NFS is to enable
    a cleaner failover during cluster changes.
    #PowerHA-(Formerly-known-as-HACMP)-Technical-Forum
    #PowerHAforAIX


  • 4.  Re: HACMP failover .... rpc.lockd

    Posted Mon November 22, 2010 08:40 AM

    Originally posted by: SystemAdmin


    So your cluster will act as an HA NFS server?
    The cluster software does magic behind your back, even taking over the NFS locks clients have established on one node. You should not interfere by trying to start or stop these services on your own.

    Read the hints on NFS exports in the manual (using the /usr/es/.../exports file, and using separate log volumes for cluster-exported filesystems).
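    For what it's worth, that cluster exports file uses the same syntax as AIX's /etc/exports; a hypothetical entry (filesystem and client host names invented) would look like:

    ```
/primes3/data -access=sunbox1:sunbox2,root=sunbox1
    ```

    As I understand it, the point of listing cluster-managed exports there rather than in /etc/exports is that the cluster software, not the base NFS subsystem at boot, controls when those filesystems get exported.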
    #PowerHAforAIX
    #PowerHA-(Formerly-known-as-HACMP)-Technical-Forum