AIX

  • 1.  iostat: % tm_act

    Posted Fri December 18, 2009 05:31 AM

    Originally posted by: grukrz1


    hello,

    I have mirrored VG built of two LUNs (each LV in the VG is mirrored on these LUNs).

    iostat shows:

    hdisk2         xfer:  %tm_act      bps      tps      bread      bwrtn
                             17.3     1.6M    239.7     226.1K       1.4M
                   read:      rps  avgserv  minserv  maxserv  timeouts    fails
                              8.0      7.4      0.1     4.0S         0        0
                   write:     wps  avgserv  minserv  maxserv  timeouts    fails
                            231.8      1.3      0.1     5.8S         0        0
                   queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz   sqfull
                              1.4      0.0     9.5S      0.1      0.0     52.4
    hdisk3         xfer:  %tm_act      bps      tps      bread      bwrtn
                             44.2     1.6M    242.1     237.2K       1.4M
                   read:      rps  avgserv  minserv  maxserv  timeouts    fails
                             10.3      8.7      0.1     3.7S         0        0
                   write:     wps  avgserv  minserv  maxserv  timeouts    fails
                            231.7      3.7      0.1     6.7S         0        0
                   queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz   sqfull
                              2.6      0.0     6.7S      0.1      0.1     56.7
    What could be the reason that %tm_act is so much higher for hdisk3?

    Are the sqfull statistics OK, or should queue_depth be adjusted? (queue_depth is currently 8 for the LUNs on the Hitachi storage we use.)
    thx in advance,
    K.
    #AIX-Forum
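For reference, the queue settings mentioned above can be inspected and changed with the standard AIX device commands. A sketch of the usual sequence, using the hdisk names from the post (the queue_depth value of 16 is only an example, not a recommendation):

```shell
# Show the current queue_depth for the disk
lsattr -El hdisk3 -a queue_depth

# Watch extended per-disk statistics every 5 seconds
# to see whether sqfull keeps climbing between intervals
iostat -D hdisk2 hdisk3 5

# Stage a larger queue_depth in the ODM; -P defers the change
# until the next reboot (or vary the device offline and omit -P).
# 16 is an example value only -- confirm with the storage vendor first.
chdev -l hdisk3 -a queue_depth=16 -P
```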


  • 2.  Re: iostat: % tm_act

    Posted Fri December 18, 2009 05:33 AM

    Originally posted by: grukrz1


    formatted iostat output:

    
    hdisk2         xfer:  %tm_act      bps      tps      bread      bwrtn
                             17.3     1.6M    239.7     226.1K       1.4M
                   read:      rps  avgserv  minserv  maxserv  timeouts    fails
                              8.0      7.4      0.1     4.0S         0        0
                   write:     wps  avgserv  minserv  maxserv  timeouts    fails
                            231.8      1.3      0.1     5.8S         0        0
                   queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz   sqfull
                              1.4      0.0     9.5S      0.1      0.0     52.4
    hdisk3         xfer:  %tm_act      bps      tps      bread      bwrtn
                             44.2     1.6M    242.1     237.2K       1.4M
                   read:      rps  avgserv  minserv  maxserv  timeouts    fails
                             10.3      8.7      0.1     3.7S         0        0
                   write:     wps  avgserv  minserv  maxserv  timeouts    fails
                            231.7      3.7      0.1     6.7S         0        0
                   queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz   sqfull
                              2.6      0.0     6.7S      0.1      0.1     56.7
    

    #AIX-Forum


  • 3.  Re: iostat: % tm_act

    Posted Fri December 18, 2009 05:33 AM

    Originally posted by: SystemAdmin


    hdisk3 is doing more work in terms of reads than hdisk2, although write-wise they are doing the same, which is what you would expect of mirrors (writes must go to both copies, while reads can be served from either). What is the hardware behind the LUNs, and what queue_depth is set for the hdisks? I notice there is a value in sqfull and was wondering whether it is consistently there.

    Thanks,
    Sam
    #AIX-Forum
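One way to answer the "is sqfull consistently there?" question is to capture several `iostat -D` intervals and scan the queue lines programmatically. A minimal parser sketch; the field layout is assumed to match the output posted above (headers on one line, values on the next), so adjust it if your iostat build prints differently:

```python
# Sketch: map each hdisk to the sqfull value from its iostat -D "queue:" line.
# Assumes the two-line header/value layout shown in the thread above.

def sqfull_per_disk(iostat_text):
    """Return {hdisk_name: sqfull} parsed from iostat -D output."""
    result = {}
    current = None
    lines = iostat_text.splitlines()
    for i, line in enumerate(lines):
        fields = line.split()
        if fields and fields[0].startswith("hdisk"):
            current = fields[0]          # remember which disk we are in
        if "queue:" in line and current:
            # The numbers sit on the line after the header:
            # avgtime mintime maxtime avgwqsz avgsqsz sqfull
            values = lines[i + 1].split()
            result[current] = float(values[-1])
    return result

# Trimmed-down sample using the queue lines from the post above
sample = """\
hdisk2  xfer: ...
        queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz  sqfull
                    1.4      0.0     9.5S      0.1      0.0    52.4
hdisk3  xfer: ...
        queue:  avgtime  mintime  maxtime  avgwqsz  avgsqsz  sqfull
                    2.6      0.0     6.7S      0.1      0.1    56.7
"""

print(sqfull_per_disk(sample))  # {'hdisk2': 52.4, 'hdisk3': 56.7}
```

Running it against output collected every few minutes would show whether the nonzero sqfull is a one-off or persistent.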


  • 5.  Re: iostat: % tm_act

    Posted Fri December 18, 2009 05:46 AM

    Originally posted by: grukrz1


    The queue_depth is set to 8 per our SAN team's recommendation for that storage (HDS).
    #AIX-Forum


  • 6.  Re: iostat: % tm_act

    Posted Fri December 18, 2009 05:48 AM

    Originally posted by: SystemAdmin


    If sqfull remains consistently high, then you might want to talk to them again to see whether anything can be upped there; otherwise, the additional reads on hdisk3 are the cause of the difference in activity.
    Cheers,
    Sam
    #AIX-Forum