AIX


Connect with fellow AIX users and experts to gain knowledge, share insights, and solve problems.


#Power

Limited I/O throughput on WPAR with "exported" device

  • 1.  Limited I/O throughput on WPAR with "exported" device

    Posted Wed April 29, 2015 04:47 PM

    Originally posted by: OSamir


    Hello
I'm seeing a disk write throughput limit on a filesystem (rw mount) backed by an "exported" hdisk device, in both a WPAR and a Versioned WPAR 5.3.

The Global Environment writes at 400 MB/s to the filesystem, while the WPAR does not exceed 75 MB/s, with 85% Wait% on the "exported" hdisk.
I don't understand why Wait% is so high...

    Example:

    IBM Power7 EC 8.0 - 40GB memory - AIX 7100-03-04 -  NPIV client - XIV Storage

- The Global environment (AIX 7100-03-04) is an NPIV client with 2 Fibre Channel adapters (hdisk0).

- The WPAR is a "rootvg WPAR" on an "exported" disk (hdisk1). => mkwpar -n wpar01 -D devname=hdisk1 rootvg=yes

- The WPAR datavg is on an "exported" disk (hdisk2). => chwpar -D devname=hdisk2 wpar01

An iozone write test does not exceed 75 MB/s on a filesystem in the WPAR datavg.
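For reference, when iozone is not at hand, a rough sequential-write comparison can be made with plain dd; the mount point below is a placeholder, not a path from the original post:

```shell
# Sketch of a quick sequential-write check, assuming the WPAR datavg
# filesystem is mounted at $TESTDIR (placeholder; defaults to /tmp here).
TESTDIR=${TESTDIR:-/tmp}

# Write 64 MiB of zeros. bs=1048576 (1 MiB) is spelled out numerically
# so it works with both AIX dd and GNU dd. GNU dd prints throughput on
# stderr; on AIX, wrap the command with time(1) instead.
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1048576 count=64

# Flush to disk so the figure is not just page-cache speed, then clean up.
sync
rm -f "$TESTDIR/ddtest.bin"
```

Running the same command in the Global environment and inside the WPAR makes the throughput gap easy to reproduce.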

Thanks for your help

    Sam

     


    #AIX-Forum


  • 2.  Re: Limited I/O throughput on WPAR with "exported" device

    Posted Thu April 30, 2015 09:06 AM

    Originally posted by: MattDulson


    Hi,

    Chris Gibson posted a great blog entry today that might be relevant here.

     

    https://www.ibm.com/developerworks/community/blogs/cgaix/entry/queue_depth_setting_in_a_versioned_wpar_on_aix?lang=en

     

    Matt


    #AIX-Forum


  • 3.  Re: Limited I/O throughput on WPAR with "exported" device

    Posted Thu April 30, 2015 09:51 AM

    Originally posted by: OSamir


Very good news! Thanks Matt, and thanks Chris for the quality write-up... I'm always a fan.


The same issue occurs in a WPAR on AIX 7.1 - see below.

hdisk2 will be the future datavg in the WPAR

    Global # lsattr -El hdisk2 -a queue_depth
    queue_depth 40 Queue DEPTH True

In kdb, hdisk2 = 0xF1000A01C0A58000

Global # echo scsidisk 0xF1000A01C0A58000 | kdb | grep queue_depth
    ushort queue_depth = 0x28;

Global # echo hcal 0x28 | kdb
    (0)> hcal 0x28
    Value hexa: 00000028 Value decimal: 40
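Outside kdb, the same hex-to-decimal cross-check can be done with printf in any POSIX shell:

```shell
# 0x28 hex is 40 decimal, matching the queue_depth reported by lsattr.
printf '%d\n' 0x28    # prints 40
```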

    Global # chwpar -D devname=hdisk2 wpar01

Global # echo scsidisk | kdb
    (0)> scsidisk
    "scsidisk_list" address...[0x0]
    NAME ADDRESS STATE CMDS_OUT CURRBUF LOW
    dac0 0xF1000A0150688000 0x00000001 0x0000 0x0000000000000000 0x0
    hdisk0 0xF1000A0150684000 0x00000002 0x0000 0x0000000000000000 0x0
    hdisk0 0xF1000A01C0A54000 0x00000002 0x0000 0x0000000000000000 0x0
    hdisk1 0xF1000A01C0A58000 0x00000002 0x0000 0x0000000000000000 0x0

Now, Global hdisk2 is hdisk1 in the WPAR.

    Global # echo scsidisk 0xF1000A01C0A58000 | kdb | grep queue_depth
    ushort queue_depth = 0x1;

Bad queue_depth in the kernel - it dropped from 40 to 1 after the export.
A very interesting diagnosis, big up!

    -----------------------------------------------------------------------------------------------------

The PdAt ODM class workaround also works fine for an AIX 7.1 WPAR with an exported device. With this new attribute, queue_depth has the correct value in the kernel ... and write performance returns to normal.
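For readers who cannot reach the linked blog: the workaround adds a queue_depth attribute entry to the PdAt ODM class inside the WPAR, loaded with odmadd and picked up after a WPAR restart. The sketch below is illustrative only - the uniquetype string, default of 40, and value range are assumptions that must be matched to the disk driver actually in use:

```
PdAt:
        uniquetype = "disk/fcp/mpioosdisk"
        attribute = "queue_depth"
        deflt = "40"
        values = "1-256,1"
        width = ""
        type = "R"
        generic = "DU"
        rep = "nr"
        nls_index = 0
```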

Thanks, IBM support :))


    The end

     


    #AIX-Forum