Power Global

A central meeting place for IBM Power. Connect, ask questions, share ideas, and explore the full spectrum of Power technologies across workloads, industries, and use cases.


#TechXchangePresenter
#Power

  • 1.  IBMi Performance Problems with NFS (Client)

    Posted Wed December 17, 2025 10:23 AM

    Hi all,

    I'm experiencing performance issues with the NFS client on IBM i with V7R4 TR11.

    I want to use the following options in the mount command:

    ===> ADDMFS TYPE(*NFS) MFS('nfs-server:/') MNTOVRDIR('/nfs') OPTIONS('rsize=524288,wsize=524288')

    I'm getting CPFA1BB back, saying the value for the option keyword "rsize" is not within the valid range.

    1. However, these options are not being accepted. What would be the correct values for rsize/wsize?

    2. Can I force the client to use TCP instead of UDP?

    3. Does anyone have a better idea how I can optimize NFS on my IBM i?

    Thanks in advance

    Falko



    ------------------------------
    Falko Huetter
    Consultant
    PROFI Engineering Systems AG
    Darmstadt
    0049 6151 82900
    ------------------------------


  • 2.  RE: IBMi Performance Problems with NFS (Client)

    Posted Thu December 18, 2025 02:39 AM
    Edited by Satid S Thu December 18, 2025 03:07 AM

    Dear Falko

    The maximum value for both rsize and wsize is 8096 (8K), and that is already the default when OPTIONS is not specified.

    1) Have you checked whether any other clients of this NFS server (say, Windows/Linux PCs) experience the same performance issue when accessing the same target NFS server files? If IBM i is the only NFS client with the performance issue, check a few factors in IBM i memory pool number 2 (*BASE) with the WRKSYSSTS command, since the IBM i NFS client jobs run in this memory pool by default.

    2) First check the MAX ACT parameter of pool 2. It should have a value of 1,000 or higher. By default, pool 2 runs very many threads of many IBM i OS jobs, including the IBM i NFS server and client. If the MAX ACT value of pool 2 is much less than 1,000, it can be a possible cause of your problem and you should change it to at least 1,000. Increasing the MAX ACT of a memory pool can generally be done online with no disruption to the system. (Decreasing it should be done during a low-workload period.)

    3) If action 2 does not solve the issue, check whether memory pool 2 has a high faulting rate while you use the NFS client. A high fault rate is a value in the range of 1,000 or more (assuming the IBM i server uses HDDs, as opposed to SSDs). Press F10 to see the instantaneous value (NOT F5, which shows the average value over the elapsed time). If it is high, you need to increase the size of pool 2 to bring the faulting rate down (see the command sketch below).
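    A rough command sketch for points 2 and 3 (assuming pool 2 is the *BASE shared pool, whose activity level is, as far as I know, controlled by the QBASACTLVL system value; the 1,000 figure is only the minimum suggested above, not a tuned value):

                 WRKSYSSTS                                      /* Check MAX ACT and fault rate of pool 2; F10 shows instantaneous values */
                 CHGSYSVAL  SYSVAL(QBASACTLVL) VALUE('1000')    /* Raise MAX ACT of the *BASE pool (illustrative value) */

    The pool size itself can also be adjusted directly on the WRKSYSSTS display if the fault rate remains high.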

    If you still have the problem after taking all the above actions, please explain in some detail what you were doing that led to the performance issue.



    ------------------------------
    Satid S
    ------------------------------



  • 3.  RE: IBMi Performance Problems with NFS (Client)

    Posted Thu December 18, 2025 05:23 AM

    Is your LPAR a VIO client using Virtual Ethernet? If so, have you looked at the Virtual Ethernet buffer settings on the VIO Servers? This can have a big effect on NFS for AIX, and it would be the same for any OS using vEth.

    Phill.



    ------------------------------
    Phill Rowbottom
    Unix Consultant
    Service Express
    Bedford
    ------------------------------



  • 4.  RE: IBMi Performance Problems with NFS (Client)

    Posted Thu December 18, 2025 06:20 AM
    Edited by Satid S Thu December 18, 2025 06:21 AM

    One more factor I just recalled: the IBM i TCP send and receive buffer sizes. (The receive size is the one relevant here for your IBM i NFS client, but it is also good practice to set a proper send size as well.)

    The default TCP send and receive buffer size of 64KB can be too small and cause performance issues when consistently large data transfers are involved and a fast physical connection is used. You should increase it if you know such large transfers are the norm on your IBM i server; I would say set it to at least some 200KB. This setting affects every TCP socket connection. Use the CHGTCPA command to change the buffer sizes. They can be changed online with no disruption, but the new buffer size takes effect only for connections made after the change.
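    For example (the buffer values here are only illustrative; 262144 bytes is roughly the "at least 200KB" I mentioned above):

                 CHGTCPA    TCPRCVBUF(262144) TCPSNDBUF(262144)   /* TCP receive and send buffer sizes, in bytes */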



    ------------------------------
    Satid S
    ------------------------------



  • 5.  RE: IBMi Performance Problems with NFS (Client)

    Posted Thu December 18, 2025 03:00 PM

    Hello Falko,

    Since you are at V7R4, you need to force the NFS client to use TCP:

    1. Start the BIO daemon on the client "STRNFSSVR SERVER(*BIO)"
    2. "CALL QP0FPTOS (*NFSFORCE UDP OFF)" at each IPL prior to mounting the NFS paths.

    Regards,

    Tsvetan



    ------------------------------
    Tsvetan Marinov
    ------------------------------



  • 6.  RE: IBMi Performance Problems with NFS (Client)

    Posted Thu December 18, 2025 03:18 PM

    Hello Falko,

    You might be in the same situation I was a few years back:  facing inexplicably poor performance of NFS transfers.

    After long-winded research, what ultimately made a night-and-day difference was forcing all NFS transfers to use TCP instead of UDP (the transport the NFS protocol was originally designed around). There were multiple reasons why NFS had subpar performance and reliability with UDP. Some lay within the NFS client implementation (UDP transfers use a single connection/single thread, while TCP transfers do not have that restriction), while others are outside of our control (e.g. poor UDP packet handling and reliability when the packets pass through managed switches and especially security appliances/routers, as our packet captures revealed, while TCP transfers generally get treated better).

    Once we switched to TCP, NFS transfers became rock solid with zero issues, and we now pretty much use them in a "set it and forget it" fashion.

    Here is an example of the CLP code I execute once at every IPL to force any subsequently issued ADDMFS commands to use TCP rather than UDP:

    /* ========================================================================= */
    /* =========*/  subr ForceTCP      /*======================================= */
    /* ========================================================================= */

    /* Forcing TCP instead of UDP as the transport protocol */

    /* At R730 and NFSV3 you can force NFS to use TCP instead of UDP but it will */
    /* be for all NFS traffic not just this one nfs mount command or one server. */

    /* To utilize TCP as an NFS client, you will need to do the following:       */

    /*   0.  Make sure the user profile executing the following commands is      */
    /*       enrolled in system directory. Verify with WRKDIRE command beforehand.*/

    /*   1.  Start the BIO daemon on the client                                  */

                 STRNFSSVR  SERVER(*BIO)

    /*   2.  At each IPL prior to mounting the NFS paths:                        */

                 CALL       PGM(QP0FPTOS) PARM(*NFSFORCE UDP OFF)

    endsubr

    For your inspiration, here are the ADDMFS parameters I use on all mount commands. Note that 32768 for both the read (RSIZE) and write (WSIZE) buffers is the maximum design value, and it is also what IBM Support recommended for optimum performance.

    OPTIONS('RW,SUID,RETRY=5,RSIZE=32768,WSIZE=32768,TIMEO=20,RETRANS=5,ACREGMIN=30,ACREGMAX=60,ACDIRMIN=30,ACDIRMAX=60,SOFT') CCSID(*BINARY *ASCII)
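    Combined with the mount from the first post (server path and mount directory taken from there purely as an illustration), the full command would look something like this:

    ===> ADDMFS TYPE(*NFS) MFS('nfs-server:/') MNTOVRDIR('/nfs') OPTIONS('RW,SUID,RETRY=5,RSIZE=32768,WSIZE=32768,TIMEO=20,RETRANS=5,ACREGMIN=30,ACREGMAX=60,ACDIRMIN=30,ACDIRMAX=60,SOFT') CCSID(*BINARY *ASCII)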

    Hope this helps,

    Roman



    ------------------------------
    Roman Chloupek
    ------------------------------



  • 7.  RE: IBMi Performance Problems with NFS (Client)

    Posted Fri December 19, 2025 03:23 AM

    Thanks a lot, the NFS client now works with TCP and is significantly faster.



    ------------------------------
    Falko Huetter
    Consultant
    PROFI Engineering Systems AG
    Darmstadt
    0049 6151 82900
    ------------------------------



  • 8.  RE: IBMi Performance Problems with NFS (Client)

    Posted Fri December 19, 2025 03:49 AM

    FYI, larger sizes for the NFS read and write buffers are supported at V7R5 and above.

    rsize=n
    For the mount of a Network File System, specifies the size of the read buffer in bytes. The read buffer is used for data transfer between the NFS client and the remote NFS server on an NFS read request. The allowed range is 512 to 524288. If rsize is not specified, the default value of 32768 is assumed. For better performance, the read buffer should be a multiple of the application buffer size.
    wsize=n
    For the mount of a Network File System, specifies the size of the write buffer in bytes. The write buffer is used for data transfer between the NFS client and the remote NFS server on an NFS write request. The allowed range is 512 to 524288. If wsize is not specified, the default value of 32768 is assumed. For better performance, the write buffer should be a multiple of the application buffer size.

    Regards,

    Tsvetan



    ------------------------------
    Tsvetan Marinov
    ------------------------------