IBM i Global

  • 1.  RGZPFM *OPTIMIZE and Update Sequence

    Posted Mon July 29, 2024 12:21 PM

    Yes, we know there are ways around using RGZPFM; for this particular project, we need to stick with RGZPFM. And yes, we know we should not have 53 logical files on a table with 1,004,711,796 records, and that is after the reorg.

    When we use *OPTIMIZE, what I have read says the system will do the rebuilds asynchronously, yet I see only two jobs running that appear to be doing them.
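    For reference, this is the command shape in question (the library and file names below are placeholders, not our real ones):

        RGZPFM     FILE(MYLIB/MYPF) RBDACCPTH(*OPTIMIZE) ALWCANCEL(*YES)

    (As I understand it, RBDACCPTH(*OPTIMIZE) is only valid together with ALWCANCEL(*YES).)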

    We have a CLLE we use for other purposes that submits a job for each logical file on the physical; each job resets the access path maintenance to *IMMED and then simply opens and closes the file to rebuild the access path. The jobs go to a job queue with a maximum of five active jobs, so the access paths get rebuilt in larger groups. (A sketch of the per-logical job follows below.)
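    A minimal sketch of what each submitted job does (the parameter handling here is simplified; the real CLLE does more):

        PGM        PARM(&LIB &LF)
          DCL        VAR(&LIB) TYPE(*CHAR) LEN(10)
          DCL        VAR(&LF)  TYPE(*CHAR) LEN(10)
          /* Put the access path back on immediate maintenance          */
          CHGLF      FILE(&LIB/&LF) MAINT(*IMMED)
          /* A full open forces the invalidated access path to rebuild  */
          OPNDBF     FILE(&LIB/&LF) OPTION(*INP)
          CLOF       OPNID(&LF)
        ENDPGM

    The driver submits one of these per logical file with SBMJOB to a job queue whose maximum active jobs is 5, so up to five rebuilds run in parallel.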

    The pain point is that users started accessing the file this morning, before the system had finished the rebuilds. Yes, next time we will start the reorg on Friday night.

    1) Should we use *YES instead of *OPTIMIZE, or would that take forever since the rebuilds would take place one file at a time?

    2) Using *OPTIMIZE, would it be safe to run the CLLE program as well? Would that simply double the work, or is *OPTIMIZE smart enough to know an access path has already been rebuilt by another process since the reorg?




    ------------------------------
    David Taylor
    Sr Application Developer
    Range Resources
    Fort Worth
    ------------------------------


  • 2.  RE: RGZPFM *OPTIMIZE and Update Sequence

    Posted Tue July 30, 2024 12:31 AM
    Edited by Satid S Tue July 30, 2024 03:06 AM

    Dear David

    To boost index rebuild speed, you need to utilize Db2 SMP: install the Db2 SMP option first, then run CHGQRYA DEGREE(*NBRTASKS n) before running RGZPFM. The value of 'n' depends on how many active CPU cores you have in your LPAR; I would use a rule of thumb of n=3 per active core (for POWER8 CPUs onward). Check the PDI graph of CPU utilization during the access path rebuild period, and if there is still some CPU power left, you can increase the value of 'n' for the subsequent rebuild.

    If you do not want to run CHGQRYA, another option is to set the system value QQRYDEGREE to *OPTIMIZE, which runs 2 rebuild tasks per virtual processor configured in your LPAR (not adjustable - so I prefer CHGQRYA). One last choice, as of IBM i 7.1, is to set the environment variable QIBM_WC_QDBSRV_JOBS to *CALC or *MAX (details here: SE51094: OSP QDBSRV QCMNARB START MORE SYSTEM JOBS at https://www.ibm.com/mysupport/s/defect/aCI3p0000008vBA/dt311015?language=en_US ).
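    As a concrete illustration only (assuming an LPAR with 8 active cores; MYLIB/MYPF is a placeholder file name), the sequence in the reorg job would be:

        CHGQRYA    DEGREE(*NBRTASKS 24)   /* n = 3 tasks x 8 active cores */
        RGZPFM     FILE(MYLIB/MYPF) RBDACCPTH(*OPTIMIZE) ALWCANCEL(*YES)

    Note that CHGQRYA affects only the job it runs in, so it must be issued in the same job as the RGZPFM.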

    You should also allocate as much memory as possible to the *BASE memory pool before starting the RGZPFM, and leave as much CPU as possible free for this task.
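    One way to free that memory, assuming another shared pool can spare it (the pool name and sizes below are only illustrative):

        /* Memory removed from a shared pool returns to *BASE            */
        CHGSHRPOOL POOL(*INTERACT) SIZE(2000000) ACTLVL(50)
        /* Optionally raise the guaranteed minimum of *BASE (size in KB) */
        CHGSYSVAL  SYSVAL(QBASPOOL) VALUE(16000000)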

    This Technote may provide you with more useful info:  Questions, Answers, and Tips on RGZPFM - Improving Its Performance at https://www.ibm.com/support/pages/questions-answers-and-tips-rgzpfm-improving-its-performance      



    ------------------------------
    Satid S
    ------------------------------



  • 3.  RE: RGZPFM *OPTIMIZE and Update Sequence

    Posted Tue July 30, 2024 08:23 AM

    Thanks @Satid S. We found that using the CLLE to run OPNDBF FILE(&LIB/&LF) OPTION(*INP) for each related file took care of the multiple rebuild streams: we ended up with the two system-generated rebuilds plus five running from the CLLE, and this discussion helped us get there. Once the CLLE picked up a file, it dropped off the list displayed by EDTRBDAP. My user profile does not have access to that command, so I used the ACS Schemas -> Database -> Database Maintenance -> Index builds and index rebuilds screen to monitor progress.



    ------------------------------
    David Taylor
    Sr Application Developer
    Range Resources
    Fort Worth
    ------------------------------



  • 4.  RE: RGZPFM *OPTIMIZE and Update Sequence

    Posted Wed July 31, 2024 08:29 AM

    David, FWIW I had a similar issue, except we needed to copy the data to new versions of the database. We had some files with 100+ logical files, some with up to 6 record formats. I set up a front-end program to dump the DBR and keys to outfiles, removed the logical file members, copied the data to the physicals, and lastly ran a back-end program to add the members back to the logical files in descending key-complexity order. As I recall, this sped up the process significantly.
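    From memory, the flow was roughly this, with made-up names (OLDLIB, NEWLIB, MYPF, MYLF01) standing in for the outfile-driven loops:

        /* Front end: capture relations and key info to outfiles          */
        DSPDBR     FILE(NEWLIB/MYPF) OUTPUT(*OUTFILE) OUTFILE(QTEMP/DBRLIST)
        DSPFD      FILE(NEWLIB/*ALL) TYPE(*ACCPTH) OUTPUT(*OUTFILE) +
                     OUTFILE(QTEMP/KEYLIST)
        /* Detach each dependent logical so the copy maintains nothing    */
        RMVM      FILE(NEWLIB/MYLF01) MBR(MYLF01)

        /* Bulk copy into the bare physical                               */
        CPYF       FROMFILE(OLDLIB/MYPF) TOFILE(NEWLIB/MYPF) +
                     MBROPT(*REPLACE) FMTOPT(*MAP *DROP)

        /* Back end: re-add members in descending key-complexity order    */
        ADDLFM     FILE(NEWLIB/MYLF01) MBR(MYLF01) +
                     DTAMBRS((NEWLIB/MYPF (MYPF)))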

    Mike.



    ------------------------------
    Mike Overlander
    ------------------------------