Maximo


DB2 Database Tuning for Better Maximo Report Performance – Let's Share Best Practices

  • 1.  DB2 Database Tuning for Better Maximo Report Performance – Let's Share Best Practices

    Posted Sun May 11, 2025 08:41 AM

    Hi everyone,

    I'd like to share a recent experience I had while working on performance tuning for IBM Maximo, specifically focusing on improving the execution time of BIRT reports that were running slowly due to complex queries.

    The Challenge

    We encountered a scenario where one of our BIRT reports was taking a long time to execute. The underlying SQL query was fairly complex, and we suspected that database-level optimizations might help improve performance.

    To address this, I reviewed and applied several DB2 tuning parameters at both the database and instance level to optimize how the queries were processed and how resources were managed.

    DB2 Tuning Parameters Applied

    Here are some of the key changes we made:

    DB2 Registry Settings

db2set DB2_SKIPINSERTED=ON
    db2set DB2_INLIST_TO_NLJN=YES
    db2set DB2_MINIMIZE_LISTPREFETCH=YES
    db2set DB2_EVALUNCOMMITTED=YES
    db2set DB2_SKIPDELETED=ON
    db2set DB2_FMP_COMM_HEAPSZ=65536

    We restarted the DB2 services after setting these values:

db2stop
    db2start
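After the restart you can confirm the registry variables actually took effect before measuring anything. A quick check (run as the DB2 instance owner):

```shell
# List every DB2 registry variable currently set for the instance
db2set -all

# Or query a single variable; an empty result means it is not set
db2set DB2_SKIPINSERTED
```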

    Database Configuration (DB CFG)

db2 update db cfg for maxdb76 using CHNGPGS_THRESH 40
    db2 update db cfg for maxdb76 using DFT_QUERYOPT 5
    db2 update db cfg for maxdb76 using LOGBUFSZ 1024
    db2 update db cfg for maxdb76 using LOGFILSIZ 8192
    db2 update db cfg for maxdb76 using LOGPRIMARY 20
    db2 update db cfg for maxdb76 using LOGSECOND 100
    db2 update db cfg for maxdb76 using LOCKLIST AUTOMATIC
    db2 update db cfg for maxdb76 using LOCKTIMEOUT 300
    db2 update db cfg for maxdb76 using MAXFILOP 65535
    db2 update db cfg for maxdb76 using NUM_IOCLEANERS AUTOMATIC
    db2 update db cfg for maxdb76 using NUM_IOSERVERS AUTOMATIC
    db2 update db cfg for maxdb76 using SOFTMAX 1000
    db2 update db cfg for maxdb76 using STMTHEAP 20000
    db2 update db cfg for maxdb76 using CUR_COMMIT ON
    db2 update db cfg for maxdb76 using AUTO_REVAL DEFERRED
    db2 update db cfg for maxdb76 using DEC_TO_CHAR_FMT NEW
    db2 update db cfg for maxdb76 using STMT_CONC LITERALS
    db2 update db cfg for maxdb76 using DATABASE_MEMORY AUTOMATIC
    db2 update db cfg for maxdb76 using PCKCACHESZ AUTOMATIC
    db2 update db cfg for maxdb76 using DBHEAP AUTOMATIC
    db2 update db cfg for maxdb76 using STAT_HEAP_SZ AUTOMATIC

    DBM Configuration (DBM CFG)

db2 update dbm cfg using AGENT_STACK_SZ 1000
    db2 update dbm cfg using RQRIOBLK 65535
    db2 update dbm cfg using HEALTH_MON OFF
    db2 update dbm cfg using MON_HEAP_SZ AUTOMATIC
    db2 update dbm cfg using KEEPFENCED YES
    db2 update dbm cfg using FENCED_POOL 200
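To confirm what is actually in effect after applying the changes (maxdb76 is the database name used above; substitute your own):

```shell
# Show the full database-level configuration
db2 get db cfg for maxdb76

# Show the instance-level (DBM) configuration
db2 get dbm cfg

# Filter for a single parameter, e.g. STMTHEAP
db2 get db cfg for maxdb76 | grep -i STMTHEAP
```

Note that some database configuration parameters only take effect once all applications disconnect, so it is worth checking both the current and deferred values.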

    Results

After these tuning steps, we observed a noticeable improvement in the report's execution time and in the overall responsiveness of Maximo during report generation.


    I'd Like to Hear From You!

    If you've faced similar performance issues in Maximo - especially with slow BIRT reports or large SQL queries - I'd love to hear:

    • What tuning parameters or strategies worked best for you?

    • Have you used any DB2 advisors (like db2advis) or indexing strategies for specific reports?

    • Any lessons learned from past tuning efforts?

    Let's use this thread to share best practices and help each other optimize Maximo performance on DB2!

    Best regards,
    Mohamed Ghareeb
    Maximo Technical 



    ------------------------------
    Mohamed Ghareeb
    ------------------------------


  • 2.  RE: DB2 Database Tuning for Better Maximo Report Performance – Let's Share Best Practices

    Posted Tue May 13, 2025 03:51 AM

I would be wary about focusing solely on the database-level settings.

    There are other areas that will probably deliver more long-term benefit, particularly as the system is extended.

    Problems with slow BIRT reports

    ==

    In my experience individual BIRT reports often run slowly because of how they are designed e.g. the report isn't using indexes effectively or is executing queries that merge lots of data together when they could be better written using sub datasets.

    How are the queries in a report built?

    ==

If a SQL developer is just introduced to BIRT then it is normal to see a single dataset with a massive piece of SQL in it. The SQL will perform lots of JOINs / OUTER JOINs and be very complex....

    The effect of this is to force the database engine to create one massive pile of data and then sort/search it and return the results.

A better approach is to create additional datasets that can retrieve data from specific tables when particular conditions are met, e.g. when an attribute type is a particular value. This can save a lot of work.

Separating the queries out also makes it much easier to identify which queries are the most inefficient, so the use of indexes can be tuned.
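To make the idea concrete, here is a simplified sketch (table and column names follow the standard Maximo schema, but treat the queries themselves as illustrative, not as a recommended report design):

```shell
# Monolithic approach: one dataset that joins everything up front,
# forcing the engine to build and sort one large intermediate result.
db2 "SELECT wo.wonum, wo.description, wo.status, a.assetnum, a.description
     FROM workorder wo
     LEFT OUTER JOIN asset a
       ON a.assetnum = wo.assetnum AND a.siteid = wo.siteid"

# Split approach: the main dataset stays small and selective...
db2 "SELECT wonum, description, status, assetnum, siteid
     FROM workorder
     WHERE siteid = 'BEDFORD' AND status = 'APPR'"

# ...and a secondary dataset fetches asset details only for the rows
# that actually need them, keyed by values from the main dataset.
db2 "SELECT assetnum, description, location
     FROM asset
     WHERE assetnum = ? AND siteid = ?"
```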

    How much data is the report retrieving?

    ==

If a BIRT report is retrieving massive amounts of data then that can cause problems at both the database and JVM / WebSphere server level - in one case a user's desire for their own "database" created massive performance problems because they were retrieving over 5 years' worth of workorders.... on a regular basis....

Working with them led to changes in working practices and more analysis being performed on the database / in the JVM, which saved lots of time and processing - https://www.linkedin.com/pulse/too-much-data-killed-server-mark-robbins/
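One simple way to avoid that trap is to bound the data in the query itself rather than pulling years of history and filtering afterwards. A hedged sketch (REPORTDATE and SITEID are standard WORKORDER columns; the 90-day window and site value are just examples):

```shell
# Limit the report to a rolling window instead of the whole table.
# CURRENT DATE - 90 DAYS is standard DB2 date arithmetic.
db2 "SELECT wonum, description, status, reportdate
     FROM workorder
     WHERE reportdate >= CURRENT DATE - 90 DAYS
       AND siteid = 'BEDFORD'"
```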

    Warning about setting database parameters to optimise a specific task e.g. Report generation

    ==

    BIRT reports are generally used to read data and it is critical to understand if the database is also being used for writing e.g. if it is being used for the general Maximo activity like saving records.

    Optimisations designed to speed up reading can slow down write activities so changing these settings could speed up the BIRT reports but slow down record creation etc.

    A read replica database used for reporting type activities can deliver a real improvement particularly if additional tables are created that combine data from other tables.

     

    I would be wary about setting these values:

    db2set DB2_SKIPINSERTED=ON
    db2set DB2_SKIPDELETED=ON

    These control how the database handles uncommitted rows and Maximo/MAS will have been coded on the assumption that the database will behave in particular ways.
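If you have already set them and want to back them out, a registry variable is reverted to its default by assigning it an empty value and restarting the instance:

```shell
# Check whether the variables are currently set
db2set DB2_SKIPINSERTED
db2set DB2_SKIPDELETED

# Revert them to defaults (empty assignment unsets the variable),
# then restart the instance for the change to take effect
db2set DB2_SKIPINSERTED=
db2set DB2_SKIPDELETED=
db2stop
db2start
```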

IBM have published advice about performance tuning DB2 for Maximo/MAS

    ==

IBM have published a Best Practice For System Performance guide for 7.6 (https://www.ibm.com/support/pages/maximo-76-performance-best-practice-guide) and they have also published a page about diagnosing DB2 performance problems (https://ibm-mas.github.io/mas-performance/pd/db2-performance-diagnosis/)

    db2advis

    ==

    Personally I don't tend to rely on the db2advis tool very much because it takes a simplistic approach to indexes.

Last time I checked, it works as follows.

    If a where clause is inefficient then it checks to see which columns are being included.

It then checks to see if the individual columns have their own index - if they don't then it can recommend creating an index.... even if that column is actually part of a composite index, or other columns in the where clause could be added to one.

    So following the recommendation could lead to a situation where a single column is indexed unnecessarily.
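For anyone who does want to try it, a typical invocation looks like this (the SELECT is a placeholder; -t caps the advise time in minutes). Treat the output as a starting point only, for the reasons above:

```shell
# Connect to the database, then ask db2advis to evaluate one
# slow statement and suggest candidate indexes.
db2 connect to maxdb76
db2advis -d maxdb76 \
  -s "SELECT wonum, status FROM workorder WHERE siteid = 'BEDFORD' AND status = 'APPR'" \
  -t 5
```

It can also take a workload file of multiple statements via `-i`, which gives it more context than a single query.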

    How do I work more efficiently when investigating performance problems?

    ==

    Having supported Maximo systems for over 10 years I rely more on the data dictionary and Maximo's list of indexes. I have then built a tool that allows our teams to quickly see the tables / indexes / columns and understand how they are being used / indexed....

    Using the BMXAA6720W Slow SQL Warnings - SQL Logger must be WARN or below...

    ==

These warnings are critical and resolving the issues that they highlight can deliver real benefits - http://www.linkedin.com/pulse/individual-bmxaa6720w-slow-sql-entries-maximo-logs-need-mark-robbins

Cohesive recognise that these warnings are so important that our analyser provides a custom report that summarises, groups and analyses these SQL statements. Projects/customers that have used the report have achieved great performance improvements because they can very quickly see where to focus.

Note that queries executed as part of BIRT reports aren't captured by the SQL logger, so the report-related loggers need to be changed to catch these problems.

    Look at frequently executing code e.g. escalations & start centres...

    ==

Escalations are the number one thing that I see killing database performance.... A badly built/configured escalation that runs frequently can kill a database's performance because it fills up the cache and delays other pieces of SQL.

    Some escalation problems - https://www.linkedin.com/pulse/escalation-query-problems-part-1-mark-robbins/

    https://www.linkedin.com/pulse/escalation-problems-query-part-2-mark-robbins/

    Start centres are another killer depending on how they are being used - https://www.linkedin.com/pulse/using-sql-tracing-start-centre-performance-problems-mark-robbins/
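A quick first pass when auditing escalations is simply to list the active ones and how often they are scheduled. A hedged sketch (ESCALATION is a standard Maximo table; SCHEDULE holds the cron-style frequency string):

```shell
# List active escalations with their schedules so the most
# frequently-running ones can be reviewed first.
db2 "SELECT escalation, description, schedule
     FROM escalation
     WHERE active = 1
     ORDER BY escalation"
```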

    Last thought

    ==

Having problems? Help / training is available.... Cohesive give our support staff specific training on how to work with the logs and identify performance problems.

    Our automated Log Analyser finds lots of problems and provides a summary and information that can be quickly acted on.

    We also offer this training to customers so they can use it.



      ------------------------------
      Mark Robbins
      Support Lead/Technical Design Authority / IBM Champion 2017 - 2023
      Cohesive (previously Vetasi Limited)
      https://www.linkedin.com/pulse/maximo-support-advice-from-non-ibm-engineer-article-mark-robbins/
      ------------------------------