Honestly, my preferred approach is to test in a non-production environment with an APM (Application Performance Monitoring) tool configured against the Maximo environment, ideally one that includes a code-profiling feature. (This is not to be confused with the IBM Asset Performance Monitor product, which serves a completely different purpose.) An APM tool provides deeper insight than standard monitoring tools, which lets you identify bottlenecks in an application even when you didn't write it (as in this case). It will show how much time is being spent on database queries, and which queries are taking the longest. If it includes code profiling (tracking how long execution spends in the various Java methods), it will also show you where in the code Maximo is spending the most time.
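To illustrate what method-level code profiling surfaces, here is a minimal generic sketch (not Maximo code; the function name and timings are hypothetical): a profiler essentially records the two numbers an APM reports per method, call count and cumulative time.

```python
import time
from collections import defaultdict

# Hypothetical mini-profiler: records call count and cumulative time per
# function, the same per-method numbers an APM code profiler reports.
stats = defaultdict(lambda: [0, 0.0])  # name -> [calls, total_seconds]

def profiled(fn):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            entry = stats[fn.__name__]
            entry[0] += 1
            entry[1] += time.perf_counter() - start
    return wrapper

@profiled
def count_query():
    # Stand-in for a fast database call (~1 ms of simulated work).
    time.sleep(0.001)

for _ in range(50):
    count_query()

calls, total = stats["count_query"]
print(f"count_query: {calls} calls, {total * 1000:.0f} ms total")
```

Even this toy version shows the key point: a method that is individually fast can still dominate the total when its call count is high, which is exactly what the code-profiling view makes visible.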
As an example, we found that Maximo was executing a count query as part of the logic for asset specs. While each count query executed quickly (<4 ms), it ran hundreds of thousands of times while downloading all the asset data, so it added almost 30 minutes to the total time to retrieve all the assets. These sorts of issues are very difficult to identify without a tool like that.

If you're trying to do it without APM tooling, look at the individual network requests to see whether certain requests take a significant amount of time (i.e., is one request taking 4 seconds?) or whether each takes only a small amount of time. In a test in our DEV Maximo system with actual data, the uxfailurecode query against mxapiasset takes 1.93 seconds, for example, and that's our longest request. That request looks for all assets where failurecode is null (you can look up the query in the Object Structures application by choosing Query Definition), with a page size of 50 records. There's no real way to improve the query itself without changing how it functions (such as filtering to just the user's site), so unless code profiling revealed something, there isn't much I could do with that particular request.
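The arithmetic behind that count-query example shows why fast-but-frequent queries dominate. The call count below is a hypothetical figure chosen to match the "hundreds of thousands of executions, almost 30 minutes" observation; the original post does not give an exact count.

```python
# Each count query is fast, but the call count makes the total large.
per_call_ms = 4        # ~4 ms per count query (upper bound from the example)
calls = 450_000        # hypothetical call count, "hundreds of thousands"
total_minutes = per_call_ms * calls / 1000 / 60
print(f"{total_minutes:.0f} minutes added")  # -> 30 minutes added
```

This is the classic N+1 query pattern: no single request looks slow in the logs, so only an aggregate view (per-query call counts and cumulative time) reveals the problem.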
To help troubleshoot, you can consider lowering mxe.db.logSQLTimeLimit in a non-production environment so that anything taking longer than 150 ms is logged, instead of the default 1000 ms (which is a good value for a production environment, where a lower threshold would generate too much noise). If you see queries related to the work center, check whether they could be improved somehow. For anything under 150 ms, the odds of being able to significantly improve the query's performance are typically slim.
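As a sketch, the property can be set in the System Properties application (or in maximo.properties); the value is the threshold in milliseconds above which Maximo writes the SQL statement and its execution time to the log:

```properties
# Log any SQL statement that takes longer than 150 ms (default is 1000).
# Suggested for non-production only; the default is better for production.
mxe.db.logSQLTimeLimit=150
```

Remember to restore the default (or remove the override) before promoting the configuration toward production, since the lower threshold can generate a large volume of log output.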