When you think of Db2 for z/OS, “performance” is likely one of the first words that come to mind. But performance isn’t automatic—it’s designed, coded, and continually refined as your environment evolves. I remember when I first became a Db2 DBA ‘many moons’ ago and my mentor introduced me to query tuning. He brought up on his screen what looked to be a simple query, one that seemed like it would run without issues. He ran it, and we waited…
And waited…
And waited…(YOU GET THE PICTURE!)
Once it completed, he introduced me to what would change the trajectory of my career forever: Db2 EXPLAIN. He gave me a great explanation not only of how to read EXPLAIN output, but also of the ins and outs of improving a query’s performance.
Whether you're a seasoned DBA, an application developer, or a leader trying to reduce batch overruns and CPU costs, tuning SQL queries is one of the most impactful things you can do. In this post, I’ll walk you through 10 proven query tuning techniques that have delivered results in real-world Db2 for z/OS environments.
________________________________________
1. Start with Access Path Analysis
“The best SQL in the world can underperform with the wrong access path.”
It’s good practice to know and understand your data and how it works. Run EXPLAIN to review how your query is being executed. Are you scanning entire tables? Using nested loop joins when a merge scan join would be better? Always start here, and know your data, before changing code—you can’t fix what you don’t understand.
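For example, a minimal sketch (the table name and QUERYNO are illustrative; EXPLAIN populates the PLAN_TABLE under your authorization ID):
-- Capture the access path for a candidate query
EXPLAIN PLAN SET QUERYNO = 101 FOR
  SELECT ID, CLAIM_DATE FROM CLAIMS WHERE STATUS = 'OPEN';
-- Review it: ACCESSTYPE 'R' is a table space scan, 'I' is index access;
-- MATCHCOLS and INDEXONLY tell you how well the index is being used
SELECT QUERYNO, QBLOCKNO, PLANNO, METHOD, TNAME,
       ACCESSTYPE, ACCESSNAME, MATCHCOLS, INDEXONLY
  FROM PLAN_TABLE
 WHERE QUERYNO = 101
 ORDER BY QBLOCKNO, PLANNO;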
________________________________________
2. Use the Right Indexes
Indexes can make or break performance. Missing indexes often lead to table space scans, while too many indexes slow down insert/update/delete operations as well as utilities such as REORG and RUNSTATS.
✅ Tips:
• Use Query Monitor to identify queries causing the most I/O.
• Use SQL PA’s “What If? Analysis” to test performance gains before deploying new indexes.
• Consider index-only access: make sure all referenced columns are in the index (see the sketch below).
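As a sketch of that last tip (index and column names are illustrative), Db2 10 for z/OS and later let you add non-key columns to a unique index with INCLUDE, enabling index-only access without widening the key:
-- STATUS and CLAIM_DATE ride along in the index, so queries that touch
-- only these columns never have to read the data pages
CREATE UNIQUE INDEX XCLAIMS01
  ON CLAIMS (ID)
  INCLUDE (STATUS, CLAIM_DATE);
-- This query can now be satisfied index-only (INDEXONLY = 'Y' in EXPLAIN)
SELECT STATUS, CLAIM_DATE FROM CLAIMS WHERE ID = '0001';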
________________________________________
3. Avoid SELECT *
I once worked with a team whose nightly batch job ran 2 hours longer than it should have. The culprit? A SELECT * pulling 120+ columns across 4 joins—only 5 columns were ever used downstream.
✅ Fix: Explicitly name the columns you need. This reduces I/O, improves buffer pool usage, and allows better index matching. Naming columns also insulates applications from schema changes such as newly added columns.
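A sketch of the fix (the join and column names are illustrative):
-- Before: drags 120+ columns through every join
SELECT * FROM CLAIMS C JOIN CUSTOMERS U ON C.CUST_ID = U.CUST_ID;
-- After: only the five columns the job actually uses downstream
SELECT C.ID, C.CLAIM_DATE, C.STATUS, U.CUST_NAME, U.REGION
  FROM CLAIMS C JOIN CUSTOMERS U ON C.CUST_ID = U.CUST_ID;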
________________________________________
4. Filter Early, Filter Often
Filtering early lets every later step work with fewer rows, which means faster processing. Push WHERE clauses as close to the data source as possible. Avoid filtering in the application layer, or leaving it to subqueries, when it can be done earlier in the join or in the main WHERE clause.
✅ Example:
-- Less efficient: every row is fetched and filtered in the application
SELECT ID, CLAIM_DATE FROM CLAIMS;
-- Better: push the filter into the WHERE clause (with an index on STATUS)
SELECT ID, CLAIM_DATE FROM CLAIMS WHERE STATUS = 'OPEN';
________________________________________
5. Watch for Inefficient Joins
Use JOINs carefully—especially with large result sets. Mismatched data types, missing indexes on join keys, or predicates that can’t use an index efficiently can all sabotage your access path. Joining already well-filtered sources can also improve performance (see the sketch below).
✅ Tip: Use SQL PA to simulate access path changes when modifying joins.
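As a hedged sketch of both points (tables and columns are illustrative):
-- Pre-filter in a table expression so the join handles fewer rows;
-- join keys should also match in data type and length to stay indexable
SELECT C.ID, C.CLAIM_DATE, U.CUST_NAME
  FROM (SELECT ID, CUST_ID, CLAIM_DATE
          FROM CLAIMS
         WHERE STATUS = 'OPEN') C
  JOIN CUSTOMERS U
    ON C.CUST_ID = U.CUST_ID;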
________________________________________
6. Use Temporary Tables Wisely
Common Table Expressions (CTEs) and Global Temporary Tables can simplify logic, but be mindful: a CTE that is referenced more than once may be materialized more than once.
✅ For complex queries, consider breaking the logic into intermediate declared global temporary tables, and index them if they are reused. For example:
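A minimal sketch (names are illustrative; Db2 for z/OS lets you index a declared temporary table in the SESSION schema):
-- Materialize the intermediate result once...
DECLARE GLOBAL TEMPORARY TABLE SESSION.OPEN_CLAIMS
  (ID CHAR(10), CUST_ID CHAR(10), CLAIM_DATE DATE)
  ON COMMIT PRESERVE ROWS;
INSERT INTO SESSION.OPEN_CLAIMS
  SELECT ID, CUST_ID, CLAIM_DATE FROM CLAIMS WHERE STATUS = 'OPEN';
-- ...then index it if it will be joined or probed repeatedly
CREATE INDEX SESSION.XOC01 ON SESSION.OPEN_CLAIMS (CUST_ID);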
________________________________________
7. Leverage RUNSTATS Regularly
The optimizer is only as smart as the stats it has. Outdated catalog statistics can cause poor access path choices.
✅ Run RUNSTATS (directly or via the Db2 Admin Tool) after the following events; a sample control statement appears below:
• Large data loads
• REORGs
• DDL changes
Remember to know your data: refreshed statistics can also change access paths that were already performing well. If it isn’t broken, be careful when you fix it.
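A typical RUNSTATS utility control statement, collecting both table and index statistics, looks like this (the database and table space names are illustrative):
RUNSTATS TABLESPACE DBCLAIM.TSCLAIM
  TABLE(ALL) INDEX(ALL)
  UPDATE ALL
  REPORT YES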
________________________________________
8. Avoid Scalar Functions on Indexed Columns
This is a common hidden killer. Applying a function to an indexed column usually makes the predicate non-indexable, so the index can’t be used.
❌ WHERE YEAR(ORDER_DATE) = 2025
✅ WHERE ORDER_DATE BETWEEN '2025-01-01' AND '2025-12-31'
________________________________________
9. Use FETCH FIRST for Row-Limiting
When only a subset of rows is needed (e.g., for dashboards, top 10 lists, pagination), always use FETCH FIRST n ROWS ONLY.
✅ Bonus: Db2 can use optimized fetch access paths when this is specified.
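For example (illustrative table; ORDER BY plus FETCH FIRST gives a stable top-n):
-- Ten most recent open claims; the optimizer knows up front
-- that at most 10 rows will be returned
SELECT ID, CLAIM_DATE
  FROM CLAIMS
 WHERE STATUS = 'OPEN'
 ORDER BY CLAIM_DATE DESC
 FETCH FIRST 10 ROWS ONLY;
Relatedly, OPTIMIZE FOR n ROWS tells the optimizer you intend to fetch only a few rows even when more qualify.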
________________________________________
10. Test, Don’t Guess
Tuning without measurement is guesswork. Use tools like:
• SQL Performance Analyzer (SQL PA) to benchmark changes
• Query Monitor to track query-level CPU and elapsed time
• Db2 Admin Tool to analyze statement history and catalog statistics
✅ I always run a “before-and-after” analysis in SQL PA to validate that a tuning change truly delivers a net gain.
________________________________________
🧠 Final Thoughts
SQL tuning in Db2 for z/OS is both an art and a science. The key is discipline: monitor regularly, analyze before acting, and validate after each change. Change one thing at a time so you can see what helps and what doesn’t; you may end up changing several things, but measuring each change in isolation tells you far more than measuring them all at once.
Whether you're chasing milliseconds or trying to reduce monthly CPU consumption, these techniques can help you get there. Whatever your role, working hand in hand with your counterparts (DBAs, application developers, architects, and so on) to make sure SQL and code changes truly help is also part of successful tuning.
Let’s keep Db2 fast, efficient, and legendary!