IBM Destination Z - Group home

Performance Strategy

By Destination Z posted Mon December 23, 2019 03:33 PM


Previously I outlined the themes and highlights of DB2 12 for z/OS, including some high-level performance expectations. Let’s move on to look at a developing strategy to deliver performance and scalability improvements by the exploitation of large real memory, particularly the use of in-memory data structures optimized for performance.

The two main items up for discussion are in-memory contiguous buffer pools and in-memory index optimization. These two enhancements are significant not just because of the performance improvements, but also because Gartner now classifies DB2 12 as an in-memory database. We'll wrap up with a summary of memory management usability and scalability enhancements.

In prior releases of DB2, some of the performance improvements were achievable without necessarily making use of such large real memory as is available for DB2’s use. Many of the DB2 12 enhancements, however, require clients to provision more real memory for DB2 to exploit before they can realize significant performance gains.

As a general comment on real memory provisioning, it’s worth remembering that customers should plan to avoid all paging and make sure that they have sufficient free real memory to run safely.

In-Memory Contiguous Buffer Pools

In-memory contiguous buffer pools is the latest step in a journey started in DB2 10 with the introduction of the PGSTEAL(NONE) buffer pool attribute, which indicates that page stealing in the buffer pool isn’t expected. This was designed for objects such as table spaces and indexes that can fit entirely into a buffer pool. It may seem obvious, but the larger the object, the larger the buffer pool, and the larger the buffer pool, the more real memory is needed.

With a PGSTEAL(NONE) buffer pool, the pages for each object are prefetched into the buffer pool when it is first accessed, avoiding the cost of subsequent I/Os, saving CPU and providing elapsed time benefits. In both DB2 10 and 11, once the object has been read into the buffer pool, prefetch is disabled for all subsequent I/Os, which saves CPU, but hash chains and buffer chains are still maintained. The CPU cost of this is visible for GETPAGE-intensive workloads exploiting large buffer pools. And because prefetch is disabled, if the buffer pool is too small and page stealing occurs, two things happen:

1. All read I/O is synchronous
2. Page stealing uses the first in, first out (FIFO) algorithm

The in-memory contiguous buffer pool introduced in DB2 12 improves performance and reduces CPU consumption by providing direct page access in memory—DB2 development has measured up to an 8 percent CPU reduction for online transaction processing (OLTP) workloads using this feature. Direct page access greatly reduces the CPU overhead of Get Page and Release Page operations, and is achieved by laying out objects contiguously in page order in the buffer pool, and then accessing the page directly in memory.

To make this feature more resilient, DB2 12 introduces an overflow area that’s automatically managed by DB2. An overflow area is reserved by DB2 from the buffer pool VPSIZE specification, and represents 10 percent of the buffer pool size, up to a maximum of 6400 buffers. It’s only used by DB2 if the objects assigned to the buffer pool don’t fit into the “main” area of the buffer pool—90 percent of VPSIZE. The overflow area is allocated when the buffer pool is allocated, but is only backed by real storage when it’s actually used for overflow pages. Any pages in the overflow area are automatically managed by DB2 on a least recently used (LRU) basis, not a FIFO basis. DB2 doesn’t allow page stealing within the main buffer pool area, but it’s possible in the overflow area.

When you change the PGSTEAL attribute of a buffer pool to PGSTEAL(NONE), this becomes a pending state and takes effect the next time the buffer pool is allocated. When you do this, you need to take the overflow area into account—a good general recommendation is to add 6400 to VPSIZE. Note that a PGSTEAL(NONE) buffer pool ignores the AUTOSIZE(YES) attribute.
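To make the overflow arithmetic concrete, here is an illustrative command (the pool name BP8 and the sizes are hypothetical, not from the article): for a pool that needs roughly 50,000 buffers to hold its objects in the main area, add 6,400 buffers to allow for the overflow area:

```
-ALTER BUFFERPOOL(BP8) VPSIZE(56400) PGSTEAL(NONE)
```

The new page steal attribute remains pending until the buffer pool is next allocated.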

The output for -DISPLAY BUFFERPOOL now tells you the current and pending page steal algorithm, tells you the size of the overflow area, and provides statistics for the overflow area, including the GETPAGE count and the synchronous read I/O count. Similar information is available via the statistics trace.
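For example (the pool name is hypothetical), the detailed report that includes the current and pending page steal attributes and the overflow area statistics comes from:

```
-DISPLAY BUFFERPOOL(BP8) DETAIL
```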

Because PGSTEAL(NONE) is designed for performance, it’s important to know when the overflow area is being used, as this will result in degraded performance. Therefore, DB2 12 introduces a new message, DSNB605I, to tell you when DB2 reads pages into the overflow area.

In-Memory Index Optimization

As customer tables grow larger and larger, index sizes inevitably grow too. When an index grows, the number of index levels can also grow, causing an increase in the cost of random index access. This is because each additional index level means an extra GETPAGE.

Prior to DB2 12, DB2 made use of index lookaside, to try to avoid the full cost of index probing. However, this typically only benefitted skip sequential access via the index. Random SQL only benefitted from index lookaside occasionally, resulting in additional GETPAGE requests.

To make index lookups faster and cheaper, DB2 12 introduces the Index Fast Traverse Block (FTB). This is an in-memory structure that only contains index non-leaf pages. It is optimized to provide fast lookups for random index access. FTBs are targeted at—and restricted to—unique indexes with a key size of 64 bytes or less. Unique indexes with include columns are also supported, provided the total length is 64 bytes or less. FTBs are stored in the FTB area; this requires additional real memory and resides outside the buffer pool, in a memory area managed by DB2.

As with all new features, you will be keen to know how you can exploit and manage FTBs. FTB usage is controlled at a systemwide level by a new system parameter, INDEX_MEMORY_CONTROL. This allows you to turn off fast index traversal altogether by specifying DISABLE. Alternatively, you can specify the size of the FTB area explicitly, or you can accept the default setting of AUTO and let DB2 calculate the size for you. If you accept the default, DB2 will set the size of the FTB area to 10 MB or 20 percent of the total allocated buffer pool storage, whichever is larger, up to a maximum of 200 GB.

There is also a new catalog table, SYSIBM.SYSINDEXCONTROL, which allows you to control fast index traversal at the index level. You can choose to disable FTB usage altogether for an index, or enable it only during specific time windows. To make this feature easier to use, you can use wildcards to specify which indexes you want to control. This works in a very similar way to the SYSIBM.SYSINDEXCLEANUP table, introduced in DB2 11 to control the cleanup of pseudo-deleted index entries.
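As a sketch of index-level control (the subsystem, schema and index names are hypothetical, and you should verify the exact column definitions against the DB2 12 catalog documentation before relying on them), a row along these lines would disable fast index traversal for a single index:

```
INSERT INTO SYSIBM.SYSINDEXCONTROL
       (SSID, PARTITION, IXCREATOR, IXNAME, TYPE, ACTION)
VALUES ('DB2A', NULL, 'PRODSCHM', 'ORDER_IX', 'F', 'D');
```

Leaving the time-window columns unset makes the row apply at all times.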

A new index memory optimization daemon, enabled for z Systems Integrated Information Processor (zIIP) offload, regularly monitors qualifying indexes to identify the best FTB candidates. Indexes that have frequent leaf page splits are usually poor candidates, but indexes that are mostly used for read access are typically good candidates. When DB2 detects that the processing profile for an index has changed, it can delete the FTB for a poor candidate or create an FTB for a good candidate. This can happen, for example, when an update-intensive batch schedule starts and stops.

SELECT, INSERT, UPDATE and DELETE can all benefit from index structures cached in the FTB area because DB2 can do very fast traversals through the non-leaf page index levels without having to do page-oriented access. The savings can be substantial depending on the number of index levels.

You can monitor FTB area usage using the new -DISPLAY STATS command, which shows which indexes are using the FTB area. For a detailed trace of FTB activity, you can use two new IFCIDs, 389 and 477.
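For example, to list the indexes currently using the FTB area:

```
-DISPLAY STATS(INDEXMEMORYUSAGE)
```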

Figure 1: CPU Improvement Using Fast Index Traversal

Using measurements taken by DB2 development, Figure 1 shows that simple random index lookup is faster and cheaper in DB2 12. It also shows that the greater the number of index levels, the greater the expected CPU savings, varying from 6 percent for a two-level index to 23 percent for a five-level index. Index-only access will show a higher percentage CPU saving than index and data access.

You can only exploit the FTB area after new function activation. If you're not familiar with new function activation, just think of it as the DB2 12 equivalent of new-function mode. I'll cover the topic of migration and new function activation in a later article.

Other Memory-Related DB2 Enhancements

We’ll wrap up with a look at other DB2 12 memory-related enhancements.

First, DB2 12 architecture supports very large buffer pools—up to a theoretical limit of 16 TB. Why theoretical? Because the current z/OS limit is 4 TB of real memory per LPAR. And of course, you should always plan to have your buffer pools fully backed by real memory, with sufficient free real memory to run safely.

Next, DB2 12 adds a new buffer pool advisory mode, which has been retrofitted to DB2 11. There are two new buffer pool attributes: SPSIZE and SPSEQT. These indicate the number of additional simulated buffers and the simulated sequential steal threshold, respectively. DB2 monitors reads into the buffer pool and keeps a record of which I/Os you'd avoid if the buffer pool size were set to VPSIZE + SPSIZE. New instrumentation in the DB2 statistics trace and in the output of the -DISPLAY BUFFERPOOL command indicates the synchronous I/O savings you'd get by increasing the size of your local buffer pool. There is a small CPU overhead associated with using buffer pool simulation—about 1 to 2 percent per buffer pool—and about a 1 percent real memory overhead, assuming a 4 KB page size. Early experiences show that this is a significant improvement on other buffer pool simulation technologies.
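As an illustrative sketch (the pool name and values are hypothetical): to simulate growing a pool by 100,000 buffers with a simulated sequential steal threshold of 80 percent, you would issue something like:

```
-ALTER BUFFERPOOL(BP3) SPSIZE(100000) SPSEQT(80)
```

Setting SPSIZE back to 0 turns the simulation off and releases the associated overhead.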

Finally, there have been improvements in storage (memory) pool management to simplify and remove scalability inhibitors for the environmental descriptor manager (EDM) pools, and for LOB and XML storage at the application and system levels. For the EDM skeleton and DBD pools and the dynamic statement cache, "hard" limits, in terms of maximum pool size, have been removed. The EDM ZPARMs (EDM_SKELETON_POOL, EDMDBDC and EDMSTMTC) still indicate the upper limit for each pool, but there will no longer be EDM pool full failures if the storage needed exceeds the limit. Instead, once a pool reaches its upper limit, DB2 will free unused storage using an LRU algorithm. These are, in fact, benefits gained from the overall goal of reducing latch contention, thereby reducing CPU consumption as well as system "pinch" points that can become significant when the system is under stress.

The XML and LOB memory-related ZPARMs (LOBVALA, LOBVALS, XMLVALA and XMLVALS) are removed, and DB2 will use 2 GB for each agent (xxxVALA) and 50 GB at the system level (xxxVALS). This is because many customers found that they couldn't work out accurate settings for these ZPARMs and had to rely on guesswork.

The next article will look at a vital topic for many customers: DB2 12 enhancements for insert performance.

Learn More
Join the two-part webcast series DB2 12 Technical Overview PART 1 and DB2 12 Technical Overview PART 2.

Gareth Z. Jones has worked in IT since 1985. Until 2000, he was an IBM client, with experience as a systems programmer and DBA. He now works in DB2 for z/OS development as a member of the SWAT Team, which is led by John Campbell. He has worked with many customers around the world to help them be successful in their use of DB2. He has written several technical papers and presented at many conferences and group meetings. He can be contacted via email at