Db2 for z/OS and its ecosystem

Enhanced Db2 semantic SQL functions with the new vector prefetch capability of SQL Data Insights

By Neena Cherian posted 21 days ago


Introduced in Db2 13, SQL Data Insights brought artificial intelligence (AI) functionality to the Db2 for z/OS engine. It provides the capability to run AI queries in SQL to find valuable insights hidden in your Db2 data and help you make better business decisions.

SQL DI prepares the AI functions by generating a numeric vector for each unique value it discovers in a Db2 table during model training. When the built-in AI functions are used against a large number of vectors (that is, many distinct values per column), query performance can be less than ideal because the vectors in the model table are processed row by row for the scoring calculation.
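Conceptually, the row-by-row scoring works like the following sketch. This is an illustration only: the vector values, the sample city names, and the use of cosine similarity as the distance measure are assumptions for the example, not SQL DI's documented internals.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length numeric vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical model table: one trained vector per distinct column value.
model_vectors = {
    "Boston":  [0.9, 0.1, 0.3],
    "Chicago": [0.2, 0.8, 0.5],
    "Quincy":  [0.8, 0.2, 0.4],
}

# Row-by-row scoring: each vector is fetched and scored individually,
# one request per row, which is the behavior vector prefetch improves on.
query_vec = model_vectors["Boston"]
scores = {value: cosine_similarity(query_vec, vec)
          for value, vec in model_vectors.items()}
```

With many distinct values per column, this per-row loop is exactly where the overhead accumulates.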

The latest SQL DI enhancement improves the query processing by prefetching vectors in batches for scoring calculation and saving the scores in a cache. Along with other updates, such as the code path length optimization of the built-in functions, the vector prefetch capability significantly improves the processing of SQL DI AI queries and the overall scoring performance.

There are a few factors that you should consider before you enable the enhancement. This blog explains the enhancement, its requirements, high-level considerations, and performance results.

What is vector prefetch?

To put it simply, with vector prefetch, instead of processing row by row, Db2 uploads and passes multiple numeric vectors in batches to the IBM Z AI Optimization (zAIO) services in z/OS for the scoring calculation. SQL DI uses the optimal method chosen by zAIO to calculate the vector similarities and stores the scores in an allocated area of Db2 memory, the AI cache.

The vector prefetch capability is available through a new set of maintenance (PTFs) for all SQL DI components. To enable it, Db2 adds a new subsystem parameter, MXAIDTCACH, in DSN6SPRM, where you specify the maximum memory to allocate for AI caching. The AI cache stores the model column values and scoring results, which eliminates repeated requests from Db2 to z/OS for the scoring calculation when the values are found in the cache. The AI cache is allocated above the bar, per thread. It is also used for the sparse index space and hash area to speed up vector search.
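The interplay between batched scoring and the capped per-thread cache can be sketched as follows. This is a minimal model, not Db2's implementation: the class, method names, and eviction-free "stop storing when full" behavior are assumptions chosen to mirror the description above (a full cache means spilling to a work file, shown here only as a return flag).

```python
import math

def batch_scores(query_vec, batch):
    """Score a whole batch of (value, vector) pairs in one call, standing in
    for a single batched request to the zAIO scoring service."""
    def cos(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(y * y for y in b))
        return dot / (na * nb)
    return {value: cos(query_vec, vec) for value, vec in batch}

class AICache:
    """Per-thread score cache with a hard size cap, loosely analogous to the
    memory limit that MXAIDTCACH sets for the AI cache."""
    def __init__(self, max_entries):
        self.max_entries = max_entries
        self._scores = {}

    def lookup(self, value):
        # A cache hit avoids another round trip for scoring.
        return self._scores.get(value)

    def store(self, results):
        for value, score in results.items():
            if value in self._scores:
                continue
            if len(self._scores) >= self.max_entries:
                return False   # cache full: Db2 would spill to a work file
            self._scores[value] = score
        return True

# Prefetch a batch of vectors, score them once, then serve lookups from cache.
cache = AICache(max_entries=1000)
batch = [("Boston", [0.9, 0.1]), ("Chicago", [0.2, 0.8])]
cache.store(batch_scores([0.9, 0.1], batch))
hit = cache.lookup("Boston")   # served from cache, no second scoring call
```

The key point the sketch captures: scoring happens once per batch rather than once per row, and repeated references to the same value are satisfied from memory.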


The vector prefetch capability and related updates require the following maintenance PTFs for z/OS, Db2, and the SQL DI user interface (UI). You must apply them in the following order:

  1. For IBM Z AI Optimization (zAIO) library and IBM Z AI Embedded (zADE) library in the IBM Z Deep Neural Network (zDNN) stack on z/OS®: 
    • Apply OA63950 and OA63952 for z/OS 2.5 (HZAI250).
    • Apply OA63949 and OA63951 for z/OS 2.4 (HBB77C0).
  2. For OpenBLAS on z/OS:
    • Apply PH49807 and PH50872 for both z/OS 2.5 and z/OS 2.4 (HTV77C0).
    • Apply PH50881 for z/OS 2.5 (HLE77D0).
    • Apply PH50880 for z/OS 2.4 (HLE77C0).
  3. For Db2 13 for z/OS, apply PH51892. Follow the instructions for DDL migration outlined in the ++HOLD text.
  4. For SQL Data Insights 1.1.0 UI and model training (HDBDD18), apply PH51052.

If an AI object is already enabled for AI query, make sure to retrain (disable and re-enable) the object model to take advantage of the enhancement.

Considerations before enabling AI queries with vector prefetch

There are a few factors to consider before you use vector prefetch to run AI queries:

1. For best performance with vector prefetch, specify an adequate AI cache size (MXAIDTCACH) per thread to hold all the model column values, scoring results, and the sparse index space with hash area. You can use the formula in the paper “Enhancing Db2 semantic queries with the vector prefetch capability of SQL Data Insights” to calculate the maximum memory required to cache all the scores of the vectors for a column. Insufficient AI cache allocation can degrade AI query performance even with vector prefetch enabled: if adequate space is not available, all the model column values and scoring results spill to a work file on DASD. See measurement scenarios 1 and 2 for details. Because MXAIDTCACH applies per thread, also consider the memory required to run the estimated number of concurrent AI queries.

2. With vector prefetch enabled, CPU performance for AI queries that invoke an AI function on qualified rows improves particularly when the ratio of the number of qualified rows to the total number of numeric vectors for the column is high. See measurement scenario 3 for details.
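The sizing guidance in consideration 1 lends itself to a back-of-envelope estimate. The real formula is in the referenced paper; the sketch below is purely illustrative, and the vector dimension, element width, score size, and overhead factor are assumptions for the example, not SQL DI's documented layout.

```python
def estimate_cache_bytes(distinct_values, vector_dim,
                         bytes_per_element=4, score_bytes=8, overhead=1.2):
    """Rough per-thread AI cache estimate for one column: one vector plus
    one score per distinct value, with ~20% assumed structural overhead."""
    per_value = vector_dim * bytes_per_element + score_bytes
    return int(distinct_values * per_value * overhead)

# e.g. a column with 1.3 million distinct values and 128-dimension vectors:
per_thread = estimate_cache_bytes(1_300_000, 128)
per_thread_mb = per_thread / (1024 * 1024)
# MXAIDTCACH is per thread, so N concurrent AI queries need roughly N times this.
```

Whatever the exact per-value cost, the per-thread multiplication at the end is the part that matters for planning: the cache budget scales with the expected number of concurrent AI queries.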

Performance benefits

IBM internal performance measurements demonstrated dramatic improvement in elapsed time and CPU time for queries with all three functions under the right conditions after the required maintenance was applied and vector prefetch was enabled. The following figure provides a snapshot of performance benefits from some of the measurements: 

The results came from queries that were run on a Db2 table with 8 columns and a cardinality of 1.3 million rows. On average, with vector prefetch enabled, the performance of AI_SIMILARITY improved by 75%, AI_SEMANTIC_CLUSTER by 83%, and AI_ANALOGY by 72% compared with the results before the maintenance was applied.

In conclusion, with a sufficient AI cache, the vector prefetch capability can bring you substantial performance improvement and cost reduction for AI queries. Of course, the improvement requires more memory, particularly when the data is large and the number of concurrent threads is high, so enable vector prefetch only after careful planning.