Hi Art,
thank you for your input.
Your guess about the number of varchar fields in the table is correct. These are only partly filled, which would explain
the lower number of consumed pages for a 2k+ row size (if MAX_FILL_DATA_PAGES is set to 1).
Looking at the real data lengths, there is room to reduce the column lengths in some cases in order to get the
total row size below 2k, which would be beneficial.
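For reference, this is roughly how we would measure the real usage per column; a sketch only, with hypothetical
table and column names in place of the real schema:

    -- Hedged sketch, hypothetical names: check how much of a VARCHAR column
    -- is actually used before deciding on a shorter column definition.
    SELECT MAX(LENGTH(some_varchar_col)) AS max_used,
           AVG(LENGTH(some_varchar_col)) AS avg_used,
           COUNT(*) AS nrows
      FROM central_table;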
As I said, the table has grown over a long time; it was probably created on an 11.70 instance and has been upgraded to 14.10 in the meantime.
But MAX_FILL_DATA_PAGES is currently set to 0 on the server, which makes me wonder why the actual data does not consume
at least one page per row.
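One way to sanity-check that, assuming sysmaster access (table name hypothetical, unfragmented table assumed):

    -- Hedged sketch: compare the live total/used/data page counters for the
    -- table's partition against its row count.
    SELECT p.nptotal, p.npused, p.npdata, p.nrows
      FROM sysmaster:sysptnhdr p, systables t
     WHERE t.tabname = 'central_table'   -- hypothetical name
       AND t.partnum = p.partnum;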
Does it make a difference whether the data is copied using "insert into ... select * from" in dbaccess or via single inserts from a JDBC-based app?
Does the JDBC driver have any influence? I would expect not. Otherwise we could copy the data with a JDBC app instead ...
According to the documentation, the parameter was added in 11.x, and I doubt it was ever changed on this instance.
I would assume this is mostly a result of in-place alter table activity, which produced this not-so-nice row length
while leaving the older (smaller) rows untouched.
We will give it a try on a restored server instance: a copy of the table with some columns reduced in length
(to a total row size < 2k; if I understand you right, 10% under 2k would be better, to leave room
for possible updates, which are rare), placed in an 8k or 16k page dbspace,
with MAX_FILL_DATA_PAGES set to 1.
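Roughly like this, assuming the 16k dbspace already exists and using placeholder names (the real table has 157 columns):

    -- Hedged sketch, hypothetical names: copy table in a 16k-page dbspace,
    -- with some VARCHARs shortened so the maximum row size stays ~10% under 2k.
    -- (The dbspace would be created beforehand, e.g. via onspaces with -k 16;
    --  MAX_FILL_DATA_PAGES is an onconfig parameter.)
    CREATE TABLE central_table_copy
    (
        seq_id      BIGINT NOT NULL,
        mod_ts      DATETIME YEAR TO SECOND,
        descr       VARCHAR(200)       -- shortened from a wider definition
        -- remaining columns, some with reduced lengths
    ) IN dbs16k
      EXTENT SIZE 20000 NEXT SIZE 20000;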
It should be easy to copy the data identically since the table has a sequential counter and a modify timestamp.
Very old data is typically not touched at all.
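The copy itself could then be done in ranges, sketched here with the same placeholder names:

    -- Hedged sketch: bulk-copy in ranges of the sequential counter; rows that
    -- change during the copy can be found via mod_ts and refreshed afterwards.
    INSERT INTO central_table_copy
      SELECT * FROM central_table
       WHERE seq_id BETWEEN 1 AND 20000000;

    -- repeat for further ranges, then identify recently modified rows:
    SELECT seq_id FROM central_table
     WHERE mod_ts > DATETIME(2023-01-17 00:00:00) YEAR TO SECOND;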
We will see where this is going and keep you informed.
The news that the hard limits are subject to change this year is probably good. There are a couple of those limits
we have already hit in real life ...
(number of tables in exclusive lock state, SBLOB metadata has been an issue in the past, maximum number of extents reached.)
Best,
Original Message:
Sent: 1/17/2023 11:30:00 AM
From: Art Kagel
Subject: RE: Table reaching npused limit
Marcus:
The discrepancy between your calculated number of pages and the actual number of pages is most likely NOT because of IPAs but rather because the table has significant VARCHAR and LVARCHAR columns. The reported rowsize, 2269 bytes, does not even fit on a single page and would cause each row to have 2020 bytes written to a full page all by itself and the remaining 249 bytes to be written, with seven similar tails of other rows, onto a remainder page. So, if your rows were fixed length, the 20839369 rows would require 23816421 pages. Therefore your rows MUST be variable length. The reported row size actually counts the maximum length of each variable length column, as others have pointed out.
So, why then did the copy of the table need more pages? Well, the engine will not place a variable length row onto a page if its maximum length would not fit into the free space, unless you have MAX_FILL_DATA_PAGES set to 1 to allow for variable length rows to grow after being inserted. Even if you have MAX_FILL_DATA_PAGES set, the engine will want 10% of the page to be free after inserting the new row. However, if your rows are ever updated after being inserted, then you may have had two or more smaller rows on a page that then grew such that the engine would only fit one of them onto a page in the new copy, and that's ignoring rows that are bigger than a page.
Your only choice in Workgroup Edition is to move the table onto wider pages. I would calculate the actual average and maximum length of rows in the table and use that to determine what page size dbspace will waste the least storage. For example, if you use 8K pages and all rows are maximum length (unlikely) you will be wasting about 1359 bytes per page or 453 bytes per row and storing three rows on a page. Compare that to using 16K pages, where seven maximum length rows will fit and only waste 463 bytes per page or 66 bytes per row.
Now you can't use those calcs because your rows are not all 2269 bytes long, so you have to do the work to figure out what page size will be ideal to minimize storage waste and allow your table to grow.
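The actual lengths can be estimated straight from the data; a hedged sketch with hypothetical column names, summing LENGTH() over the variable columns and adding the fixed-length portion of the row:

    -- Hedged sketch: estimate real average/maximum row usage.
    -- 369 is a placeholder for the total width of the fixed-length columns.
    SELECT AVG(369 + LENGTH(vc1) + LENGTH(vc2)) AS avg_len,
           MAX(369 + LENGTH(vc1) + LENGTH(vc2)) AS max_len
      FROM central_table;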
Art
Art S. Kagel, President and Principal Consultant
ASK Database Management
Disclaimer: Please keep in mind that my own opinions are my own opinions and do not reflect on the IIUG, nor any other organization with which I am associated either explicitly, implicitly, or by inference. Neither do those opinions reflect those of other individuals affiliated with any entity with which I am affiliated nor those of the entities themselves.
Original Message:
Sent: 1/17/2023 9:17:00 AM
From: Marcus Haarmann
Subject: Table reaching npused limit
Hi all,
I have a huge DB here which contains a central table.
We recently ran into an issue with a huge table (181,000,000 rows) reaching the 16,773,048-page limit,
which we temporarily solved by archiving parts of the table.
Now we were searching for other tables which might reach the limit soon.
We found the following table.
tabid  rowsize  ncols  nrows     tabtype  npused    fextsize  nextsize  pagesize
-----  -------  -----  --------  -------  --------  --------  --------  --------
220    2269     157    20839369  T        11858498  20000     20000     2048
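(For reference, this came from a query against systables along these lines; the npused threshold is just a guess at what counts as "soon":)

    -- Hedged sketch: look for large, plain tables approaching the page limit.
    SELECT tabid, rowsize, ncols, nrows, tabtype, npused,
           fextsize, nextsize, pagesize
      FROM systables
     WHERE tabtype = 'T'
       AND npused > 10000000
     ORDER BY npused DESC;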
The weird thing is: multiplying the number of rows by the rowsize (and dividing by the page size) gives a page count
very much higher than the npused value in systables.
My assumption would be that the (very old) table has evolved over time (by in-place alters) and old rows did not reach
the same row length, resulting in lower page usage.
Some time in the future, npused will probably reach the limit. We tried to copy the data to an identical table
with different extent sizes, which did not help at all; we reached the limit of max 16 million pages very fast.
This is probably because in the new table, all rows consume the whole (maximum) row length.
So extent sizes do not really matter (even if the error message implies there is an extent problem); the maximum for npused
was simply reached.
Since this is a Workgroup Edition (WE) instance, fragmentation is not an option.
From my point of view, the choice we have is either archiving parts of the table (deleting rows, which is not that easy
because the customer insists on being able to access everything),
or putting the table into a dbspace with bigger pages (8k), where one page would probably hold 3-4 rows
(maybe we can shorten some columns to get a row size < 2k, resulting in 4 rows per page -> npused would be ~5,000,000,
which is well below the limit).
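Shortening would look something like this; whether a given column can safely be reduced of course depends on the real data (names hypothetical):

    -- Hedged sketch: reduce an over-wide VARCHAR after verifying via LENGTH()
    -- that no stored value exceeds the new limit.
    ALTER TABLE central_table
      MODIFY (some_varchar_col VARCHAR(120));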
Can you see any other options?
Thank you for your thoughts!