Hello Mark,
You don't need a storage pool to extend chunks. You only need to mark the chunk as extendable:
$ echo 'EXECUTE FUNCTION task("modify chunk extendable on", 4);' | dbaccess sysadmin - # set chunk#4 extendable
$ echo 'EXECUTE FUNCTION task("modify chunk extend", 4, "8GB");' | dbaccess sysadmin - # extends chunk#4 by 8GB
Those are the steps for extending chunks manually. If you want Informix to expand dbspaces by itself, you need a few more steps:
$ onmode -wf SP_AUTOEXPAND=1
$ onmode -wf SP_THRESHOLD=262144
$ onmode -wf SP_WAITTIME=300
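SP_THRESHOLD here is 262144 KB (256 MB). As I read the rules, a value from 1 to 50 is taken as a percentage of free space and a value of 1000 or more as absolute free KB; a rough Python sketch of the trigger condition (an illustration only, not Informix code):

```python
def expansion_needed(free_kb, total_kb, sp_threshold):
    """Rough sketch of when the server would try to expand a space.

    sp_threshold follows my reading of the SP_THRESHOLD rules:
      0       -> automatic expansion disabled
      1..50   -> threshold is a percentage of the total space
      >= 1000 -> threshold is an absolute amount of free KB
    """
    if sp_threshold == 0:
        return False
    if sp_threshold <= 50:
        return (free_kb / total_kb) * 100 < sp_threshold
    return free_kb < sp_threshold

# With SP_THRESHOLD=262144, expansion kicks in once less than
# 262144 KB are free in the space:
print(expansion_needed(100000, 8000000, 262144))  # True
print(expansion_needed(500000, 8000000, 262144))  # False
```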
Now ALL dbspaces and their chunks with the extendable flag set to on are expanded when they get full.
In the online.log you can see that it works:
05/11/20 18:21:25 Chunk 7 in space 'datdbs01' has been extended by 131072 kb.
05/11/20 18:38:20 Chunk 7 in space 'datdbs01' has been extended by 8000000 kb.
05/11/20 19:21:23 Chunk 7 in space 'datdbs01' has been extended by 131920 kb.
05/11/20 19:25:41 Chunk 7 in space 'datdbs01' has been extended by 8000000 kb.
This does not work with smart blobs and blob dbspaces, nor while a backup is running:
05/13/20 09:26:45 Extend chunk 8 failed. System archive in progress. Try again later.
05/13/20 09:26:45 Extend chunk 4 failed. System archive in progress. Try again later.
05/13/20 09:26:45 Extend chunk 6 failed. System archive in progress. Try again later.
05/13/20 09:26:45 Extend chunk 9 failed. System archive in progress. Try again later.
Since you don't want some dbspaces like rootdbs, tmpdbs, llogdbs, and plogdbs to expand, you need to set
these dbspaces to not expandable:
$ echo 'EXECUTE FUNCTION task("modify space sp_sizes", "rootdbs", 0);' | dbaccess sysadmin -
$ echo 'EXECUTE FUNCTION task("modify space sp_sizes", "plogdbs", 0);' | dbaccess sysadmin -
$ echo 'EXECUTE FUNCTION task("modify space sp_sizes", "llogdbs", 0);' | dbaccess sysadmin -
$ echo 'EXECUTE FUNCTION task("modify space sp_sizes", "tmpdbs", 0);' | dbaccess sysadmin -
The job that does all this is the sysadmin "mon_low_storage" task.
You can change the interval between runs like this (for example, every 10 minutes):
$ dbaccess -e sysadmin - <<EOF
UPDATE ph_task SET tk_start_time = "00:00:00",
tk_stop_time = "00:00:10",
tk_frequency = INTERVAL (10) MINUTE TO MINUTE,
tk_next_execution = ROUND(CURRENT, 'HH')::DATETIME YEAR TO SECOND
WHERE tk_name = "mon_low_storage";
EOF
This also works with raw devices, each on top of a logical volume. On Linux you need some udev rules to create the raw
device /dev/raw/rawX and the symlink to the logical volume.
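For example, such a rule could look roughly like this (file name, volume name, and raw device number are placeholders for your environment, and the path of the raw tool may differ):

```
# /etc/udev/rules.d/99-informix-raw.rules -- example only, adjust names
ACTION=="add", ENV{DM_NAME}=="vg01-rdsk1", RUN+="/usr/bin/raw /dev/raw/raw1 /dev/mapper/vg01-rdsk1"
ACTION=="add", KERNEL=="raw1", OWNER="informix", GROUP="informix", MODE="0660"
```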
Cheers,
Markus
------------------------------
Markus Holzbauer
------------------------------
Original Message:
Sent: Wed January 27, 2021 03:58 PM
From: Mark Collins
Subject: curious about extendable chunks
We're in the process of migrating to 14.10 from a version (never mind how old) that did not have storage pools. I attended sessions at IIUG about storage pool and the ability to automatically add space as a dbspace filled up. As I understood it at the time, it would do this by allocating a new chunk from the available space allocated to the storage pool.
Now that I'm actually working with 14.10 in a test system, I see that it also has the ability to extend an existing chunk, rather than just adding a new one. I've looked at the Administration Reference, and I've not seen any information on how this actually happens. From memory, it seems that a chunk was supposed to be a contiguous block of pages, and I'm trying to understand how that works with extendable chunks.
In the case of cooked files, it makes sense, so long as you use a single OS file for each chunk. The instance could suddenly extend the chunk, the OS would simply extend the file, and the instance could still reference a page as an offset of some number of bytes from the start of the file, corresponding to (page_number * page size) from the beginning of the chunk. Of course, the cooked file could be broken up into multiple pieces on the disk, with pieces of other files interwoven, preventing it from being contiguous on the disk, but from the perspective of the file, they'd appear to be contiguous.
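That arithmetic is trivial; as a plain sketch (page size assumed to be 2 KB here, one common Informix page size):

```python
PAGE_SIZE = 2048  # bytes; assumed for illustration only

def page_byte_offset(page_number):
    """Byte offset of a page from the start of its chunk:
    exactly the (page_number * page size) arithmetic above."""
    return page_number * PAGE_SIZE

print(page_byte_offset(0))     # 0
print(page_byte_offset(1000))  # 2048000
```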
Where it gets fuzzier is in the case of a raw device, or even a cooked file that contains multiple chunks, each created as some offset from the beginning of the device/file. In that case, it seems like you would no longer be able to count on a chunk being a single contiguous block of pages. So, in the past, I might have something like this on a single raw device, where the *_chk2 and *_chk3 are added as the database grows:
device: /ifmx_links/ifmx_disk1 -> /dev/vg01/rdsk1
chunk offset size
tbldbs_chk1 0 500000
idxdbs_chk1 500000 100000
tbldbs_chk2 600000 500000
tbldbs_chk3 1100000 500000
idxdbs_chk2 1600000 100000
So if you have extendable chunks, assuming that the device above was part of the storage pool, would tbldbs_chk1 simply allocate more pages, similar to what I manually did when I added tbldbs_chk2 in the past, even though those pages would not be contiguous to the existing pages in the chunk? Is there a performance impact by having that situation? Or are chunks only extendable so long as nothing has been allocated (via the '-o' parameter of onspaces) immediately past the original end point of the chunk, so that those pages do end up being contiguous?
As I said, I tried looking in the manual, but I didn't see where it addressed either the question of "how does it work" or "is there a performance penalty".
Thanks in advance.
------------------------------
Mark Collins
------------------------------
#Informix