Amit:
Page size should ideally be determined by row size, with a view to minimizing waste. What do I mean? If you have a 510-byte row, only three rows will fit on a 2K page, which wastes about 484 bytes on every page, or 161 bytes per row. If the table has 4 million rows, that wastes over half a GB of storage! On a 4K page you can fit seven rows and still waste about 490 bytes per page, but since 50% fewer pages are used that comes to only 69.7 bytes per row, wasting only 1/4 GB for the table. Put that table onto 16K pages, however, and 31 rows fit on a page, so the 488 wasted bytes work out to only 15.7 bytes per row, or less than 63 MB of storage.
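The arithmetic above can be sketched with a simple waste model. The 28-byte page header and 4-byte per-row slot entry below are rough assumptions, not exact Informix internals, so the wasted-byte figures come out slightly different from the ones quoted above, but the rows-per-page counts (3, 7, 31) match:

```python
# Rough page-waste model for a fixed-width row.
# Assumptions (not exact Informix internals): ~28-byte page header
# and a 4-byte slot-table entry per row.
PAGE_HEADER = 28
SLOT_ENTRY = 4

def waste(pagesize, rowsize, table_rows):
    usable = pagesize - PAGE_HEADER
    rows_per_page = usable // (rowsize + SLOT_ENTRY)
    waste_per_page = usable - rows_per_page * (rowsize + SLOT_ENTRY)
    waste_per_row = waste_per_page / rows_per_page
    total_mb = waste_per_row * table_rows / 2**20
    return rows_per_page, waste_per_page, waste_per_row, total_mb

for pg in (2048, 4096, 16384):
    rpp, wpp, wpr, mb = waste(pg, 510, 4_000_000)
    print(f"{pg:>5}-byte page: {rpp} rows/page, {wpp} bytes wasted/page, "
          f"{wpr:.1f} bytes wasted/row, ~{mb:.0f} MB for the table")
```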
If you email me directly I will send you my script to determine the ideal page size for every table. Now, that said, you don't want to have eight different page sizes, so using the output from the calculation the script performs for you, you can select a smaller number of page sizes that may not be perfect for every table and may waste a bit more, but an acceptable amount. For example, putting our 510-byte table on 14K pages along with 20 other tables, if it is the only one that needs 16K pages, only wastes an additional 13 bytes per row, which isn't too bad.
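The consolidation step can be sketched the same way: compute per-row waste for each candidate page size and see how much a non-ideal choice actually costs. This uses the same assumed overhead constants as any rough model would (~28-byte header, 4-byte slot per row), so the byte counts are approximate, not the script's exact output:

```python
# Compare per-row waste for a 510-byte row across the eight
# Informix page sizes (2K..16K), using rough overhead assumptions.
def waste_per_row(pagesize, rowsize, header=28, slot=4):
    usable = pagesize - header
    rows = usable // (rowsize + slot)
    return (usable - rows * (rowsize + slot)) / rows

candidates = [k * 2048 for k in range(1, 9)]   # 2K, 4K, ..., 16K
per_row = {pg: waste_per_row(pg, 510) for pg in candidates}
best = min(per_row, key=per_row.get)

for pg, w in per_row.items():
    print(f"{pg:>5}-byte page: {w:5.1f} wasted bytes/row "
          f"(+{w - per_row[best]:.1f} vs the ideal {best}-byte page)")
```

With numbers like these in hand you can group tables onto a handful of shared page sizes and know exactly how much extra waste each compromise costs.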
Indexes should be placed on the widest pages you can get. All but trivially small indexes perform better on wider pages (at least 8K but ideally 16K).
As for chunk size, you want to balance the desire to reduce the number of chunks, which simplifies storage management, against the fact that the engine can perform more parallel writes across more chunks. What I have been doing since extendable chunks and storage pools were introduced is to use a single chunk for each dbspace and mark it extendable, so that it grows on its own until the filesystem is full. If multiple filesystems have been provided for Informix storage, I also configure each filesystem as a storage pool entry, so that if a dbspace's first chunk fills its filesystem (with the help of the other extending chunks, obviously), the storage pool mechanism will automatically create a new chunk in one of the other filesystems. Then no one ever has to create chunks manually; you only have to watch that you still have filesystem space and manage that! The only caveat is that the engine will not extend a chunk or add a chunk during an archive, so you may need to manually extend a chunk or add one (preferably using the API function to do that from a storage pool) before starting an archive if an active dbspace is nearly full.
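The pieces above map onto the SQL administration API's task() function in the sysadmin database. A sketch with placeholder paths, names, and sizes; check the argument order and size units against the documentation for your version before relying on it:

```sql
-- Run while connected to the sysadmin database. All names/sizes below
-- are placeholders for illustration.

-- Register a filesystem as a storage pool entry
-- (arguments: path, begin offset, total size, chunk size, priority;
--  sizes default to KB):
EXECUTE FUNCTION task("storagepool add", "/ifmxdata/fs2",
                      "0", "0", "2000000", "2");

-- Mark an existing chunk (here chunk number 12) as extendable
-- so it grows on its own:
EXECUTE FUNCTION task("modify chunk extendable", "12");

-- Manually expand a nearly full dbspace before starting an archive;
-- the engine extends a chunk or adds one from the storage pool:
EXECUTE FUNCTION task("modify space expand", "datadbs1", "10000000");
```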
------------------------------
Art S. Kagel, President and Principal Consultant
ASK Database Management Corp.
www.askdbmgt.com
------------------------------
Original Message:
Sent: Fri March 26, 2021 03:26 AM
From: AMIT PATEL
Subject: Migration
Dear All,
We are currently working on IDS 11.70 and will most probably move to 14.10 in a couple of months.
Our data size is around 1TB, all chunks are 2GB with a 2K page size, and most of the tables have rows in the millions, e.g. 10M, 15M, etc.
What would be the best way to migrate to new version?
If we use ONBAR it will copy the same structure, so the new server will also have the same 2GB chunks, while DBEXPORT/DBIMPORT will be very, very slow.
Currently I'm thinking of a 4K or 8K page size and chunk sizes of around 8-10 GB.
Kindly advise.
Thanks
Amit
------------------------------
AMIT PATEL
------------------------------
#Informix