Informix

  • 1.  Migration

    Posted Fri March 26, 2021 03:26 AM
    Dear All,



    Currently we are working on IDS 11.70 and will most probably move to 14.10 in a couple of months.

    Our data size is around 1 TB, all chunks are 2 GB with a 2K page size, and most tables have row counts in the millions, e.g. 10M or 15M.

    What would be the best way to migrate to the new version?

    If we use ONBAR it will copy the same structure, so the new server will also have the same 2 GB chunks, while dbexport/dbimport would be very slow.

    Currently I'm thinking of a 4K or 8K page size and chunks of around 8-10 GB.

            Kindly advise.


    Thanks
    Amit

    ------------------------------
    AMIT PATEL
    ------------------------------

    #Informix


  • 2.  RE: Migration

    IBM Champion
    Posted Fri March 26, 2021 07:13 AM
    Amit:

    Pagesize should ideally be determined by the rowsize, with a view to minimizing waste. What do I mean? If you have a 510-byte row, only three rows will fit on a 2K page, which wastes about 484 bytes on every page, or 161 bytes per row. If the table has 4 million rows, that wastes over half a GB of storage! On a 4K page you could fit seven rows and still waste about 490 bytes per page, but there are 50% fewer pages used, so that's only 69.7 bytes per row, wasting only 1/4 GB for the table. However, put that table onto 16K pages and 31 rows fit on a page, and the 488 wasted bytes work out to only 15.7 bytes per row, or less than 63 MB of storage.
    If you message me directly on email I will send you my script to determine the ideal pagesize for every table. Now, that said, you don't want to have eight different pagesizes, so using the output from the calculation that the script performs for you, you can select a smaller number of page sizes that may not be perfect for every table and may waste a bit more, but an acceptable amount. For example, putting our 510-byte table on 14K pages along with 20 other tables, if it is the only one that needs 16K pages, only wastes an additional 13 bytes per row, which isn't too bad.
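
    This is not Art's script, but a rough sketch of the arithmetic he describes; the 28-byte page header and 4-byte slot-table entry per row are assumptions that vary by version, so treat the output as an estimate only:

        -- estimate rows per page for candidate page sizes
        -- (assumed overheads: 28-byte page header, 4-byte slot entry per row)
        SELECT t.tabname,
               t.rowsize,
               t.nrows,
               TRUNC((2048 - 28) / (t.rowsize + 4))  AS rows_2k,
               TRUNC((4096 - 28) / (t.rowsize + 4))  AS rows_4k,
               TRUNC((16384 - 28) / (t.rowsize + 4)) AS rows_16k
          FROM systables t
         WHERE t.tabid > 99                 -- skip the system catalog
           AND t.rowsize + 4 <= 2048 - 28   -- rows that fit on a 2K page
         ORDER BY t.rowsize DESC;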

    Indexes should be placed on the widest pages you can get. All but trivially small indexes perform better on wider pages (at least 8K but ideally 16K).
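
    For example (path and size made up), a dedicated 16K-page dbspace for indexes might be created like this:

        # create a 16K-page dbspace for indexes (-k is the page size in KB)
        onspaces -c -d idxdbs -k 16 -p /ifmxdev/idxdbs_c1 -o 0 -s 20000000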

    As for chunk size, you want to balance the desire to reduce the number of chunks, which simplifies storage management, against the understanding that the engine can perform more parallel writes with more chunks. What I have been doing since extendable chunks and storage pools were introduced is to use a single chunk for each dbspace and mark it extendable, so that it grows on its own until the filesystem is full. If multiple filesystems have been provided for Informix storage, then I also configure each filesystem as a storage pool, so that if a dbspace's first chunk fills its filesystem (with the help of other extending chunks, obviously) the storage pool mechanism will automatically create a new chunk in one of the other filesystems/storage pools. Then no one ever has to create chunks manually; you only have to watch that you still have filesystem space, and manage that! The only caveat is that the engine will not extend a chunk or add a chunk during an archive, so you may need to manually extend a chunk or add one (preferably using the API function to do that from a storage pool) before starting an archive if an active dbspace is nearly full.
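
    As a rough sketch of that setup through the SQL admin API (paths, sizes, and names are made up; sizes below are in KB, and the exact task() argument lists vary by version, so check the docs):

        -- run as user informix, connected to the sysadmin database

        -- register a filesystem as a storage-pool entry:
        -- path, offset, total size (0 = whole filesystem), chunk size, priority
        EXECUTE FUNCTION task("storagepool add",
                "/ifmxdata1", "0", "0", "2000000", "1");

        -- mark an existing chunk (here chunk number 3) extendable
        EXECUTE FUNCTION task("modify chunk extendable", "3");

        -- before an archive: manually grow a nearly full dbspace by ~4 GB
        EXECUTE FUNCTION task("modify space expand", "datadbs1", "4000000");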


    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 3.  RE: Migration

    IBM Champion
    Posted Fri March 26, 2021 08:44 AM
    Hi Amit,

    I assume that you're not talking about an in place upgrade, but are planning to migrate as part of this upgrade.

    You should use "external tables" to unload/load your larger tables. They work very well.

    Load tables as RAW if possible to minimize logging, which requires that there are no PK/FK constraints defined on the table (even if they are disabled).
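
    As a sketch of both points (table and file names are made up):

        -- on the source instance: unload a big table to a flat file
        CREATE EXTERNAL TABLE orders_ext
          SAMEAS orders
          USING (DATAFILES ("DISK:/unload/orders.unl"),
                 FORMAT "DELIMITED");
        INSERT INTO orders_ext SELECT * FROM orders;

        -- on the 14.10 target (after creating the same external table
        -- there): make the table RAW, load it, then convert it back
        ALTER TABLE orders TYPE (RAW);      -- requires no PK/FK constraints
        INSERT INTO orders SELECT * FROM orders_ext;
        ALTER TABLE orders TYPE (STANDARD);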

    If many of your tables are small, you can still use dbexport with the -no-data-tables option, which lets you list tables whose data should be excluded from the export (their schema is still exported). Exclude the larger tables and migrate those using external tables. Otherwise you can use dbschema to create an empty database and then unload/load the tables using a mix of SQL, dbload (load only), external tables, and even HPL, depending on the level of effort you want to put in and your time constraints for performing the migration.
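
    For the dbexport part, the call might look like this (database and table names are made up; verify the option spelling against your 14.10 utilities):

        # export schema for everything, but skip the data of the big tables
        dbexport -ss -no-data-tables orders,order_items mydb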

    Mike
    xDB Systems, Inc.




    ------------------------------
    Mike Walker
    ------------------------------



  • 4.  RE: Migration

    IBM Champion
    Posted Fri March 26, 2021 09:45 AM

    You can use ER to move the data; it might take a while, but you don't have that much data. It will allow you to move to a new physical layout.
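
    A minimal ER setup might look roughly like this (group, database, and table names are made up, and cdr syntax varies by version):

        # define the ER servers (the groups must already exist in sqlhosts)
        cdr define server --connect=old_tcp --init g_old
        cdr define server --connect=new_tcp --sync=g_old --init g_new

        # replicate one table from the old instance to the new one
        cdr define replicate --conflict=always repl_orders \
            "mydb@g_old:informix.orders" "select * from orders" \
            "mydb@g_new:informix.orders" "select * from orders"
        cdr start replicate repl_orders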






  • 5.  RE: Migration

    Posted Mon March 29, 2021 02:35 AM

    Use Enterprise Replication (ER/CDR) to move the data to the new system.

    With ER you have the option to test the new platform and play around with it.

    When you are ready with testing, you can "check and repair" the data on the target, and when that is finished you can switch over whenever you want, within minutes.
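
    The check-and-repair step might look roughly like this (group and replicate-set names are made up):

        # compare the target against the master copy and repair differences
        cdr check replicateset --master=g_old --replset=migrate_set \
            g_new --repair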



    ------------------------------
    Gerd Kaluzinski
    ------------------------------



  • 6.  RE: Migration

    IBM Champion
    Posted Mon March 29, 2021 04:22 AM
    Hi Amit,

    Yes, migration to a new server is a good time to re-think tuning of the config, schema, and bufferpools. But the right approach depends on many things, including IT production requirements. I agree with Mike and Gerd that ER is a smart way. But an in-place / automatic release migration from 11.70 to 14.10 will work fast, too. If your new platform is binary compatible, you can work at the storage-device level to switch to the new server (re-mount the devices, or mirror with Veritas or similar storage tools).

    And finally, it's a typical consulting/professional services task for my team at leolo IT :-) We offer such services to our clients.

    Regards
    Henri


    ------------------------------
    Henri Cujass
    leolo IT, CTO
    Germany
    ------------------------------



  • 7.  RE: Migration

    IBM Champion
    Posted Mon March 29, 2021 01:37 PM

    When we did this several years ago, I used Enterprise Replication to enable and complete the migration. That meant I had to go and add primary keys to several tables that didn't have them, but once that was all set, it allowed us to run in parallel for a while before doing the final cutover. This was especially worthwhile because we were moving to a new hardware platform, a new OS and a new DB version. (At the time, from IDS 11.10/IDS 11.50 on AIX/Power7 to IDS 12.10 on Linux x86_64.) If you do this, just be sure to do all of the initial syncs in the foreground, lest you grind your system to a halt.

    On page size, I concur with what Art wrote: on our systems, we have three or at most four page sizes in play. 16 KB always for indexes (and never anything else, as I like having a dedicated buffer pool for just indexes); 14 KB for some wide-row tables; 2 KB for everything else. We have one system where I also needed to add 8 KB pages, but that was a special case.
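
    As a sketch of what that looks like in the onconfig (buffer counts are made up), there is one BUFFERPOOL line per page size in use:

        BUFFERPOOL size=2k,buffers=500000,lrus=16,lru_min_dirty=50.0,lru_max_dirty=60.0
        BUFFERPOOL size=14k,buffers=50000,lrus=8,lru_min_dirty=50.0,lru_max_dirty=60.0
        BUFFERPOOL size=16k,buffers=100000,lrus=8,lru_min_dirty=50.0,lru_max_dirty=60.0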






  • 8.  RE: Migration

    IBM Champion
    Posted Mon March 29, 2021 02:10 PM
    You can add replcheck and erkey without a table outage, i.e. a fast alter rather than a slow alter. It takes about the same time.
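
    For example (table name made up):

        -- shadow columns for ER and for cdr check/sync consistency checking
        ALTER TABLE orders ADD ERKEY;      -- adds the ifx_erkey shadow columns
        ALTER TABLE orders ADD REPLCHECK;  -- adds the ifx_replcheck shadow column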

    Cheers
    Paul

    Paul Watson
    Oninit LLC
    +1-913-387-7529
    www.oninit.com
    Oninit® is a registered trademark of Oninit LLC