Informix


dbexport and dbimport - partitioned table

  • 1.  dbexport and dbimport - partitioned table

    Posted Mon July 24, 2023 07:58 AM

    Hello, good day! In Informix IDS 10 I need to duplicate a database on another server with the same characteristics. The database has a very large table that is partitioned into several fragments. When I run dbexport and dbimport I get an error with this table.

    Is there any way for dbexport/dbimport to keep the partitioning definition? If not, what other approach do you recommend?

    Thank you very much!



    ------------------------------
    guillermo villanueva
    ------------------------------


  • 2.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Mon July 24, 2023 08:06 AM

    Run dbexport with the "-ss" option.  This will preserve the database-specific elements, such as dbspaces, extent sizes, lock modes, and the fragmentation clause.

    The target instance must have these same dbspaces.
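
    A minimal sketch of the round trip, assuming a placeholder database name of mydb and that the target instance already has the same dbspaces (these utilities must be run against a live Informix instance):

    ```shell
    # Placeholder database name; substitute your own.
    DB=mydb

    # Remove any stale export directory left by a previous run, then
    # export schema and data; -ss preserves dbspaces, extent sizes,
    # lock modes, and fragmentation clauses in the generated .sql file.
    rm -rf "${DB}.exp"
    dbexport -ss "$DB"

    # Copy the <dbname>.exp directory to the target host, then on the
    # target instance (which must already have the same dbspaces):
    dbimport "$DB"
    ```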



    ------------------------------
    Mike Walker
    xDB Systems, Inc
    www.xdbsystems.com
    ------------------------------



  • 3.  RE: dbexport and dbimport - partitioned table

    Posted Mon July 24, 2023 08:29 AM

    As Mike said, the "-ss" option is practically obligatory. Just keep in mind to remove the old "<dbname>.exp" directory before a new execution with the "-ss" option.

    After that you can change (if necessary):

    • lock mode
    • name of dbspaces, blobspaces, sbspaces
    • partition/fragmentation strategy and so on...
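
    All of those changes are edits to the schema file before running dbimport. A hypothetical example, flipping every page-level lock clause to row-level (dbexport -ss writes the schema to <dbname>.exp/<dbname>.sql; a scratch copy is used here to illustrate):

    ```shell
    # Hypothetical path; in practice this would be mydb.exp/mydb.sql
    # as produced by "dbexport -ss mydb".
    SCHEMA=mydb.sql

    # Scratch stand-in for the exported schema, for illustration only:
    printf 'create table t (c int) lock mode page;\n' > "$SCHEMA"

    # Change every page-level lock clause to row-level:
    sed -i 's/lock mode page/lock mode row/g' "$SCHEMA"

    cat "$SCHEMA"
    ```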

    Regards,



    ------------------------------
    Boycho Velkov
    ------------------------------



  • 4.  RE: dbexport and dbimport - partitioned table

    Posted Mon July 24, 2023 08:47 AM

    Thank you!



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 5.  RE: dbexport and dbimport - partitioned table

    Posted Mon July 24, 2023 08:46 AM

    Thank you very much Mike, I will try that option and then let you know.



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 6.  RE: dbexport and dbimport - partitioned table

    Posted Wed August 02, 2023 08:15 AM

    Good morning. I was able to do the export with the -ss option for the partitioned table and got all the information without any problem, but when I run the import on the other server the process does not finish. If I look at the status with onstat -x or onstat -u I can see there are locks, with no other users accessing. Is it possible to pass some additional parameter to dbimport, or to change the lock mode of the table (currently it is in page lock mode)?



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 7.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Wed August 02, 2023 08:19 AM

    Guillermo:

    First, yes, all of your tables should use ROW-level locking, not page, unless this is a data warehouse with only slowly changing data. Second, you might want to import into a non-logged database (or modify the <database>.sql file so that this one table is created as a RAW table, then alter it back to STANDARD after the import completes). That will be faster and require fewer locks.
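
    For reference, switching an existing table to row-level locking is a one-line ALTER (the table name here is illustrative):

    ```sql
    -- Illustrative table name; apply to each table as needed.
    ALTER TABLE mytab LOCK MODE (ROW);
    ```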



    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 8.  RE: dbexport and dbimport - partitioned table

    Posted Thu August 03, 2023 07:09 AM

    Thanks for your answer. I have modified the .sql file generated by dbexport, changing the lock mode to row on all tables; now every table has that mode, including the fragmented one that gives me problems.
    Art, I don't know how to create a RAW-type table and then change it back to STANDARD; can you tell me how, please?
    When I execute dbimport it stays at the "unload file name..." step for the big table and apparently does not advance any further.
    Here are some screenshots of the Informix status.



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 9.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Thu August 03, 2023 07:19 AM

    Guillermo:

    At create time (so in the .sql file) STANDARD mode is the default. For a RAW table you would:

    CREATE RAW TABLE ...

    For an existing table:

    ALTER TABLE mytab TYPE( RAW );

    -or-

    ALTER TABLE mytab TYPE( STANDARD );



    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 10.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Thu August 03, 2023 07:23 AM

    Guillermo:

    On the "hang", what does the schema in the .sql file look like now that you exported with -ss? I'm wondering about extent sizing and whether the tca_texto blob column is "in table" or in a blobspace.



    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 11.  RE: dbexport and dbimport - partitioned table

    Posted Thu August 03, 2023 07:44 AM

    Again, thank you very much...

    create table "informix".tca 
      (
        org_codigo varchar(6),
        tcc_codigo varchar(3),
        hca_numero integer,
        dac_codigo varchar(6),
        hac_numero integer,
        hca_anio smallint,
        hac_anio smallint,
        tca_texto byte
      ) 
      fragment by expression 
        partition part_1 (hac_anio <= 11 ) in iurixdbs ,
        partition part_2 (hac_anio = 12 ) in iurixdbs ,
        partition part_3 (hac_anio = 13 ) in iurixdbs ,
        partition part_4 (hac_anio = 14 ) in iurixdbs ,
        partition part_5 (hac_anio = 15 ) in iurixdbs ,
        partition part_6 (hac_anio = 16 ) in iurixdbs ,
        partition part_7 (hac_anio = 17 ) in iurixdbs ,
        partition part_8 (hac_anio = 18 ) in iurixdbs ,
        partition part_9 (hac_anio = 19 ) in iurixdbs ,
        partition part_10 (hac_anio = 20 ) in iurixdbs ,
        partition part_11 (hac_anio = 21 ) in iurixdbs ,
        partition part_12 (hac_anio = 22 ) in iurixdbs ,
        partition part_13 (hac_anio = 23 ) in iurixdbs ,
        partition part_14 (hac_anio = 24 ) in iurixdbs ,
        partition part_15 (hac_anio = 25 ) in iurixdbs ,
        partition part_16 (hac_anio = 26 ) in iurixdbs ,
        partition part_17 (hac_anio = 27 ) in iurixdbs ,
        partition part_18 (hac_anio = 28 ) in iurixdbs ,
        partition part_19 (hac_anio >= 29 ) in iurixdbs  
      extent size 262768 next size 26276 lock mode row;

    revoke all on "informix".tca from "public";



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 12.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Thu August 03, 2023 08:19 AM

    Guillermo:

    OK, I am assuming that the iurixdbs dbspace has the default page size (2K? 4K?) and that the tca_texto column is being stored in-table, so in the same dbspace. I do see that the extent sizing is reasonably large (I feared that the table was being created with the default extent sizing of 8, 8), so that's not the issue.

    You are not using replication (HDR or RSS), so it might be better to move that column into a blobspace. I don't think that should cause the load of that table to hang, though. I do notice from the onstat -x output you sent previously that you seem to be stuck in a long checkpoint. Are the logical logs full and not backed up? What do 'onstat -F' and 'onstat -R' show?
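
    For quick reference, the standard onstat options used in this thread (run against the live instance):

    ```shell
    onstat -x   # open transactions
    onstat -u   # user threads, plus locks held and waited on
    onstat -F   # page flusher counts: foreground (FG), LRU, and chunk writes
    onstat -R   # LRU queue status
    onstat -l   # physical and logical log usage and backup status
    ```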



    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 13.  RE: dbexport and dbimport - partitioned table

    Posted Thu August 03, 2023 08:34 AM

    Hello Art, here are the results of the commands you requested:



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 14.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Thu August 03, 2023 10:01 AM

    Guillermo:

    The FGWrites number of almost 103,000 is a telling smoking gun. That value should ALWAYS be zero (OK, an occasional 1 is acceptable). You either have a serious IO performance problem or a serious server configuration problem! You should NEVER see foreground writes on your server.

    If this is a production server or otherwise you want professional help with this, reach out to me directly or through my web site (https://www.askdbmgt.com/).



    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 15.  RE: dbexport and dbimport - partitioned table

    Posted Thu August 03, 2023 10:08 AM

    Thank you very much for your comments and help; it is a test server where I am trying the dbexport.



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 16.  RE: dbexport and dbimport - partitioned table

    Posted Thu August 10, 2023 09:54 AM

    Hello! I wanted to tell you that I have been able to solve the problem. Looking carefully at the Informix error log, I found that there was a problem with the transaction log configuration. I basically just configured it and the dbimport worked; I say "basically" because I cannot claim to understand how the transaction log works in Informix IDS 10 (in my case I come from Postgres). If you have any article to read that you can recommend, I would appreciate it.
    Greetings!



    ------------------------------
    guillermo villanueva
    ------------------------------



  • 17.  RE: dbexport and dbimport - partitioned table

    IBM Champion
    Posted Thu August 10, 2023 10:05 AM

    Some background. Informix uses two sets of logging of changes to the database on disk.

    1. Physical log - saves an unmodified copy of each page modified since the last checkpoint, the first time the page is modified after that checkpoint (so only one copy of a page exists in the physical log regardless of how many items on the page have changed).
    2. Logical logs (there are many) - record all changes to the database and to the server's infrastructure (i.e., adding or dropping storage spaces, etc.).

    At checkpoint time all pages that have been modified up to the start of the checkpoint are flushed to disk, their in-memory copies are marked as clean, and the contents of the physical log are cleared.
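
    A few standard utilities for inspecting and adjusting this logging setup (the dbspace name and size below are placeholders; ontape assumes LTAPEDEV is configured):

    ```shell
    # Inspect physical and logical log usage; flags show which logs
    # are used, backed up, and current.
    onstat -l

    # Add a logical log file to a dbspace (placeholder name, size in KB):
    onparams -a -d logdbs -s 100000

    # Back up full logical logs so they can be reused:
    ontape -a
    ```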

    During normal shutdown, a checkpoint is performed which, as noted, clears the physical log.

    Recovery scenario #1: the server or its host system crashes. First the server engine verifies that all storage can be accessed successfully. Assuming that this check passes, the engine examines the physical log. If its length is zero, the engine knows that the server was shut down normally. If the physical log length is greater than zero, then it was not shut down normally and a process known as fast recovery is performed:

    1. Flush all physical log pages to disk. This returns all pages modified since the last completed checkpoint to their contents and condition as of the beginning of that checkpoint.
    2. Beginning at the timestamp marking the start of the last checkpoint, logical log records are used to roll forward changes made to the database between that checkpoint and the time of the crash.
    3. At the end of the roll-forward operation, any uncommitted transactions are rolled back.
    4. A checkpoint is performed and the server's status is raised to "online".

    Recovery scenario #2: the server or its host system crashes, and the storage-access check fails. The engine will determine a) whether it is permitted to start up with non-critical storage offline, and b) whether any critical storage is unusable. If a) is false or b) is true, the engine aborts startup with a notice. Otherwise the engine notes which storage is not reachable, marks it unusable in memory and on disk, posts a message log entry to that effect, and continues as in scenario #1. If startup aborted, the only option is to restore the server from archives (assuming there are any), which involves:

    1. Determine the reason for the startup failure.
    2. Correct any storage problems as necessary.
    3. Restore the latest full archive.
    4. Restore any incremental archives more recent than the last full archive (requested by the archive utilities).
    5. Restore logical log records created after the start of the last restored archive (automated by the archive utilities).
    6. Start the engine in forced recovery mode (automated by the archive utilities).
    7. The engine goes through fast recovery as in scenario #1.
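
    With the ontape utility, steps 3 through 6 above are typically driven by a single guided command (onbar users would use the equivalent onbar -r):

    ```shell
    # Restore the latest full (level-0) archive; ontape then prompts
    # for any incremental archives and for logical log replay.
    ontape -r
    ```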

    Other possible scenarios include:

    • recovery from fumble-fingered users/DBAs damaging data or database structure (i.e., deleting or modifying data that should not have been deleted or modified)
    • restoring the database to a previous point in time
    • restoring the database to a cold backup system
    • restoring the database on a warm backup system
    • recovering database operations by switching to a hot backup system