Informix

  • 1.  Help with Informix 14.10 configuration

    Posted Tue February 25, 2025 01:15 PM

    Hello Informix family,

    I am migrating Informix from Solaris 11 to Red Hat 8 and, at the same time, from Informix 12.10 to 14.10.FC11.


    1. On the current Solaris system we run in a virtual Solaris zone with 56 CPU threads and we have good performance.

    The new CPU on the Red Hat physical machine has the following:

    Architecture:        x86_64

    CPU op-mode(s):      32-bit, 64-bit

    Byte Order:          Little Endian

    CPU(s):              16

    On-line CPU(s) list: 0-15

    Thread(s) per core:  2

    Core(s) per socket:  8

    Socket(s):           1

    NUMA node(s):        1

    Vendor ID:           GenuineIntel

    CPU family:          6

    Model:               207

    Model name:          INTEL(R) XEON(R) GOLD 5515+

    Stepping:            2

    CPU MHz:             3200.000

    CPU max MHz:         4100.0000

    CPU min MHz:         800.0000

    BogoMIPS:            6400.00

    Virtualization:      VT-x

    L1d cache:           48K

    L1i cache:           32K

    L2 cache:            2048K

    L3 cache:            23040K

    NUMA node0 CPU(s):   0-15

    How many CPU VPs (VPCLASS cpu) can I configure?

    According to the IFX manuals it is the number of cores minus 1, but I watched a video by Lester; he said the new CPUs have better performance, so we can configure 2-3 CPU VPs per core.

    2. Regarding memory, I have 512GB and this server is dedicated only to the database.

    I allocated 300GB for huge pages.

    What is recommended in this case - can I allocate more? If I have a big buffer pool, can this be a problem in case of a DB crash, or cause any other issue?

    3. I am changing from 2K page size to 4K.

    Can I even create the logical log, temp, and physical log dbspaces (logdbs, tempdbs, plogdbs) with 4K, and what are the benefits?

    I have a few tables with records bigger than 4K. Is it better to use 8K? What can go wrong?

    Note: when I only have these rows in the onconfig and initialize the instance, the online log shows that it creates a buffer pool for 2K. Is that correct?

    BUFFERPOOL default,

    BUFFERPOOL size=4K,

    4. I used to create the physical log in an ordinary dbspace and use onparams to move it out of the rootdbs. What is the difference between the old way and the plogdbs?

    5. I have watched many videos about Informix performance and all recommend using 16K dbspaces for indexes. Do you see any issue with that, even if the indexes are small?

    Many thanks in advance

    Regards

    /Layth Yousif



    ------------------------------
    Layth Yousif
    ------------------------------


  • 2.  RE: Help with Informix 14.10 configuration

    Posted Tue February 25, 2025 05:48 PM

    I am going to comment inline with your notes below:


    Hello Informix family,

    I am migrating Informix from Solaris 11 to Red Hat 8 and, at the same time, from Informix 12.10 to 14.10.FC11.

    Go to 14.10.FC11w1 instead; there are several bugs fixed in the .FC11w1 patch release, making it more stable than the FC11 release.

    1. On the current Solaris system we run in a virtual Solaris zone with 56 CPU threads and we have good performance.

    You currently have 7 physical Sparc cores with 8 SMT threads each. You are trading this for 8 Intel cores, each with 1 main thread and 1 hyperthread. Best practice is to disable hyperthreading for massive long-running processes like Informix, so you really only have 8 threads instead of 56. That's a huge reduction in power. Note that Sparc SMT threads run at around 80% or better of the main thread, while an Intel hyperthread runs at between 25% and 40% of the main physical thread, and when it is utilized the performance of the main thread is reduced by several percentage points (all according to Intel documentation). I would recommend having at least 24 cores if you use Intel processors. I would also note that you would likely do better with AMD processors, which offer 2 SMT threads per core, both of which are usable! If you go with AMD you might get away with 16 cores.
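
    If you do test with hyper-threading disabled, on RHEL 8 that can be done at runtime roughly like this (the sysfs SMT control should be present on RHEL 8 kernels - verify on your box; add "nosmt" to the kernel command line to make it permanent):

    cat /sys/devices/system/cpu/smt/active              # 1 = SMT/hyper-threading in use, 0 = off
    echo off > /sys/devices/system/cpu/smt/control      # run as root; parks the sibling threads until reboot
    lscpu | grep -E '^(CPU\(s\)|Thread\(s\) per core)'  # confirm the thread count dropped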

    The new CPU on the Red Hat physical machine has the following:

    (lscpu output identical to the original post above)

    How many CPU VPs (VPCLASS cpu) can I configure?

    According to the IFX manuals it is the number of cores minus 1, but I watched a video by Lester; he said the new CPUs have better performance, so we can configure 2-3 CPU VPs per core.

    Yes. Based on testing that I did with the help of several other DBAs at different client sites back in the day, we determined that a single CPU VP can take advantage of roughly 500MHz of CPU cycles. Beyond that, IO wait time and server overhead limit any further use of the available cycles. So, with a 3GHz processor, it would be possible to run between 4 and 6 CPU VPs per thread (except for Intel hyperthreads - do not count these if you have not disabled them altogether).
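
    To put rough numbers on that for the lscpu output above (my arithmetic only, not a tested recommendation): 8 physical cores at about 3.2 GHz with the ~500 MHz-per-VP rule of thumb puts you somewhere in the 32-48 CPU VP range as a starting point to benchmark, for example:

    # illustrative ONCONFIG entry only - tune up or down from your own benchmarks
    VPCLASS cpu,num=36,noage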

    2. Regarding memory, I have 512GB and this server is dedicated only to the database.

    I allocated 300GB for huge pages.

    What is recommended in this case - can I allocate more? If I have a big buffer pool, can this be a problem in case of a DB crash, or cause any other issue?

    That should be plenty of huge pages unless you think that you will need more than that for your buffer caches. I would just copy the settings for SHMVIRTSIZE and SHMADD that you are using now on Solaris! For the buffer caches, see below where you ask about that specifically.  
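
    For reference, the plumbing on RHEL 8 for that 300GB looks roughly like this (as far as I recall IFX_LARGE_PAGES is the switch on Linux, but double-check the 14.10 machine notes; 300GB of 2MB pages is 153600 pages):

    echo "vm.nr_hugepages = 153600" >> /etc/sysctl.d/99-informix.conf   # reserve 300GB of 2MB huge pages (run as root)
    sysctl -p /etc/sysctl.d/99-informix.conf
    # in the informix user's environment before oninit, so the shared memory segments are requested from huge pages:
    export IFX_LARGE_PAGES=1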

    3. I am changing from 2k page size to 4k.

    Can I even create the logical log, temp, and physical log dbspaces (logdbs, tempdbs, plogdbs) with 4K, and what are the benefits?

    No. The rootdb, logical logs, and physical logs have to live in default page size dbspaces. On all platforms except AIX and Windows that is 2K pages. You should create a 2K rootdb dbspace of about 500MB or larger by setting the ROOTSIZE parameter in the ONCONFIG file before creating the instance. You should create a separate 2K dbspace just for the logical logs and a PLOG type dbspace for the physical log (doing that will automatically move the physical log - no need to use onparams to move it now).
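
    For illustration, something along these lines (the names, chunk paths, and sizes are just placeholders; sizes are in KB):

    onspaces -c -d llogdbs -p /ifxchunks/llogdbs_c1 -o 0 -s 4096000       # default (2K) page dbspace for the logical logs
    onparams -a -d llogdbs -s 1024000                                     # add a logical log in the new dbspace
    onspaces -c -P plogspace -p /ifxchunks/plogspace_c1 -o 0 -s 2048000   # plogspace; the physical log is moved here automatically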

    I have a few tables with records bigger than 4K. Is it better to use 8K? What can go wrong?

    You should have a separate dbspace for wider tables. If a data row is wider than the dbspace's page size, then the tail end of the row will be gathered with a few other tails on a separate page requiring two IOs to fetch each row and two buffer cache pages to hold them. 

    In addition, since you have this opportunity to restructure your database, look into the waste on each 4K page for each significant table. For example, a 4K page can hold 4072 bytes of data, and each row adds 4 bytes of overhead to its nominal length. So say you have a table with a 240-byte row. Each row needs 244 bytes, which means that 16 rows fit on a page with 168 bytes wasted on every page, or 10.5 bytes per row. If that table has millions of rows, that's a lot of waste! If you put that same table in an 8K dbspace then 33 rows fit on a page with 116 bytes wasted per page, or about 3.5 bytes per row. If you put that table in a 16K dbspace instead then 67 rows fit on a page with only 12 bytes wasted per page, or less than 1 byte per row. So for my example table either 8K or 16K pages would be a major improvement over keeping it on a 4K page. I have scripts to perform the calculations for you, because the calculation is more complex if your table's row size is variable (i.e. VARCHAR, LVARCHAR, or other variable-length data types are included).
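
    As a rough sketch of the fixed-length-row case only (not my full scripts - it just uses the 24 bytes of page overhead and 4 bytes of per-row overhead from the example above):

    rowsize=240                                # nominal row length in bytes
    for pagesize in 4096 8192 16384; do
        usable=$((pagesize - 24))              # data bytes available per page
        perrow=$((rowsize + 4))                # each row carries ~4 bytes of overhead
        rows=$((usable / perrow))
        waste=$((usable - rows * perrow))
        echo "pagesize=$pagesize: $rows rows/page, $waste bytes wasted/page"
    done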

    Note: when I only have these rows in the onconfig and initialize the instance, the online log shows that it creates a buffer pool for 2K. Is that correct?

    OK, remember that the rootdb dbspace and the logical and physical log dbspaces will still reside on 2K pages, so yes, you need a 2K bufferpool setting. Since you did not include an explicit BUFFERPOOL 2K entry, the engine used the values in the "default" bufferpool setting (which is used to create any new bufferpool that is needed when you create a new dbspace with a page size for which there is no entry in the ONCONFIG file).

    BUFFERPOOL default,

    BUFFERPOOL size=4K,

    Set the "default" entries using a reasonable number of buffers so that any dynamically created dbspaces will be useful and reasonably efficient without having to bounce the instance. For the 2K buffer pool, run onstat -P on your current server and count up the number of pages for all of the entries with a partnum that is seven digits starting with a '1'. Those are your current contents of the rootdb dbspace. Use that count to size your 2K bufferpool, assuming that none of your databases and user tables will reside in 2K dbspaces.

    The remaining buffer pool  pages, adjusted for page size, will be a good starting value for your 4K buffer pool. 
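
    Something like this can do the counting (the column layout is assumed here - partnum in column 1, cached page count in column 2 - so check it against the header of your own onstat -P output first):

    onstat -P | awk '$1 ~ /^1[0-9]{6}$/ { pages += $2 }
                     END { print pages, "pages currently cached for rootdbs partitions" }'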

    4. I used to create the physical log in an ordinary dbspace and use onparams to move it out of the rootdbs. What is the difference between the old way and the plogdbs?

    Two differences between using a "normal" dbspace and a PLOG type dbspace for your physical log: 1) creating the PLOG space and moving the physical log is a single operation, so there is no need to run onparams afterwards, and 2) the PLOG dbspace is extendable by default, so if the engine needs to expand the physical log it can just make it bigger and the dbspace (assuming it is built in a filesystem file and not a raw device) will just grow.

    5. I have watched many videos about Informix performance and all recommend using 16K dbspaces for indexes. Do you see any issue with that, even if the indexes are small?

    OK, there is little or no performance advantage to using wider pages for an index on a small table, especially if the index key is itself narrow. The advantages of putting indexes on wider pages are twofold: 1) the performance improvement from pulling in more index keys and nodes with each index IO, and 2) you can have a wider maximum key size if the index resides on wider pages. A page size of 16K seems to be ideal for indexes on most larger tables. Note that I would recommend that indexes, regardless of the page size you choose for them, reside in a separate dbspace from your table data. So, ideally you should have a 2K or 4K dbspace for smaller indexes and a 16K dbspace for larger indexes, in addition to having one or more dbspaces for your tables.
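
    For example (the dbspace name, path, and sizes below are placeholders): create a 16K-page dbspace for the larger indexes and place each index there explicitly:

    onspaces -c -d idxdbs16k -k 16 -p /ifxchunks/idxdbs16k_c1 -o 0 -s 8192000   # 16K page dbspace, size in KB

    echo 'create index ix_orders_cust on orders (customer_num) in idxdbs16k;' | dbaccess mydb -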

    Many thanks in advance


    Art S. Kagel, President and Principal Consultant
    ASK Database Management


    Disclaimer: Please keep in mind that my own opinions are my own opinions and do not reflect on the IIUG, nor any other organization with which I am associated either explicitly, implicitly, or by inference.  Neither do those opinions reflect those of other individuals affiliated with any entity with which I am affiliated nor those of the entities themselves.









  • 3.  RE: Help with Informix 14.10 configuration

    Posted Mon March 10, 2025 07:48 AM

    Hi Art,

    Many thanks for your time and effort.

    I have some questions related to previous questions

    1. Regarding the memory allocation: on Solaris I have a 6GB SHMVIRTSIZE, and now that I have more memory I will increase it to 10GB. What do you think - is there any issue with having a big SHMVIRTSIZE?

    2. Regarding buffers, I had 9000000 for 2K and 1000000 for 4K. I am thinking of changing the 2K to 40000 and the 4K to 20000000. What do you think - is there any issue with having so many buffers?

    3. Regarding the page size, what is recommended for each 2K/4K page? The current block size on the filesystem is 8K. Is it better to have a 2K block size for the 2K page dbspaces and a 4K block size for the 4K page dbspaces?

    4. We use Veritas vxfs. How good or bad is it to use vxfs? Do we have other file system options?

    5. Regarding the plog, I changed it from a physdbs to a plog space when I upgraded from 11.70 to 12.10, but recently I got a message in the online log to increase the plog. I expected the engine to increase it, but it didn't, and I could not increase it manually.

    Many thanks in advance

    //Layth



    ------------------------------
    Layth Yousif
    ------------------------------



  • 4.  RE: Help with Informix 14.10 configuration

    Posted Mon March 10, 2025 09:01 AM

    Layth:

    All good questions, I will answer inline:

    1. Regarding the memory allocation, on Solaris I have 6GB SHMVIRTSIZE  and now I have more memory I will increase it to 10GB what do you think is it any issue having a big SHMVIRTSIZE?
      - There is no real issue with having "too much" virtual memory pre-allocated, only with having so little that the engine adds many additional SHMADD-sized virtual segments. So, look at your current system with onstat -g seg and see how many "V" segments there are. Add up their sizes (the engine doubles the size of added segments every 8th segment, so you can't just multiply the count by SHMADD), add that to SHMVIRTSIZE, and add one more SHMADD in case v14.10 needs more memory than v12.10 did (likely, depending on your workload). If that number is larger than the 10GB you were planning, use the bigger number; otherwise the 10GB should be fine. FWIW, you should use that calculation to increase SHMVIRTSIZE on the current system for now, until you get everything moved over. (See the sketch after this list for one way to tally the V segments.)
    2. Regarding buffers, I had 9000000 for 2K and 1000000 for 4K. I am thinking of changing the 2K to 40000 and the 4K to 20000000. What do you think - is there any issue with having so many buffers?
      I think that 40,000 is a bit low even for just having the ROOTDB dbspace and logs at 2K. I would suggest 100,000. Once you are online you can adjust up or down by monitoring the onstat -P report. Too many buffers is not an issue beyond using memory that the system could use for other purposes. Most important for buffer management on a busy system is having enough LRU queues for the number of concurrent sessions to avoid contention for the queues and buffers.
    3. Regarding the page size, what is recommended for each 2K/4K page? The current block size on the filesystem is 8K. Is it better to have a 2K block size for the 2K page dbspaces and a 4K block size for the 4K page dbspaces? Informix typically reads and writes either a single page or 8 pages at a time. Smaller storage blocking is better than large, but by large I'm talking about bigger than 128MB. The storage cache will perform better if it can prefetch many pages for reading and gather many adjacent pages into single block-size writes. So, up to a point, bigger blocks perform better. Just not the huge 512MB or even up to 4GB blocking that most SAN admins prefer.
    4. We use Veritas vxfs. How good or bad is it to use vxfs? Do we have other file system options?
      I don't know vxfs well, but I will do some research for my own curiosity and let you know. I like XFS (it only journals metadata) and EXT3 (with journaling disabled).
    5. Regarding the plog, I changed it from a physdbs to a plog space when I upgraded from 11.70 to 12.10, but recently I got a message in the online log to increase the plog. I expected the engine to increase it, but it didn't, and I could not increase it manually.
      Those warnings are ONLY warnings. If the physical log actually ever fills, it will self-expand. You cannot just expand it, but you can use onparams to create a bigger one in the same plog space. That will leave the space taken by the original physical log unused, however.
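
    A rough way to tally the V segments for item 1 above (the column positions are assumed - size in column 4, class in column 6 - so verify against the onstat -g seg header on your system):

    onstat -g seg | awk '$6 == "V" { total += $4 }
                         END { printf "V segments total: %.1f GB\n", total/1024/1024/1024 }'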


    ------------------------------
    Art S. Kagel, President and Principal Consultant
    ASK Database Management Corp.
    www.askdbmgt.com
    ------------------------------



  • 5.  RE: Help with Informix 14.10 configuration

    Posted 26 days ago

    Hi Art,
    Many thanks for your time and effort.
    The migration was successful. I would like to share some of my experiences with this migration.

    1. Migration method 
    A. For the first test I used ER (Enterprise Replication), but I found some strange behavior at the application level: it deletes a record with a serial data type and inserts it again, which produced a lot of duplicate rows on the target server, and it was impossible to limit those tables.
    B. Using your dbexport tools to run a parallel dbexport: it was a bit tricky to compile them on Solaris, and I did not have much time to verify the result.

    C. Using external tables, the parallel unload went very fast, and the load on the target database was very fast too, but after I dropped the external tables on the target system and tried to take the database down, for tables with CLOBs it told me the table was in use - very strange - but I did not have much time to test again and verify.

    D. In the end I copied parts of the cdr migrate scripts to migrate the data over the network for 99% of the tables.
    Tables with TEXT and CLOB data types took too long to migrate over the network, so I chose to unload/load them instead.
    This is the script I used; I added the oncheck -cI step to save a lot of time:
    # log start/end times and capture all output from the copy in one log file
    mylog=/path/$0.log
    CDATE=$(date "+%Y-%m-%d %H:%M:%S")
    echo "$CDATE:Start:$0" >>$mylog
    {
    # switch the table to RAW (non-logged) for the bulk insert, then back to STANDARD,
    # and only then rebuild the indexes and the primary key
    dbaccess "dbname@ifx_local_servername" <<EOF
    alter table 'owner'.table_name type(RAW);
    insert into 'owner'.table_name select * from dbname@remote_servername:'owner'.table_name;
    alter table 'owner'.table_name type(STANDARD);
    create index "owner".index_1 on "owner".table_name (xx) using btree ;
    create index "owner".index_2 on "owner".table_name (yy) using btree ;
    -- create the primary key here
    EOF
    oncheck -cI dbname:table_name -n >>$mylog
    } >> $mylog 2>&1
    CDATE=$(date "+%Y-%m-%d %H:%M:%S")
    echo "$CDATE:End:$0" >>$mylog
    Surprise: the remote server name was defined in capital letters, and this prevented me from connecting to the source database; I got a very strange error code telling me there was some network problem. I solved it by creating an ALIAS in lowercase letters, but it took time to find the reason.

    2. Regarding the filesystem: XFS was much faster than vxfs (still using Veritas).
    It took 20 seconds to create a 400MB transaction log on vxfs on a filesystem with an 8K block size.
    It took 3 seconds to create a 400MB transaction log on XFS on a filesystem with a 4K block size.
    I was not able to create a filesystem with an 8K block size - a Linux limit (I guess you would need to build Linux with 8K support).

    3. Hyper-threading
    I did a test migrating 400 tables using the following config. It took 2.5 h with hyper-threading and 3 h without hyper-threading, and the server load was more balanced with hyper-threading.
    Note: I ran 40 parallel jobs, so there were always 40 dbaccess processes running.
    VPCLASS cpu,num=40,noage
    CLEANERS 40
    BUFFERPOOL default,buffers=10000,lrus=12,lru_min_dirty=50.00,lru_max_dirty=60.50
    BUFFERPOOL size=2K,buffers=500000,lrus=31,lru_min_dirty=60.00,lru_max_dirty=70.00
    BUFFERPOOL size=4K,buffers=40000000,lrus=41,lru_min_dirty=60.00,lru_max_dirty=70.00

    I asked the customers to test the application using the following config, with and without hyper-threading; the results were no different.
    VPCLASS cpu,num=35,noage
    CLEANERS 30
    BUFFERPOOL default,buffers=10000,lrus=12,lru_min_dirty=50.00,lru_max_dirty=60.50
    BUFFERPOOL size=2K,buffers=500000,lrus=31,lru_min_dirty=1.00,lru_max_dirty=2.00
    BUFFERPOOL size=4K,buffers=20000000,lrus=31,lru_min_dirty=1.00,lru_max_dirty=2.00

    4. Regarding the number of cores: we moved from a Solaris zone with 56 threads, where 99% of the tables were on a filesystem with an 8K block size and in 2K page size dbspaces, to Red Hat on Intel with 7 physical cores with hyper-threading on, where all business tables are on a 4K block size filesystem and in 4K page size dbspaces (both use Veritas). We got 50% better performance overall, and for some functions we got 10 times better performance. Note that we already had good performance on Solaris.

    Many thanks

    Best Regards

    //Layth



    ------------------------------
    Layth Yousif
    ------------------------------



  • 6.  RE: Help with Informix 14.10 configuration

    Posted 26 days ago

    Hi,

    1. Run onstat -g spi and sort by the number of spins. If any LRU entries are near the top, increase LRUS, normally to 127.

    2. Run onstat -D and check how many chunks get significant writes; if required, increase CLEANERS to reduce checkpoint times and allow more parallel writes.

    3. For VPCLASS cpu, consider affinity - pin the CPU VPs to CPU cores (see the example after this list).

    4. For BUFFERPOOL, reduce lru_min_dirty/lru_max_dirty to reduce checkpoint times.
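
    An illustrative affinity entry for item 3 (the CPU numbers are just an example - match them to your own core layout from lscpu, and leave out the hyper-thread siblings if they are still enabled):

    # illustrative only - adjust num= and aff= to your actual core numbering
    VPCLASS cpu,num=8,aff=0-7,noage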

    Regards,

    David.



    ------------------------------
    David Williams
    Senior Database Platform Engineer
    Flutter
    London
    ------------------------------



  • 7.  RE: Help with Informix 14.10 configuration

    Posted 26 days ago

    Hi David,

    Regarding number of spins, do you mean Num Waits or Num Loops?



    ------------------------------
    Sincerely,
    Dennis
    ------------------------------



  • 8.  RE: Help with Informix 14.10 configuration

    Posted 25 days ago

    Hi


    Num loops
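
    For example, something like this surfaces the top entries (assuming Num Waits is column 1 and Num Loops is column 2 in onstat -g spi - check the header on your build):

    onstat -g spi | sort -k2,2nr | head -20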

    Regards,

    David.



    ------------------------------
    David Williams
    Senior Database Platform Engineer
    Flutter
    London
    ------------------------------