Informix

  • 1.  Informix NUMA shared memory access

    Posted Thu April 16, 2020 07:12 AM
    Is anyone running Informix on VMware, and also on HP SuperDome Flex? Or on any NUMA architecture at all? I'm interested in shared memory access patterns across the NUMA nodes. We are running an instance with approximately 400 GB of shared memory on an HP SuperDome Flex with Intel Gold CPUs (12 cores each).

    We have tried different configurations, but we are actually seeing much better performance with 24 cores than with 36 or 48, because with more cores a larger share of memory accesses goes to NUMA nodes on other sockets, and the difference is dramatic (which is perhaps not a surprise once you understand what is going on under the hood).

    Our hardware supplier claims that another big database vendor starting with O has zero problems scaling over many NUMA nodes without performance degradation. I take those statements with a grain of salt, but it could be that there are some tricks to be played if the database server is NUMA-aware and knows the NUMA distance to the different memory regions. It certainly looks like Informix is blind to those differences, and it is more or less random where the memory is allocated.

    I mentioned VMware, and that can play a role here, but we have more or less eliminated VMware from the equation with careful configuration. Not to say it is completely out of the picture.

    We are now pondering whether we should switch to Intel Platinum 24-core CPUs, or maybe 28-core. We would be sacrificing some clock speed, but it looks like we would get faster memory access for more cores, which will probably outweigh any clock speed difference.

    CPU affinity could also play a role. We are running with affinity, but there is a chance that the OS scheduler knows about NUMA distances and could use that to decide which core to use for which task. It would probably be best, though, if the Informix thread scheduler (for lack of better terminology) knew about these things itself.

    It would be great to get in contact with experts in this area. I'm not a hardware expert myself, and have had to relearn a lot of stuff I didn't think I needed to remember anymore. Even better if there is someone inside HCL (or IBM) with deep knowledge of these matters.

    ------------------------------
    Øyvind Gjerstad
    Developer/Architect
    PostNord AS
    ------------------------------

    #Informix


  • 2.  RE: Informix NUMA shared memory access

    Posted Thu April 16, 2020 07:16 AM

    That's a question for Vladimir K.

     

    I had similar issues a few years ago on IBM PowerLinux and got R&D involved, but I never got the final word on it.

     

    Eric Vercelletto
    Data Management Architect and Owner / Begooden IT Consulting
    Board of Directors, International Informix Users group
    IBM Champion 2013,2014,2015,2016,2017,2018,2019,2020

    Tel:     +33(0) 298 51 3210
    Mob : +33(0)626 52 50 68
    skype: begooden-it
    Google Hangout: eric.vercelletto@begooden-it.com
    Email: eric.vercelletto@begooden-it.com
    www: http://www.vercelletto.com
    www: https://kandooerp.org







  • 3.  RE: Informix NUMA shared memory access

    Posted Thu April 16, 2020 09:42 AM

    Not entirely sure how virtualization handles affinity groups. There may be a way to allocate CPU + memory from the same node (or a limited number of nodes) for a VM, and that would be the desirable configuration.
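    For VMware specifically, a rough sketch (the exact advanced settings and values should be verified against your ESXi version; node 0 and 12 cores per node are just assumptions here) would be to pin the VM to one host NUMA node in the .vmx file:

        # Hypothetical .vmx fragment: keep the VM's vCPUs and memory on host NUMA node 0
        numa.nodeAffinity = "0"
        # Size the virtual NUMA nodes to match the host sockets (12 cores each assumed)
        numa.vcpu.maxPerVirtualNode = "12"

    With a layout like that, "numactl -H" inside the guest should mirror the host topology, so the bare-metal steps below can also be applied inside the VM.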

    Here is a Linux example of how to run Informix using resources from a specific node on "bare metal".

    1. Find out the CPU numbers and memory size for a specific node using "numactl":

         numactl -H

    ....

    node 0 cpus: 0 2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46
    node 0 size: 195269 MB

    .....

    2. Configure Informix so that the total memory footprint (buffer pool + virtual segments) does not exceed the amount of memory on the node, and the number of CPU VPs matches the number of CPUs reported for the node (see the onconfig sketch below).
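    A minimal onconfig sketch for the node shown above (24 CPUs, ~190 GB); all values are illustrative only and have to be sized for your own workload:

        # Illustrative onconfig fragment, sized to stay inside one ~190 GB NUMA node
        VPCLASS cpu,num=24,noage        # one CPU VP per core on the node
        BUFFERPOOL size=2k,buffers=60000000,lrus=16,lru_min_dirty=70,lru_max_dirty=80  # ~120 GB of 2 KB buffers
        SHMVIRTSIZE 50000000            # initial virtual segment, in KB (~50 GB)
        SHMADD      1000000             # size of additional virtual segments, in KB
        SHMTOTAL    180000000           # hard cap in KB (~180 GB) so the instance never outgrows the node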

    3. (Re)start Informix using the numactl command; in this case it will be:

         numactl --cpunodebind=0 --membind=0 $INFORMIXDIR/bin/oninit -v

    4. One would think that is all that's needed, but Informix resets the CPU mask at startup, so it has to be set back to the selected node's CPUs, which can be done on Linux with some scripting around "taskset":

        proc="0,2,4,6,8,10,12,14,16,18,20,22,24,26,28,30,32,34,36,38,40,42,44,46"

        onstat -g glo | sed -n '/Thread/,/tot/p' | grep -v usercpu | awk 'NF>7 {print "taskset -p -c",c,$2 }' c="$proc" | sh -x

    The CPU mask can be verified using taskset, for the described case:

    taskset -p 262362
    pid 262362's current affinity mask: 555555555555

    (Hex 555555555555 is 0101... in binary, i.e. the even-numbered CPUs 0, 2, 4, ..., 46 that belong to node 0.)

    The only remaining thing is to make sure that significant (application) activity is not running on CPUs used by Informix.
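    One simple way to do that (a sketch, assuming the application runs on the same host and should be kept on node 1, whose CPUs are assumed to be the odd-numbered ones here) is to start it under taskset as well:

        # Keep application processes on node 1's CPUs, away from the Informix VPs on node 0
        taskset -c 1,3,5,7,9,11,13,15,17,19,21,23,25,27,29,31,33,35,37,39,41,43,45,47 ./start_application.sh

    "./start_application.sh" is just a placeholder for however the application is launched; cgroups or systemd's CPUAffinity= setting can achieve the same isolation.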
     

    This method can be extrapolated to running Informix on 2 or more nodes, to see what works best for your specific workload.
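    For example (just extending the command above; the node numbers are illustrative), binding the instance to the first two nodes would look like:

         numactl --cpunodebind=0,1 --membind=0,1 $INFORMIXDIR/bin/oninit -v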

    Caveats: if Informix tries to allocate more memory than is available on the node(s) selected by numactl, Linux will not allow it.
    Set SHMTOTAL appropriately, or make sure that additional segment allocations don't happen.


    I guess I have to comment on the "other big database vendor has zero problems scaling" claim. That's not entirely true. It is a well-documented fact that, for high-volume transactional workloads, efficiency (performance per computing unit, i.e. per core) is best when running on resources with maximum affinity. However, some workloads, such as those suitable for the Informix Warehouse Accelerator ("IWA"), will scale much better across nodes. I suspect that SAP HANA type workloads will also scale reasonably across nodes.

    Another thing to pay attention to is the presence of other bottlenecks, which may exist due to application design, storage configuration and layout, or other factors. My favorite example from the early days of the internet is placing a hit counter on a web page, which decreased the ability to serve that page by several orders of magnitude. Having a small table with a few rows that is hit by every session can have approximately the same effect. Likewise, if I/O is the bottleneck, throwing more CPU resources at Informix will not improve performance. Partitioning the data with the right fragmentation strategy may also need some thought.

    For current processor capabilities, the "working set" (frequently accessed data) MUST be cached in memory; I/O simply cannot keep up. All-flash storage helps to a degree if the hot data cannot fit in memory, but it is best to look at the data and perhaps normalize it to reduce the working set size.
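    A rough way to check whether the working set fits in the buffer pool (an indicator only, not a guarantee) is the read cache percentage in the instance profile:

        # look at the read %cached figure near the top of the output; values well
        # below ~95-99% usually mean the hot data does not fit in the buffer pool
        onstat -p | head -6

    "head -6" is just to trim the output; the interesting counters are dskreads/bufreads and their %cached value.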

    "should we switch to Intel Platinum 24-core CPUs or maybe 28-core ..." - I would expect that Informix running on one 28 core processor (with memory from the same node) will run better than on 2 x 14 core processors, but since a lot depends on the workload - that remains to be seen.  

    Hope this helps a bit.



    ------------------------------
    Vladimir Kolobrodov
    ------------------------------



  • 4.  RE: Informix NUMA shared memory access

    Posted Wed January 05, 2022 10:47 PM
    In an actual stress test, I found that the memory of a single NUMA node is too small.
    If IDS uses the memory and CPUs of two NUMA nodes, the BenchmarkSQL result is much higher.

    ------------------------------
    ZhiWei Cui
    GBASE
    ------------------------------



  • 5.  RE: Informix NUMA shared memory access

    Posted Thu January 06, 2022 12:02 AM

    > memory of a single NUMA node is too small

    Well, as with everything in performance you need to figure what is your primary bottleneck first.

    When your "working set" does not fit in memory (Informix buffer pool) then, most likely, your workload will be I/O bound and the effect of NUMA will be secondary to that.

    So it would make perfect sense that when you add (allow Informix to use) more memory, performance improves.

    Same goes for CPU. If you only have 2 - 4 cores per node, then using ALL available processing resources without regard for NUMA may work better for performance.

    In addition, NUMA affects different types of workloads differently.

    Practically, if, after configuring Informix to use more CPUs, you see higher CPU utilization but no proportional increase in performance, then you might want to look at optimizing the affinity of the resources used by Informix; that is when NUMA awareness can be useful.
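    One way to check where the instance's memory actually landed (a sketch; "oninit" is the standard Informix server process name) is numastat from the numactl package:

        # per-node memory breakdown for one of the oninit processes
        numastat -p $(pgrep -x oninit | head -1)

    A large share of pages on nodes other than the one the CPU VPs run on is a hint that remote memory access is part of the problem.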

    And, in general, testing how different configurations work for you is always the right thing to do.

    Appreciate your comments!






    ------------------------------
    Vladimir Kolobrodov
    ------------------------------



  • 6.  RE: Informix NUMA shared memory access

    Posted Thu January 06, 2022 05:09 AM
    In fact, my stress test servers have 4 or 8 NUMA nodes.
    BenchmarkSQL 4.1.1,
    1000 warehouses of data, 500 or 1000 concurrent sessions.
    I tested on different NUMA servers and found that Informix had the best performance when using 2 NUMA nodes.

    ------------------------------
    ZhiWei Cui
    GBASE
    ------------------------------



  • 7.  RE: Informix NUMA shared memory access

    IBM Champion
    Posted Fri January 07, 2022 03:41 AM
    Hi,

    It is extremely dependent on the hardware architecture (for example Intel Platinum versus Gold CPUs).

    I will soon run comparative benchmarks on the Apple M1x chipset.
    New NVIDIA architectures are also interesting.

    In any case, the memory throughput itself is currently a greater limit than the CPU frequency. 

    Henri

    ------------------------------
    Henri Cujass
    leolo IT, CTO
    Germany
    IBM Champion 2021
    ------------------------------