Primary Storage

  • 1.  Informix and FS5200

    Posted Thu December 08, 2022 10:56 AM

    In a new installation there is a Lenovo server connected to a FS5200 through a SAN24B-6 switch (both 16G), which will replace a server that is in production.

    RHEL 9.0 and Informix are installed on the server. As a test before going into production, the monthly closing was run, but it took 3 hours longer than on the current production server, which has SSD disks.

    Disk write tests were also performed.

    The results:

    New Lenovo server connected to the FS5200:


    [root@dbssrv02 backup]# dd if=/dev/zero of=/backup/test1.img bs=1G count=1 oflag=dsync

    1+0 records in

    1+0 records out

    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.22652 s, 875 MB/s

     [root@dbssrv02 backup]# dd if=/dev/zero of=/backup/test2.img bs=512 count=1000 oflag=dsync

    1000+0 records in

    1000+0 records out

    512000 bytes (512 kB, 500 KiB) copied, 0.469169 s, 1.1 MB/s

    Current production server with SSD drives (the one to be replaced):

    [root@dbssrv02 /]# dd if=/dev/zero of=/test1.img bs=1G count=1 oflag=dsync

    1+0 records in

    1+0 records out

    1073741824 bytes (1.1 GB, 1.0 GiB) copied, 1.06968 s, 1.0 GB/s

    [root@dbssrv02 /]# dd if=/dev/zero of=/test2.img bs=512 count=1000 oflag=dsync

    1000+0 records in

    1000+0 records out

    512000 bytes (512 kB, 500 KiB) copied, 0.0892114 s, 5.7 MB/s

    Do you have a graph or link showing FS5200 storage performance (transfer rate in MB/s vs. block size), or Informix best practices for the FS5200?
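    For reference, such a throughput-vs-block-size curve can also be measured directly with a simple sweep (a sketch only; the file path, block sizes, and counts are assumptions, chosen so each run writes the same total amount):

    ```shell
    # Sketch: dsync write throughput vs. block size, ~16 MiB total per run.
    # TESTFILE is an assumption; point it at the filesystem under test.
    TESTFILE=/tmp/bs_sweep.img
    for spec in 4k:4096 64k:256 1M:16; do
        bs=${spec%:*}       # block size for this run
        count=${spec#*:}    # count chosen so bs*count is constant
        echo "bs=$bs"
        dd if=/dev/zero of="$TESTFILE" bs="$bs" count="$count" oflag=dsync 2>&1 | tail -n 1
    done
    rm -f "$TESTFILE"
    ```

    Plotting the MB/s figure from each run against the block size gives the curve in question for that particular host and volume.
    
    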

    Jorge Eduardo Barriga Rios


  • 2.  RE: Informix and FS5200

    Posted Fri December 09, 2022 03:58 AM
    Edited by System Wed March 08, 2023 03:53 PM
    Hello Jorge,

    At first I did not want to answer, because a lot of additional information would be needed to make an appropriate statement.
    But then my curiosity got the better of me, and I am interested in what others will write about it.

    Information that would be important, among other things:
    * How is the storage system equipped and installed?
    * What media is used and how many, what raid size? SCMs, SSDs, FCMs; 9+2P+S...
    * Expansions?
    * How much memory? 64, 256, 512GB - Write Cache is always 12GB
    * Which firmware?
    * If different media are mixed, is EasyTier used?
    * Which pools? Standard or DataReduction?
    * How full is the storage system?
    * How many physical connections from storage to switch and from switch to server? Max 1.75 gigabytes per second per physical connection.
    * How many servers are connected to the storage?
    * How many volumes are presented to the server?
    * What is the multipath configuration? multipath -ll
    * How is the LVM configured?
    * What does the zoning look like and how is the switch configured?
    * Switch information like congestions or port errors
    * What does the workload look like? What you are showing us is a single-thread throughput test and probably does not reflect what Informix is doing on the storage system. Better to use fio or a DB-specific tool.
    * Is it monitored? Maybe Spectrum Control, Storage Insight Pro, or Stor2rrd? Is there any data from the Informix workload, like I/O size, read/write ratio, or write-cache problems?
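    To illustrate the fio point: a multi-threaded mixed random workload comes much closer to a database pattern than single-threaded dd. A sketch of a job file follows; every value here is an assumption to be replaced with the measured Informix I/O pattern (block size, read/write ratio, queue depth):

    ```ini
    ; Hypothetical fio job approximating an OLTP-style pattern.
    ; All parameters are assumptions; substitute the measured workload.
    [informix-like]
    filename=/backup/fio-test.img
    size=1g
    bs=8k
    rw=randrw
    rwmixread=70
    ioengine=libaio
    iodepth=16
    numjobs=4
    direct=1
    runtime=60
    time_based
    group_reporting
    ```

    Run with `fio jobfile.fio`; the report then shows IOPS, bandwidth, and latency percentiles per workload rather than a single streaming number.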

    There is a sizer from IBM that always assumes an optimal environment and you can fill it with many parameters.

    Simply put: with 100% write I/O, no caching, streaming effects, 12 FCMs, and 16 Gb/s links, I think 3 GB/s should be the best you can expect.
    According to the data sheet, IBM specifies 21 GB/s for a fully expanded cluster; a single system is a quarter of that, therefore 5.25 GB/s.

    Best practices are relatively simple:
    * Standard pools
    * As many media in storage as possible (12)
    * Many volumes to the server (number of cores of the storage system) >= 16
    * Max 80% fill level for flash storage
    * Automatic distribution of volumes to nodes
    * Min 4 physical paths to SAN and min 2 paths from server to SAN
    * Fixed speeds in SAN - no autonegotiation
    * For LVM - check multipath and connect volumes from storage via LVM, pay attention to striping, single stripe size and round robin
    * Same applies if database internal functions can be used
    * For experienced users adjust the queue depth (max queue depth / volumes) (consider server and storage)
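    The LVM point above can be sketched as follows. The device names, volume count, and sizes are assumptions (take the real aliases from `multipath -ll`); the 256 KB stripe size is chosen to match the FlashSystem's internal block size:

    ```shell
    # Hypothetical multipath device names - substitute your own.
    pvcreate /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
    vgcreate vg_ifx /dev/mapper/mpatha /dev/mapper/mpathb /dev/mapper/mpathc /dev/mapper/mpathd
    # Stripe across all four volumes (-i 4) with a 256 KiB stripe size (-I 256),
    # so sequential I/O is spread over all volumes and paths.
    lvcreate -n lv_dbspace -i 4 -I 256 -L 500G vg_ifx
    ```

    This way a single sequential stream is served by all presented volumes instead of one, which is exactly what the single-volume dd test above cannot show.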

    It was fun to put this together, and as mentioned above, I'm curious what others contribute.

    Greetings Patrik

    Patrik Groß

  • 3.  RE: Informix and FS5200

    Posted Fri December 09, 2022 04:02 AM

    "According to the data sheet, IBM specifies 21 GB/s for a fully expanded cluster; a single system is a quarter of that, therefore 5.25 GB/s."

    Of course, this is not the maximum write performance; that should be much lower.

    Patrik Groß

  • 4.  RE: Informix and FS5200

    Posted Fri December 09, 2022 12:53 PM

    Thank you very much for the answer
    All equipment is new

    There are only three servers connected for now.

    The connection is as follows:

    The features of the FS5200 are:

     Memory: 256 GB

     8 drives: 4.8 TB FCM NVMe

     Firmware code_level: (build 157.13.2208031717000)
    The configuration:

    The configuration in Linux was done as indicated in the following link.

    The following redbook does not refer to Informix.

    Jorge Eduardo Barriga Rios

  • 5.  RE: Informix and FS5200

    Posted Fri December 09, 2022 12:55 PM

    Hola Jorge!

    I don't have any specific suggestion, but I tried the same command on my not-so-modern Lenovo host connected to my FS5200, just to confirm we should be expecting more. My FS5200 has a TRAID6 array with 6 NVMe drives and 256GB per node. I tried it on 2 volumes, one compressed and the other fully allocated.

    Manufacturer: LENOVO
    Product Name: System x3650 M5 -[5462AC1]
    Serial Number: 06FVHEE
    Memory: 128GB
    FC HBA / FC switch: 32Gb

    This is the result:

    [09:51:55] gdlsvt4-RHEL7p9-77-146:~ # dd if=/dev/zero of=/fs_fa/test1.img bs=1G count=1 oflag=dsync
    1073741824 bytes (1.1 GB) copied, 1.65853 s, 647 MB/s

    [09:52:33] gdlsvt4-RHEL7p9-77-146:~ # dd if=/dev/zero of=/fs_tp/test1.img bs=1G count=1 oflag=dsync
    1073741824 bytes (1.1 GB) copied, 1.64655 s, 652 MB/s

    I don't have informix to test.

    Luis Lopez

  • 6.  RE: Informix and FS5200

    Posted Mon December 12, 2022 04:15 AM
    Hola Jorge, Hola Luis,

    I'll say it directly: Don't use TRAID!

    Since DRAID became available, all TRAID configurations (except RAID 1) are obsolete. They are sometimes still offered, but DRAID is strongly recommended.

    The second thing is: if you want to use Informix for the performance testing, then you have to optimize its settings to get good results. I don't know how, but I know that the figures Patrik posted are, let's say, normal values. If the application is not set up right, even the best storage cannot fix it.

    And about your "dd" settings, Luis: maybe use a smaller block size and, on the other hand, a higher count. And if you have more than one volume attached, then start one dd per volume. For example, the FlashSystem works with a 256 KB block size.
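    That suggestion can be sketched like this. The `/tmp/vol*` directories are stand-ins (assumptions) for real per-volume mount points; the 256 KB block size matches the FlashSystem's internal block size mentioned above:

    ```shell
    # One dd per volume, run in parallel; aggregate throughput is the
    # sum of the streams. Substitute one real mount point per volume.
    for vol in /tmp/vol1 /tmp/vol2 /tmp/vol3 /tmp/vol4; do
        mkdir -p "$vol"
        # 256 KB blocks, 64 blocks = 16 MiB per stream (kept small here).
        dd if=/dev/zero of="$vol/test.img" bs=256k count=64 oflag=dsync &
    done
    wait
    ```

    With a load-balancing multipath policy, the parallel streams can use all paths at once, which a single dd never does.
    
    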


    Đorđe (in Spanish: Jorge ;-)

    Dorde Knezevic

  • 7.  RE: Informix and FS5200

    IBM Champion
    Posted Wed December 14, 2022 01:57 PM
    The last section of Dorde's reply is correct. Your dd hits the storage with a 1G I/O request each time. You have 2 x 16G connections to the SAN switch, each of which can deliver approx. 1.5 GB/s. If your multipath configuration uses a load-balancing policy, you can achieve 3 GB/s total. This means just 3 I/O requests utilize the whole bandwidth.

    First of all, you need to know the actual workload (I/O pattern) of your Informix (block size, read/write ratio),
    and then you should use ddbench instead of dd: give ddbench your Informix I/O pattern and let it run that pattern against your storage system for some time.
    Actually, I use a range of block sizes: 8 KB (classical OLTP workload block size), 16, 32, 64, and 128 KB (most backup software uses 128 KB block-size sequential reads).

    Nezih Boyacioglu