The Round Robin IOPS limit parameter might not affect performance above 4K IOPS; it balances paths and mainly shows benefits below 4K IOPS. You can also use the latency-based NMP path selection policy, as in the sketch below.
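A minimal sketch of switching a device to latency-based Round Robin from the ESXi shell (available since vSphere 6.7 U1); the naa ID is a placeholder, substitute your own device:

    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxx --type=latency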
Original Message:
Sent: Fri August 30, 2024 02:49 PM
From: A. S.
Subject: FlashSystem 7300 PB-HA performance issue
Strange thing.
Changing the "Round Robin iops limit" parameter for these ESXi hosts from the default value (1000) to 1 gave no result.
I get the same result at 1 as at 1000.
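For anyone reproducing this, the active per-device Round Robin configuration can be verified from the ESXi shell before re-testing; a sketch with a placeholder naa ID:

    esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxx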
------------------------------
A. S.
Original Message:
Sent: Thu August 29, 2024 01:54 PM
From: A. S.
Subject: FlashSystem 7300 PB-HA performance issue
I have 2x32Gb FC per host. OK, tomorrow I'll run the test with 3 hosts.
And of course I configured everything according to the recommendations from the Redbook:
:)
------------------------------
A. S.
Original Message:
Sent: Thu August 29, 2024 01:42 PM
From: Nezih Boyacioglu
Subject: FlashSystem 7300 PB-HA performance issue
Hi,
No, it's not true. Your 100K IOPS at a 32K block size looks like your host's limit. Are your host HBAs 16Gbps? I ask because, with the ESXi host configured properly, 2x16Gbps ports give you about 3GB/s of throughput, which is the host HBA limit (and 100K IOPS x 32KB is roughly 3GB/s).
For better benchmark results I recommend working with 3 hosts with 32Gbps FC HBAs. If you are using VMs, resources are another issue. You must use the VMware Paravirtual SCSI adapter instead of LSI Logic SAS for the disks of these VMs. Thick Provision Eager Zeroed is also recommended. Multipathing must be Round Robin for the datastore, and I also recommend changing the "Round Robin iops limit" for these ESXi hosts from the default value (1000) to 1; see the sketch below.
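A minimal sketch of both multipathing settings from the ESXi shell, assuming a placeholder naa ID for the datastore device (substitute your own):

    # select the Round Robin path selection policy for the device
    esxcli storage nmp device set --device=naa.xxxxxxxx --psp=VMW_PSP_RR
    # switch paths after every I/O instead of every 1000
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxx --type=iops --iops=1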
------------------------------
Nezih Boyacioglu
Original Message:
Sent: Thu August 29, 2024 01:23 PM
From: A. S.
Subject: FlashSystem 7300 PB-HA performance issue
Hello!
Today I deconfigured HA and continued testing on one system, to exclude the influence of HA.
I created 6 LUNs and one more VM. Now two VMs reside on one physical host.
My setup:
vdisks of the 1st VM reside on LUNs 1, 3 and 5.
vdisks of the 2nd VM reside on LUNs 2, 4 and 6.
And I got this result:
This is much better than 100K IOPS, but I want more :)
When I added two more LUNs:
vdisks of the 1st VM reside on LUNs 1, 3, 5 and 7.
vdisks of the 2nd VM reside on LUNs 2, 4, 6 and 8.
I got the same 340K IOPS, but latency increased to 1.2ms.
Tomorrow I'll continue testing with more VMs and more physical hosts. And maybe more LUNs.
P.S.
An unverified source tells me that IBM FlashSystem has a 100K IOPS limit per LUN. Is that true?
------------------------------
A. S.
Original Message:
Sent: Thu August 29, 2024 10:48 AM
From: Nezih Boyacioglu
Subject: FlashSystem 7300 PB-HA performance issue
Hi A.S.
Is this a one-host result? You can configure vdbench to run a benchmark with multiple hosts; a sketch follows.
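A minimal multi-host vdbench parameter file sketch; the hostnames, vdbench install path, and LUN device paths are placeholder assumptions:

    * drive the workload on two hosts over ssh (hostnames are placeholders)
    hd=default,user=root,shell=ssh,vdbench=/opt/vdbench
    hd=hd1,system=host1.example.local
    hd=hd2,system=host2.example.local
    * raw test devices on each host (device paths are placeholders)
    sd=sd1,host=hd1,lun=/dev/sdb,openflags=o_direct,threads=16
    sd=sd2,host=hd2,lun=/dev/sdb,openflags=o_direct,threads=16
    * 8k random workload, 70% reads / 30% writes
    wd=wd1,sd=*,xfersize=8k,rdpct=70,seekpct=100
    * run at maximum rate for 60 seconds, report every 5
    rd=rd1,wd=wd1,iorate=max,elapsed=60,interval=5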
------------------------------
Nezih Boyacioglu
Original Message:
Sent: Wed August 28, 2024 03:45 PM
From: A. S.
Subject: FlashSystem 7300 PB-HA performance issue
Hello!
I just configured two FS7300s (v8.7.0) with policy-based High Availability. I did this following this guide
We have two sites; each contains one FS7300, 12 hosts with ESXi 8 and two Brocade switches.
Connected via two fabrics.
I configured the partnership, storage partitions, LUNs, hosts, locations, etc., all according to the guide.
After this I decided to check performance with IOPS tests. I used FIO and vdbench and got poor results; both utilities gave the same numbers.
Test parameters: block size 8K, random read/write, 70% read / 30% write. An equivalent FIO invocation is sketched below.
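A minimal FIO command line matching those parameters; the device path, job count, and queue depth are placeholder assumptions, not the exact values used:

    fio --name=randrw70 --filename=/dev/sdb --direct=1 --ioengine=libaio \
        --rw=randrw --rwmixread=70 --bs=8k --iodepth=32 --numjobs=4 \
        --runtime=60 --time_based --group_reporting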
And I got about 100K IOPS.
Then I tried changing the block size to 32K and got about the same 100K IOPS:
I think that my system (FS7300) is for some reason limiting IOPS to 100K, because I get the same IOPS regardless of block size.
Please give me a hint about which settings to check first.
I feel that the reason for the low IOPS is simple, but I can't find it yet :)
And now I'll go recheck all the settings in VMware to exclude their influence.
P.S. Each IBM FS7300 contains two 4-port 32Gb FC cards and 18 x 19.2TB FCM drives (DRAID6).
------------------------------
A. S.
------------------------------