>>>> Yesterday, using the VIOS Performance Advisor tool, as suggested by Satid, we noticed a "strange" behavior in the case of Pure: it seems all the I/O is going out through just one of the FC ports, despite 4 ports being configured, and on that one port occupancy reaches 100%. <<<<
It appears you are now closer to identifying the cause of the disk performance issue, which is likely that MPIO is not working. Please check this IBM i Technote to see whether it helps you with the verification: How to verify IBM i Disk Multipath Status at https://www.ibm.com/support/pages/how-verify-ibm-i-disk-multipath-status. Comparing what you see in the PoC LPAR against what you see in your DR LPAR may also help with the checking.
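On the VIOS side you can also quickly check whether MPIO paths exist for the Pure LUNs and whether they are spread across the FC adapters. A minimal sketch, assuming hdisk2 is one of the Pure LUNs (run from the root shell via oem_setup_env, or use the equivalent padmin commands):
lspath -l hdisk2                                   # every path for that disk and its state (Enabled/Failed/Missing)
lspath | awk '{print $1, $3}' | sort | uniq -c     # path count per state and per parent fscsi adapter
iostat -m 30 2                                     # per-path I/O statistics, to see which paths actually carry traffic
If all Enabled paths hang off a single fscsi device, or only one path shows I/O, that would match the single-port behavior you described.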
You should create 4 (or at least 2) client-side vSCSI adapters on the IBM i side for multipathing. I remember that in recent VIOS releases the client adapter is created automatically when you create the server-side vSCSI adapter, and that should apply in your case. But I cannot find any information on whether you need to additionally configure multipath in IBM i for vSCSI adapters from VIOS. You could try to find a Redbook on VIOS and its vSCSI support.
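Whatever the answer to that turns out to be, one thing you can verify on the IBM i side: when a LUN is reported through more than one vSCSI client adapter and IBM i is multipathing it, the disk unit resource name shows as DMPxxx instead of DDxxx. A rough way to check (the exact option numbers may vary by release):
STRSST  ->  3. Work with disk units  ->  1. Display disk configuration  ->  Display disk path status
Each multipathed unit should show more than one path there; a unit still named DDxxx with a single path is not being multipathed.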
Original Message:
Sent: Fri May 03, 2024 07:22 AM
From: Marcos D. Wille
Subject: IBM i performance with vSCSI external storage
Hello Virgile,
We have created a controlled environment for the PoC and both partitions have the same disk distribution, i.e. both with 8 LUNs of ~640 GB each.
The primary objective is not exactly to get the best possible performance, but to compare the performance of two LPARs that are identical in terms of resources (processor and memory) using two different storage solutions: one getting its disks from Pure and the other from a scenario we are already used to, in this case disks on the V7000. We were sure that performance on the flash storage (Pure) would be much better, so we were surprised by this result.
Yesterday, using the VIOS Performance Advisor tool, as suggested by Satid, we noticed a "strange" behavior in the case of Pure: it seems all the I/O is going out through just one of the FC ports, despite 4 ports being configured, and on that one port occupancy reaches 100%. It could be a problem in the multipath handling of Pure's ODM driver. We'll study this a bit more and I'll share the results.
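To confirm at the adapter level that only one port is carrying the traffic, a simple check we can run on each VIOS (assuming the ports are fcs0 through fcs3; adjust to the real names) is to compare the fcstat traffic counters of the ports:
for p in fcs0 fcs1 fcs2 fcs3
do
  echo "== $p =="
  fcstat $p | grep -iE "input requests|output requests|input bytes|output bytes"
done
If the byte counters keep growing on only one of the four ports, that confirms what the Performance Advisor is showing.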
VIOS 01 - Performance Advisor
VIOS 02 - Performance Advisor
------------------------------
Marcos D. Wille
Original Message:
Sent: Thu May 02, 2024 06:24 PM
From: Virgile VATIN
Subject: IBM i performance with vSCSI external storage
Hum,
I don't know if it can solve your issue, but from various IBM university sessions on external storage (and for internal storage), IBM i works better with more arms (disks). It is better to have many small disks than a few big ones; for example, 50 disks of 100 GB run better than 8 x 640 GB. This remains true with FlashCore Modules.
In WRKSYSACT, what do you see (jobs and I/O)?
Do you have the same config on your V7000?
Regards
------------------------------
Virgile VATIN
Original Message:
Sent: Thu May 02, 2024 02:49 PM
From: Marcos D. Wille
Subject: IBM i performance with vSCSI external storage
Hello Virgile,
There are 8 LUNs of 640 GB each, totaling ~5 TB.
------------------------------
Marcos D. Wille
Original Message:
Sent: Thu May 02, 2024 01:33 PM
From: Virgile VATIN
Subject: IBM i performance with vSCSI external storage
Hi,
Is this 8 LUNs of 640 GB each (8 x 640) or 8 LUNs of 80 GB?
Regards
------------------------------
Virgile VATIN
Original Message:
Sent: Thu May 02, 2024 10:50 AM
From: Marcos D. Wille
Subject: IBM i performance with vSCSI external storage
Hello everyone,
More details about PoC:
Satid:
This PoC is running on a Power8 server, 8286-42A (our D/R environment).
In this server we have two VIOS at level 3.1.4.31, each with 1 dedicated processor core and 16 GB of RAM.
Each VIOS has 3 FC cards (feature codes: 5273, 5735, 577D, EL2N, EL58), with firmware at the latest level too.
The IBM i partition is on V7R4 TR8 (cumulative PTF package C3117740), with 1 capped shared processor, 1 virtual processor and 128 GB of RAM.
During testing I monitor VIOS CPU with NMON, and it does not exceed 10% usage. But disk usage is about 50% (half for write and half for read operations).
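For reference, a plain nmon recording on each VIOS captures the per-disk and per-adapter numbers as well (the interval and sample count below are arbitrary):
nmon -fT -s 30 -c 120     (writes a .nmon file with 30-second samples for about an hour)
The resulting file can then be opened with the nmon analyser spreadsheet to see per-adapter and per-disk utilization over the test window.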
Regarding the creation of the LUNs in Pure Storage, we don't manage it ourselves; we just requested the creation of 8 LUNs of 640 GB each.
From what we were told, the process is similar to the V7000: the LUNs are created and presented to a "host" that points to the WWPNs of the VIOS FC ports. The SAN is then zoned to present the LUNs to the VIOS.
To deliver them to the IBM i partition, we used the "Virtual Storage Management" menu in the HMC and directly mapped the "physical" volumes to the vSCSI device ID of the IBM i LPAR; this was done in each of the VIOSes.
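For reference, that HMC mapping should be equivalent to the following on each VIOS command line (vhost0, hdisk2 and the device name ibmi_lun1 are only examples; check the real vhost with lsmap first):
lsmap -vadapter vhost0        (identify which vhost serves the IBM i client adapter)
mkvdev -vdev hdisk2 -vadapter vhost0 -dev ibmi_lun1
lsmap -vadapter vhost0        (hdisk2 should now appear as a backing device under that vhost)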
Tsvetan:
Following the recommendation of the Pure staff (and also IBM), we installed the Pure ODM driver on both VIOS, and all disks presented by Pure use this driver, as you can see below (output of "lsdev -Cc disk"):
hdisk2 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk3 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk4 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk5 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk6 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk7 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk8 Available 01-00-01 PURE MPIO Drive (Fiber)
hdisk9 Available 01-00-01 PURE MPIO Drive (Fiber)
We've just double-checked the recommended parameters in the links you sent us, and they are all set as recommended (a quick way to re-verify them is sketched right after the list below).
Best Practices for AIX:
Install the ODM modification on the VIO Server and AIX and validate that the following parameters are set properly:
• Algorithm to shortest_queue: chdev -l hdiskX -a algorithm=shortest_queue
• Failover to fast_fail: chdev -l fscsi0 -a fc_err_recov=fast_fail -P
• dyntrk: chdev -l fscsi0 -a dyntrk=yes -P
• queue_depth=256 (set by Pure ODM module)
• max_transfer to 0x400000
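A quick way to re-verify those settings on each Pure hdisk and on the FC protocol devices (device names are examples) is:
lsattr -El hdisk2 -a algorithm -a queue_depth -a max_transfer
lsattr -El fscsi0 -a fc_err_recov -a dyntrk
Keep in mind that attributes changed with chdev ... -P only take effect on the running device after it is reconfigured or the VIOS is rebooted.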
Vincent:
Using the V7000 to virtualize the Pure storage was our initial idea, even to "clone" the LUNs we would need, but the Pure people told us it wouldn't be possible with the V7000, so we didn't dig deeper into that solution.
Regards,
------------------------------
Marcos D. Wille
Original Message:
Sent: Tue April 30, 2024 01:34 PM
From: Marcos D. Wille
Subject: IBM i performance with vSCSI external storage
Hello everyone!
We have started a PoC to test the connection between Pure Storage and our "IBM i" environment.
Our environment consists of IBM i LPARs connected to V7000 storage via NPIV (with tiering enabled across three tiers: NL-SAS, SAS and SSD). We also have Linux/SLES/SAP LPARs (or VMs), but they won't be part of the PoC at the moment.
Unfortunately, IBM i cannot talk to Pure Storage via NPIV, so with the resources we currently have we opted for a configuration with SAN connectivity between Pure Storage and the VIOS servers, and then vSCSI between the VIOS and the IBM i LPARs.
Four 8 Gbps FC physical ports were zoned, two for each VIOS, each on a SAN fabric.
We created ONE virtual SCSI server adapter in each VIOS to communicate with ONE vSCSI client adapter in the IBM i partition.
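As a quick sanity check on those ports (the device name is an example), fcstat on each VIOS confirms that every zoned port is logged into the fabric at the expected speed:
fcstat fcs0 | grep -i "port speed"
This should report both the supported and the running speed (8 Gbps in this case) for each port.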
With this configuration we achieved slower response times than with our V7000 storage (in fact, for this PoC we kept two partitions identical in terms of processor and memory, one on the V7000 and one on Pure Storage). We ran tests on batch and online business process execution, and we also compared response times for IPLs and for disk-to-disk backup operations (SAVLIB and SAVOBJ to *SAVF). In all cases we obtained the same or worse times with Pure Storage. This was a bit of a surprise, given that Pure Storage is all-flash, while our V7000 is 'hybrid' storage (mechanical disks and SSDs, which in itself has inferior I/O performance compared to flash).
One detail that caught our attention was the output of the WRKDSKSTS command, where the "% Busy" column was always above 50%, quite unusual in our experience with the v7000, which very rarely exceeds 15%.
To remove any doubt about contention and/or a capacity limit on the virtual adapters, we created 3 more virtual SCSI adapters in each VIOS. We finished the tests with 4 adapters in each VIOS and 8 in the IBM i partition, leaving only 2 LUNs on each adapter, but the times were even worse: not by much, but worse than with only 1 adapter per VIOS and 2 (one for each VIOS) in the IBM i partition.
Has anyone experienced this situation? Is this really the expected result?
Best regards!
------------------------------
===============
Marcos Daniel Wille
===============
------------------------------