Hello Community and good day,
apologies for my belated follow-up.
Looking at the SATP rules in the log bundle I provided, VMware Technical Support is seeing two kinds of custom rules: one setting iops=1 and the per-device ones setting enable_ssd:
==================================================
VMW_SATP_ALUA IBM 2145 user tpgs_on VMW_PSP_RR iops=1 IBM arrays with ALUA support
VMW_SATP_ALUA naa.600507681281012be00000000000000d enable_ssd user
VMW_SATP_ALUA naa.600507681281012be00000000000000e enable_ssd user
VMW_SATP_ALUA naa.600507681281012be00000000000000f enable_ssd user
VMW_SATP_ALUA naa.600507681281012be000000000000010 enable_ssd user
==================================================
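For reference, these rules can be dumped directly on each host; a minimal sketch (the grep pattern is just an illustration to narrow the output to the relevant entries):
==================================================
# List all SATP claim rules and keep only the IBM 2145 and user-added entries
esxcli storage nmp satp rule list | grep -E "IBM|user"
==================================================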
Because of that, they asked me to remove the existing custom rules and add a single generic rule carrying both options, (iops=1) and (enable_ssd):
==================================================
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V IBM -M "2145" -c tpgs_on --psp="VMW_PSP_RR" -e "IBM arrays with ALUA support" -O "iops=1" --option "enable_ssd"
==================================================
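The removal commands themselves were not spelled out, but presumably they would look something like the sketch below ("esxcli storage nmp satp rule remove" takes the same parameters as "rule add", and they must match the rule being removed; one remove per enable_ssd device):
==================================================
# Remove the existing iops=1 rule (parameters must match the rule as added)
esxcli storage nmp satp rule remove -s VMW_SATP_ALUA -V IBM -M "2145" -c tpgs_on --psp="VMW_PSP_RR" -e "IBM arrays with ALUA support" -O "iops=1"
# Remove the per-device enable_ssd rules (repeat for each device)
esxcli storage nmp satp rule remove -s VMW_SATP_ALUA -d naa.600507681281012be00000000000000d -o enable_ssd
==================================================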
Although I haven't tried this yet, it does make sense to me to replace the existing rules with a single generic rule carrying both options (iops=1 and enable_ssd) in order to avoid conflicting SATP rules.
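One caveat worth noting: SATP rules are only evaluated at claim time, so the new rule would presumably not take effect on the already-presented devices until they are unclaimed and reclaimed (or the host rebooted); a sketch for one device (reclaim fails if the device is in use, e.g. backing a mounted datastore):
==================================================
# Force the device to be unclaimed and reclaimed so the new rule is evaluated
esxcli storage core claiming reclaim -d naa.600507681281012be00000000000000d
==================================================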
What do you think about that ?
Thanks and Regards,
------------------------------
Massimiliano Rizzi
------------------------------
Original Message:
Sent: Wed February 15, 2023 12:34 PM
From: Nezih Boyacioglu
Subject: Do FlashSystem 5200 systems support a mix of fabric and directly attached hosts at the same time ?
Hi Massimiliano,
I wrote that section in the Redbook :)
I have also observed that in some cases the multipath algorithm falls back to MRU. While I don't know exactly why this happens, I prefer to list all claim rules and remove any that may take effect and conflict.
Regards
------------------------------
Nezih Boyacioglu
Original Message:
Sent: Wed February 15, 2023 11:58 AM
From: Massimiliano Rizzi
Subject: Do FlashSystem 5200 systems support a mix of fabric and directly attached hosts at the same time ?
Hello Community and good day,
just a quick question here while configuring the new FS5200. First, this NVMe storage is a beast :)
As part of tuning the ESXi hosts for optimal IBM FS5200 storage performance, per both the IBM FlashSystem and VMware Implementation and Best Practices Guide and Nezih's advice, we checked that each ESXi host sees the FS5200 volumes as Flash disks (instead of HDD) and that the multipath algorithm is Round Robin with an I/O Operation Limit of 1.
Accordingly, prior to presenting the FS5200 volumes, we manually added a custom claim rule to each ESXi host in the cluster in order to set the path selection policy and the I/O operation limit, using the command below:
==================================================
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -V IBM -M "2145" -c tpgs_on --psp="VMW_PSP_RR" -e "IBM arrays with ALUA support" -O "iops=1"
==================================================
Afterwards, we presented the FS5200 volumes to each ESXi host and ran "esxcli storage nmp device list" on each host in order to confirm that the presented FS5200 devices were claimed by the custom claim rule as expected.
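For anyone following along, the check can be narrowed to a single device; the "Path Selection Policy" and "Path Selection Policy Device Config" lines in the output should show VMW_PSP_RR and iops=1 respectively (a sketch, using one of our device IDs):
==================================================
# Show the NMP configuration for a single FS5200 device
esxcli storage nmp device list -d naa.600507681281012be00000000000000d
==================================================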
As soon as we changed the presented FS5200 devices to Flash on each ESXi host (prior to creating the datastore on one ESXi host), the PSP for the presented FS5200 devices automatically switched from "VMW_PSP_RR" to "VMW_PSP_MRU".
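(For context, marking a device as Flash from the CLI is typically done with a per-device enable_ssd claim rule followed by a reclaim of the device, along the lines of the sketch below; the reclaim re-runs rule matching, which is presumably where the PSP flipped:)
==================================================
# Tag the device as Flash via a per-device claim rule, then reclaim it
esxcli storage nmp satp rule add -s VMW_SATP_ALUA -d naa.600507681281012be00000000000000d -o enable_ssd
esxcli storage core claiming reclaim -d naa.600507681281012be00000000000000d
==================================================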
We ran the commands below on each ESXi host in order to revert the PSP to "VMW_PSP_RR"; however, it appears that the presented FS5200 devices are now claimed by the default claim rule for IBM 2145 devices, and not by the custom claim rule we added, which sets the Round Robin IOPS limit to 1:
==================================================
esxcli storage nmp device set --device naa.600507681281012be00000000000000d --psp VMW_PSP_RR
esxcli storage nmp device set --device naa.600507681281012be00000000000000e --psp VMW_PSP_RR
esxcli storage nmp device set --device naa.600507681281012be00000000000000f --psp VMW_PSP_RR
==================================================
This definitely sounds like an issue on the VMware side, but I just wanted to check whether someone has already observed it.
As usual, thank you in advance for your kind support.
------------------------------
Massimiliano Rizzi
Original Message:
Sent: Tue February 07, 2023 03:17 AM
From: Nezih Boyacioglu
Subject: Do FlashSystem 5200 systems support a mix of fabric and directly attached hosts at the same time ?
Hi Massimiliano,
The simple answer is yes. First you need to disable NPIV on the FS5200 (it is enabled by default) in order to support direct-attached hosts:
a) chiogrp -fctargetportmode transitional 0
b) chiogrp -fctargetportmode disabled 0
and then continue your procedure step by step. You also need to check that ESXi sees the FS5200 volumes as Flash disks (instead of HDD), and the multipath algorithm must be Round Robin. We also recommend setting the Round Robin IOPS limit to 1.
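For reference, the port mode can be verified before and after each chiogrp step; a sketch from the FlashSystem CLI (assuming the detailed lsiogrp view, which includes the fctargetportmode attribute on current code levels):
==================================================
# Show the detailed view of I/O group 0, including fctargetportmode
lsiogrp 0
# Verify the Fibre Channel target ports and which ones permit host I/O
lstargetportfc
==================================================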
Regards,
------------------------------
Nezih Boyacioglu
Original Message:
Sent: Mon February 06, 2023 04:52 AM
From: Massimiliano Rizzi
Subject: Do FlashSystem 5200 systems support a mix of fabric and directly attached hosts at the same time ?
Hello Community and good day,
my question is: can IBM FlashSystem 5200 systems ***temporarily*** support a mix of SAN switch-attached and directly attached host connections at the same time in a VMware environment ?
I am trying to figure out if the steps below will work in a scenario comprised of 1x SAN switch-attached IBM FlashSystem 5200 box with Fibre Channel front-end ports and 3x VMware ESXi servers. The final goal is to end up with 3x directly attached VMware ESXi servers in order to decommission the old SAN switches with no downtime (a command-level sketch of the per-host ESXi steps follows the list):
==================================================
- place one ESXi host at a time in maintenance mode
- unplug the HBAs from the Fibre Channel switches
- plug the HBAs into the Fibre Channel ports on the back of the IBM FlashSystem 5200
- resume the ESXi host
- repeat with the other hosts
==================================================
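If it helps, a minimal per-host sketch of the ESXi-side commands (assuming SSH/shell access to the host; the same steps can of course be done from vCenter):
==================================================
# Enter maintenance mode (VMs must be evacuated or powered off first)
esxcli system maintenanceMode set --enable true
# ...recable the HBAs from the SAN switches to the FS5200...
# Rescan all HBAs so the direct-attached paths are discovered
esxcli storage core adapter rescan --all
# Confirm the expected paths to the FS5200 devices are present and active
esxcli storage core path list
# Exit maintenance mode and move on to the next host
esxcli system maintenanceMode set --enable false
==================================================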
I would like to stress that this will be a ***temporary*** solution.
Any help will be greatly appreciated.
Thanks and Regards,
M.
------------------------------
Massimiliano Rizzi
------------------------------