Overview
Policy-Based High Availability (PBHA) was introduced for FC SCSI hosts in the 8.6.1.0 release and extended to iSCSI hosts in the 8.7.2.0 release. Starting with the 9.1.1 release, it also supports NVMe over FC and TCP hosts.
This allows NVMe-FC and NVMe-TCP hosts to be configured inside storage partitions, so hosts using these protocols can take advantage of active-active high availability within an IBM FlashSystem Grid or outside of it.
As shown in Figure 1, a partition is configured with volumes that are mapped to a VMware ESXi host (the host object can be configured over either the NVMe-FC or NVMe-TCP protocol) and can optionally be part of a host cluster.

Figure 1: ESXi host mapping over NVMe-FC/NVMe-TCP
Essential prerequisites to consider (a quick verification sketch follows this list):
• User-defined portsets must be configured on both the active management site (AMS) and the non-active management site (non-AMS).
• These portsets must be linked between the sites.
• Hosts must have access to both sites.
• Both the AMS and non-AMS sites must support NVMe host attachment, with supported adapters and FlashSystem code versions.
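Because linked portsets are an easy prerequisite to miss, a quick scripted check can help. Below is a minimal Python sketch, assuming SSH access to the CLI of both systems and the standard lsportset view; the addresses, credentials, and portset name are placeholders to replace with your own values.

    import paramiko  # third-party SSH library: pip install paramiko

    # Placeholder addresses and portset name; replace with real values.
    SITES = {"AMS": "ams.example.com", "non-AMS": "non-ams.example.com"}
    PORTSET = "nvme_portset"

    def run_cli(address, command, user="superuser", password="passw0rd"):
        """Run one CLI command over SSH and return its stdout as text."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(address, username=user, password=password)
        try:
            _, stdout, _ = client.exec_command(command)
            return stdout.read().decode()
        finally:
            client.close()

    # The user-defined portset must exist on both the AMS and non-AMS sites.
    for site, address in SITES.items():
        output = run_cli(address, "lsportset")
        status = "found" if PORTSET in output else "MISSING"
        print(f"{site}: portset '{PORTSET}' {status}")

Whether the two portsets are linked between the systems can then be confirmed in the GUI or in the portset views; the exact field names vary by code level.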

Figure 2: PBHA configuration overview
Figure 2 depicts the overall configuration. There are two FlashSystem clusters: the active management site (AMS) and the non-active management site (non-AMS). Fibre Channel or RDMA-based partnerships can be used for communication between the two clusters in the high-availability configuration, along with the required active IP quorum.
The AMS and non-AMS connect to VMware ESXi hosts through the fabric, and the host connections can be either NVMe-FC or NVMe-TCP.
On the host side, this can be a standalone ESXi host or a VMware vSphere Metro Storage Cluster (vMSC) configuration; both configurations work seamlessly.
The hosts are accessible from both sites, and host objects and the volumes mapped to them are reflected across the sites.
Redundant connectivity is established between all systems to ensure high availability and fault tolerance. A quick status check is sketched below.
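Before moving on to host configuration, it is worth confirming that the partnership and the IP quorum are healthy. A small sketch, reusing the run_cli helper from the prerequisites check and the standard lspartnership and lsquorum CLI views; interpreting the output is left to the operator, since column layouts vary by code level.

    # Reuses run_cli and SITES from the prerequisites sketch above.
    # The partnership to the partner system should be fully configured.
    print(run_cli(SITES["AMS"], "lspartnership"))

    # An IP quorum device should be listed and online for HA to function.
    print(run_cli(SITES["AMS"], "lsquorum"))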
Configuring an NVMe-FC or NVMe-TCP Host
So, how does a user configure an NVMe-FC or NVMe-TCP host object?
Inside a partition, the ‘Add Host' dialog now lists two more options, “Fibre Channel (NVMe)” and “TCP (NVMe)”. Select the appropriate host connection option and enter the required details: hostname, NQN, and linked portsets. Only portsets eligible for the selected protocol are displayed. Ensure that these portsets are already linked to the partner system, which is one of the prerequisites covered above. A CLI equivalent is sketched after Figure 3.

Figure 3: Adding a host
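For scripted deployments, the GUI step above has a CLI equivalent. The sketch below reuses the run_cli helper from the prerequisites check; the host name and NQN are placeholders, and the mkhost protocol keyword shown ('tcpnvme' for NVMe-TCP) is an assumption to verify against the CLI reference for your code level.

    # Placeholders; replace with the real host name and the host's NQN.
    HOST_NAME = "esxi-host-01"
    HOST_NQN = "nqn.2014-08.com.example:nvme:esxi-host-01"

    # Create an NVMe-TCP host object bound to the linked portset.
    # For an NVMe-FC host a different protocol keyword applies; check
    # the mkhost documentation for the exact value on your release.
    command = (
        f"mkhost -name {HOST_NAME} -nqn {HOST_NQN} "
        f"-protocol tcpnvme -portset {PORTSET}"
    )
    print(run_cli(SITES["AMS"], command))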
Once the host object is configured on the AMS site, the protocol type (NVMe-FC or NVMe-TCP) and the port definitions are displayed. After this, map volumes to these host objects.
Once this is done, the same host object and protocol configuration details are automatically created on the non-AMS site, ensuring availability across both sites.
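Because this synchronization is automatic, the same host object should be visible on both systems once it is created on the AMS. A minimal check, again reusing the run_cli helper and the standard lshost view:

    # The host object created on the AMS should also appear on the non-AMS.
    for site, address in SITES.items():
        hosts = run_cli(address, "lshost")
        status = "present" if HOST_NAME in hosts else "MISSING"
        print(f"{site}: host object '{HOST_NAME}' {status}")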
vSphere View: Mapped Volumes
On the host side, in the vSphere Client, the user can view the mapped volumes under 'Storage Devices' (a command-line check is sketched after Figure 4).
• Volumes mapped to an NVMe-TCP host object appear as NVMe TCP disks.
• Volumes mapped to an NVMe-FC host object appear as NVMe Fibre Channel disks.

Figure 4: Volumes mapped on host
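The same information can be pulled from the ESXi command line rather than the vSphere Client. A sketch assuming SSH is enabled on the ESXi host and reusing the run_cli helper from earlier; the address is a placeholder, and both esxcli commands are standard in recent vSphere releases.

    # Placeholder ESXi management address; SSH must be enabled on the host.
    ESXI_ADDRESS = "esxi-host-01.example.com"

    # List the NVMe controllers and namespaces (the mapped volumes)
    # visible to the ESXi host.
    print(run_cli(ESXI_ADDRESS, "esxcli nvme controller list", user="root"))
    print(run_cli(ESXI_ADDRESS, "esxcli nvme namespace list", user="root"))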
NOTE: This blog provides a high-level overview of the feature. It is not intended to replace the official documentation, which should be referred to for detailed information on limits, restrictions, and configuration best practices. Refer to the IBM documentation for limits and restrictions before implementation.
Reference Material
Implementing a high-availability solution: https://www.ibm.com/docs/en/flashsystem-9x00/9.1.1?topic=replication-high-availability
Configuring hosts: https://www.ibm.com/docs/en/flashsystem-9x00/9.1.1?topic=concepts-hosts