Hello Satid
I don't know if you are familiar with Storwize-based storage devices (when I was working, Storwize was the name of the family of devices more or less managed by an SVC, such as the V7k, V9k, ...; I am not aware of the name IBM uses right now), so let me explain what the setup was.
A Storwize device provides two controllers, with only one active at any time. This allows controller failures or software updates/upgrades without disruption. Best practice is to use both controllers for every volume. The result is that the number of paths from an LPAR to the LUNs is doubled: half of the paths are active (going through the active controller) and half are passive (going through the other controller).
HyperSwap provides the capability to connect two storage devices, configure synchronous replication in both directions for selected volumes, and create a virtual volume for each pair of real volumes (this specific attribute was the reason for a tricky copy description configuration step for PowerHA on IBM i, but that is another story, and maybe it is easier now). In a HyperSwap relationship, only one device is active at any time from the volume point of view (replication runs only from the active device to the other), but all paths to the inactive device must exist, in passive state for the IBM i LPAR, so that an automatic failover (including replication reversal) can take place if the active device fails.
With a simple setup, I mean 1 VIOS using 1 FC adapter and without HyperSwap, you have 2 paths: 1 active path to the active controller of the storage device and 1 passive path to the inactive controller. Here, you have redundancy only if you lose the active storage device controller.
Now, if you use 2 distinct FC adapters on the VIOS, connected to two distinct switches (and preferably zoned to distinct ports on the storage device), you get 4 paths (2 active to the active controller and 2 passive to the inactive controller). You add redundancy at the switch (and FC adapter) level.
Of course, you will set up dual VIOS, both with a similar configuration, and you get 8 paths (4 active to the active controller and 4 passive to the inactive controller). This is the configuration we can see in Rob's post. You add redundancy at the VIOS level.
And now, after setting up HyperSwap with another storage device, you get my 16 paths: still the same 4 active paths to the active controller of the active device, 4 passive paths to the inactive controller of the active device, and 8 passive paths to the inactive device (4 to each controller).
With this setup, you have multiple redundancy levels. Of course, you can lose any single item (an FC adapter, a VIOS, a switch, a device controller or an entire device) without disruption. But you can lose up to 3 items at the same time and the LPAR can still run (for instance, at the same time, a VIOS, a switch, and a device controller, or even an entire device).
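The path arithmetic above can be sketched in a few lines (a purely illustrative calculation, not anything VIOS runs; the formula and numbers simply mirror the setups described):

```python
# Path count from an IBM i LPAR to a LUN, following the setups described above:
# total paths = VIOS count x FC adapters per VIOS x storage devices x controllers per device.
# Only the paths to the active controller of the active device are active.

def path_counts(vios: int, adapters_per_vios: int, devices: int, controllers: int = 2):
    """Return (total, active, passive) path counts for one LUN."""
    total = vios * adapters_per_vios * devices * controllers
    active = vios * adapters_per_vios  # active controller of the active device only
    return total, active, total - active

print(path_counts(1, 1, 1))  # simple setup:        (2, 1, 1)
print(path_counts(1, 2, 1))  # two FC adapters:     (4, 2, 2)
print(path_counts(2, 2, 1))  # dual VIOS:           (8, 4, 4)
print(path_counts(2, 2, 2))  # HyperSwap, 2 devices: (16, 4, 12)
```

The last line reproduces the 16 paths of my installation: 4 active and 12 passive.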
Regarding the VIOS reboot time, we are talking here about an NPIV configuration, so the virtual-to-physical FC adapter mapping does not have any significant impact.
Hope that I replied to your question and did not add to the confusion :-)
------------------------------
Marc Rauzier
------------------------------
Original Message:
Sent: Mon September 09, 2024 02:39 AM
From: Satid S
Subject: VIOS reboot time after update
Dear Marc
>>>> And got up to 16 paths on an installation I designed several (6 or 7) years ago. 4 were active and 12 were passive. <<<<
Thanks for sharing your experience, but I'm quite curious as to what the "compelling reason" is to configure 12 passive paths instead of just 4. Is the specific use of HyperSwap for IBM i the cause of configuring 12 passive paths? If so, what is the reason for configuring these extra passive paths?
------------------------------
Satid S
Original Message:
Sent: Sat September 07, 2024 01:33 PM
From: Marc Rauzier
Subject: VIOS reboot time after update
And got up to 16 paths on an installation I designed several (6 or 7) years ago. 4 were active and 12 were passive. The storage device was configured to provide HyperSwap functionality to IBM i PowerHA clusters using LUN level switching, based on NPIV.
It was based on the redbook "IBM Storwize HyperSwap with IBM i" and was using the same configuration as you show, i.e., for each IBM i LPAR, 2 VIOS, 2 distinct physical FC cards on each, 2 distinct FC switches, and on each V7k (I don't remember the exact model), two controllers with two distinct FC ports.
At that time, we were running V7R3 and I wanted to confirm that IBM supported this setup. I never got an official statement, but when carefully reading the Maximum Capacities documentation, one can see that there is a small difference between V7R2 and V7R3:
V7R2 (and older): Maximum number of connections to a logical unit or disk unit in an external storage server or Virtual I/O Server environment: 8
V7R3 (and newer): Maximum number of active connections to a logical unit or disk unit in an external storage server or Virtual I/O Server environment: 8
The setup was using only 4 active paths (so fewer than 8) in any case, but the only response I got was something like "We always do our best to help customers", and I was an IBMer at the time :-)
------------------------------
Marc Rauzier
Original Message:
Sent: Fri September 06, 2024 07:52 AM
From: Robert Berendt
Subject: VIOS reboot time after update
What? Eight paths to disk is not the common way to do it?
                        Display Disk Path Status

 Entry  ASP  Unit  Serial Number  Type  Model  Resource Name  Path Status
     1    1     1  Y438C4000054   2145    050  DMP060         Active
     2             Y438C4000054   2145    050  DMP117         Active
     3             Y438C4000054   2145    050  DMP175         Passive
     4             Y438C4000054   2145    050  DMP001         Active
     5             Y438C4000054   2145    050  DMP059         Passive
     6             Y438C4000054   2145    050  DMP118         Passive
     7             Y438C4000054   2145    050  DMP176         Active
     8             Y438C4000054   2145    050  DMP002         Passive
     9    1     2  Y438C4000071   2145    050  DMP292         Active
 ...
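A quick way to sanity-check a listing like that is to tally the last column per screen capture (an illustrative sketch; it assumes the screen text was pasted into a string, here just the 8 paths of unit 1 from the excerpt above):

```python
# Tally active vs. passive paths from a captured "Display Disk Path Status"
# screen. Each data line ends with the path status, so counting the last
# field of every line is enough for a rough check.
from collections import Counter

screen = """\
1 1 1 Y438C4000054 2145 050 DMP060 Active
2     Y438C4000054 2145 050 DMP117 Active
3     Y438C4000054 2145 050 DMP175 Passive
4     Y438C4000054 2145 050 DMP001 Active
5     Y438C4000054 2145 050 DMP059 Passive
6     Y438C4000054 2145 050 DMP118 Passive
7     Y438C4000054 2145 050 DMP176 Active
8     Y438C4000054 2145 050 DMP002 Passive
"""

def count_path_status(text: str) -> Counter:
    """Count lines whose last column is Active or Passive."""
    tally = Counter()
    for line in text.splitlines():
        fields = line.split()
        if fields and fields[-1] in ("Active", "Passive"):
            tally[fields[-1]] += 1
    return tally

print(count_path_status(screen))  # half active, half passive, as expected
```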
Let's see...
Two LPARs of VIOS.
Two 2-port FC cards per VIOS LPAR.
Two fiber channel switches between the Power system and the SAN.
Add a couple of ports on the SAN...
I've done VIOS maintenance, and FC switch maintenance, and kept running. And I've upgraded the SAN OS midday, midweek.
------------------------------
Robert Berendt IBMChampion
Business Systems Analyst, Lead
Dekko
Fort Wayne
260-599-3160
Original Message:
Sent: Thu September 05, 2024 04:29 AM
From: Satid S
Subject: VIOS reboot time after update
Dear Andrey
>>>> I had VIOS with ca. 8000 paths to disks (thanks to EMC). The reboot took ca. 30-40 minutes. <<<<
This is such an interesting and surprising fact to hear about.
>>>> 8 paths per disk <<<<
Did you ask EMC if 4 paths per disk could possibly be configured instead? If I were the customer, I would adamantly have asked for an explanation of why 8 paths were needed, because I did not want to waste my investment without knowing whether I received any particularly special benefit. I cannot see how 8 paths per disk would deliver any special benefit over 4 paths. Did EMC explain why specifically 8 paths were needed?
------------------------------
Satid S
Original Message:
Sent: Wed September 04, 2024 05:26 PM
From: Andrey Klyachkin
Subject: VIOS reboot time after update
Yes, AFAIR 8 paths per disk, ca. 1000 disks mapped via VSCSI to client LPARs. Because of the fine EMC drivers, you have one device for each path plus one additional device for the disk. Instead of one device you have 9 devices, and if you have 100 disks, you will have 900 devices. If each device takes 0.5 seconds in cfgmgr, the whole boot time increases by 450 seconds, i.e. ca. 7-8 minutes.
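Andrey's arithmetic can be written out explicitly (just the figures from his example; the 0.5 s per device is his rough estimate, not a benchmark):

```python
# One device per path plus one device for the disk itself,
# times a rough 0.5 s of cfgmgr time per device.
paths_per_disk = 8
devices_per_disk = paths_per_disk + 1   # 9 devices instead of 1
disks = 100
devices = disks * devices_per_disk      # 900 devices for 100 disks
extra_seconds = devices * 0.5           # 450 s of extra boot time
print(devices, extra_seconds / 60)      # → 900 7.5
```

At ca. 1000 disks, the same per-device cost scales the boot delay into the tens of minutes, which is consistent with the 30-40 minute reboots mentioned earlier in the thread.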
------------------------------
Andrey Klyachkin
https://www.power-devops.com
Original Message:
Sent: Wed September 04, 2024 05:40 AM
From: José Pina Coelho
Subject: VIOS reboot time after update
8000 is a lot. 8 paths per disk?
Are you mapping them directly to VSCSI or using an SSP ?
------------------------------
José Pina Coelho
IT Specialist at Kyndryl
Original Message:
Sent: Wed September 04, 2024 02:56 AM
From: Andrey Klyachkin
Subject: VIOS reboot time after update
and number of disks and all other devices. I had VIOS with ca. 8000 paths to disks (thanks to EMC). The reboot took ca. 30-40 minutes.
------------------------------
Andrey Klyachkin
https://www.power-devops.com
Original Message:
Sent: Tue September 03, 2024 01:15 PM
From: Russell Adams
Subject: VIOS reboot time after update
On Tue, Sep 03, 2024 at 05:10:02PM +0000, Justin Francis via IBM TechXchange Community wrote:
> And normally, how much time does a VIOS reboot takes after the update?
That can vary based on the speed of the system, the latency of the
boot disks, the number of PCI cards, ports, and what they connect to.
Ideally 3-5 minutes.
Each disconnected HBA port can add 60 seconds, depending on the model.
No need to do a full power off, just reboot the VIO LPAR. That's faster.
------------------------------------------------------------------
Russell Adams Russell.Adams@AdamsSystems.nl
Principal Consultant Adams Systems Consultancy
https://adamssystems.nl/
Original Message:
Sent: 9/3/2024 1:10:00 PM
From: Justin Francis
Subject: RE: VIOS reboot time after update
Thank you so much Russell!!
And normally, how much time does a VIOS reboot take after the update?
Regards
Justin
------------------------------
Justin Francis
Original Message:
Sent: Tue September 03, 2024 01:02 PM
From: Russell Adams
Subject: VIOS reboot time after update
Justin,
If you have a single VIOS, you should take a full outage of all LPARs
to upgrade VIOS. This is the safest option.
Thanks.
On Tue, Sep 03, 2024 at 04:53:05PM +0000, Justin Francis via IBM TechXchange Community wrote:
> Hi All,
>
>
> I am planning to perform VIOS update from 3.1.3.14 to 3.1.4.41. Could anyone please let me know, how much time does a reboot take to apply the updates?
>
>
> Also, if it doesn't take that much time, could I leave the client systems (IBM i's, AIX and Linux) online to let them pick the disk and network connection after VIOS boots up?
>
>
> Note: I have single VIOS environment.
>
>
> Regards
>
>
> Justin Francis
>
>
> ------------------------------
> Justin Francis
> ------------------------------
------------------------------------------------------------------
Russell Adams Russell.Adams@AdamsSystems.nl
Principal Consultant Adams Systems Consultancy
https://adamssystems.nl/
Original Message:
Sent: 9/3/2024 12:53:00 PM
From: Justin Francis
Subject: VIOS reboot time after update
Hi All,
I am planning to perform VIOS update from 3.1.3.14 to 3.1.4.41. Could anyone please let me know, how much time does a reboot take to apply the updates?
Also, if it doesn't take that much time, could I leave the client systems (IBM i's, AIX and Linux) online to let them pick the disk and network connection after VIOS boots up?
Note: I have single VIOS environment.
Regards
Justin Francis
------------------------------
Justin Francis
------------------------------