PowerVM


Viosupgrade - One click migration to VIOS 3.1

By MALLESH LEPAKSHAIAH posted Tue July 28, 2020 06:49 AM

  

1. Introduction

IBM introduced a new version-release of the Virtual IO Server (VIOS), VIOS 3.1, based on AIX 7.2 (TL-03), in November 2018.  VIOS 3.1 improvements focused primarily on the following areas:

  • Security & Resiliency
  • Cloud ready
  • Modernization
  • Performance

More details of this new release are available in the VIOS 3.1 Overview blog.

Given that VIOS 3.1 is rebased to AIX 7.2 (earlier VIOS releases were based on AIX 6.1), the process of updating existing VIO Servers to VIOS 3.1 involves a “new and complete overwrite install” under the covers. Much of the focus for this deliverable was on providing tooling to assist in a simple and reliable migration process.

 

In order to exploit all the new features introduced in VIOS 3.1, customers need to migrate to this version. Updating VIO Servers to newer versions has always been challenging, because it involves backing up the complete configuration and then reconfiguring it after the update. This consumes a lot of time and effort, and any failure in the process increases the downtime of the production VMs served by the VIO Servers. As the complexity of the configuration and the number of VIO Servers increase, so do the time and effort required. To simplify this process and enable a smooth migration, a new tool called “viosupgrade” has been introduced along with VIOS 3.1. This blog provides details on the viosupgrade tool, its usage, and the common problems that users might face during the update process, along with solutions to those problems.

2. VIOS 3.1 Upgrade process

The VIO Server upgrade process differs from other OS upgrade processes because the VIO Server plays a central role in IO virtualization. IO virtualization involves several configurations of virtual and physical devices, virtual-to-physical device mappings, and the hosting of virtual and physical device drivers. In addition to virtual IO capabilities, the VIOS is the platform for Shared Storage Pool (SSP) clusters, Live Partition Mobility (LPM), Remote Restart, and virtual machine disaster recovery and high availability capabilities. It is very important for customers to keep the complete VIOS metadata during the upgrade process and carry the complete configuration forward to the new VIOS version after the installation. Since the VIOS installation performed as part of the upgrade process wipes out the entire VIOS metadata, the metadata must be backed up prior to the installation and restored after it. Hence, the following three steps are recommended for general VIOS upgrades (a command sketch follows the list):

  1. VIOS metadata backup through the “viosbr -backup” interface
  2. Installation of the new desired version of VIOS
  3. Restoration of the VIOS metadata through the “viosbr -restore” interface
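
For illustration, a minimal sketch of steps 1 and 3 on the VIOS command line for a non-SSP configuration (the backup file name here is an example; SSP clusters need the additional -clustername options shown later in this blog):

Step 1: viosbr -backup -file vios_premigration
(saves the metadata as /home/padmin/cfgbackups/vios_premigration.tar.gz)

Step 2: install the new VIOS level

Step 3: viosbr -restore -file vios_premigration.tar.gz
(re-creates the virtual device configuration from the backup)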

Note: The VIO Server has a CLI interface called “updateios” for which backup and restore of VIOS metadata is not necessary. However, “updateios” supports only VIOS TL update operations, not upgrade operations between VIOS major versions such as version 2 to version 3. A VIOS upgrade between major versions is always a new and complete installation, so it is necessary to take the VIOS metadata backup prior to the upgrade process.

3. VIOS Upgrade Methods

The following methods are available to update or upgrade a VIOS:

3.1 updateios

This command is useful for updating VIOS Technology Levels within the same VIOS version, such as within the 2.2.x.x or 3.1.x.x levels. However, it cannot be used for migrations across versions, such as from 2.2.x.x to 3.1.x.x. For more details, refer to the updateios command.
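
For reference, a typical TL update with updateios looks like the following; the directory is an example location containing the downloaded update filesets:

Command: updateios -accept -install -dev /home/padmin/update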

Note: This is not supported for migration to 3.1.

3.2 Manual upgrade

This method involves user manually taking the VIOS metadata backup prior to the installation and restoring the metadata after the installation. For more details refer to Manual Backup-Install-Restore – SSP and Manual Backup-Install-Restore – non-SSP.

3.3 viosupgrade – new tool

The VIOS upgrade tool “viosupgrade” automates the entire VIOS upgrade process. The tool allows installation of VIOS from NIM as well as from the VIOS itself. The viosupgrade tool is available in two variants:

  1. NIM – viosupgrade : NIM based installations
  2. VIOS – viosupgrade : VIOS Auto installation (Non-NIM based installation)

 

4. viosupgrade – NIM (Network Installation Manager)

This section describes VIOS installations using the viosupgrade command from the NIM server.

4.1 NIM Pre-requisites for VIOS Upgrade installations

The following directory and environment setup is mandatory on the NIM server for VIOS upgrades through the viosupgrade command.

4.1.1. Directory Setup

Create the following directory tree structure, if it does not already exist, to store the different sets of files:

/export/nim – used to store bosinst_data and ios_backup resources

/export/mksysb – the standard location used to store mksysb images

/export/spot – the standard location used to store SPOT images
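
On the NIM master, this tree can be created in one step:

Command: mkdir -p /export/nim /export/mksysb /export/spot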

Note: This directory structure is recommended but not mandatory. The viosupgrade tool works even if the directory structure is different, but it expects the /export/nim directory for creating backups.

4.1.2. Environment Setup

The following steps describe the environment setup on the NIM server.

  1. Install the necessary remote management fileset – bos.sysmgt.nim.master – and configure the NIM environment. This fileset is necessary for the NIM server to communicate with the VIO Servers. For more details on setting up the NIM environment, refer to nim master setup and basic config.
  2. Install dsm.core fileset, if not installed already.
  3. Define HMC

Skip this step if HMC details are already defined. You can verify this by running the “lsnim -t hmc” command. The HMC can be defined via the command line or through smitty. Follow the steps below:

  • Create a password file for ssh passthrough to the HMC as follows:
    • mkdir -p /export/dsm/passwd
    • dpasswd -f /export/dsm/passwd/<hmc_name>.pswd -P <hmc_password> -U <hmc_user>
    • Example: dpasswd -f /export/dsm/passwd/hmcviodev1.pswd -P abc123 -U hscroot
  • Define HMC Management object by one of the following methods:
  1. Command line Method:

nim -o define -t hmc -a if1="find_net <hmc_FQDN> 0" -a passwd_file=/export/dsm/passwd/<hmc_name>.pswd <hmc_name>

Example: nim -o define -t hmc -a if1="find_net hmcviodev1.pok.stglabs.ibm.com 0" -a passwd_file=/export/dsm/passwd/hmcviodev1.pswd hmcviodev1

  2. Smitty method:

smitty nim_mgmt_obj —> Define a Management Object —> Select “hmc” —> update details of hostname of HMC, password file (/export/dsm/passwd/<hmc_name>.pswd), network details (if needed).


4. Define the CEC

You can skip this step if you have already defined the CEC. You can verify this by running the command “lsnim -t cec”.

  • All the CECs connected to the HMC defined in step 3 can be defined using the following command:

nimquery -a hmc=<hmc_name> -d

  • A single CEC can be defined using the following command:

nim -o define -t cec -a hw_serial=<serial_num> -a hw_type=<type> -a hw_model=<model_num> -a mgmt_source=<hmc_name> <CEC_name>

Example: nim -o define -t cec -a hw_serial=1000F2P -a hw_type=8233 -a hw_model=E8B -a mgmt_source=hmcviodev1 8233-E8B_1000F2P

5. Define VIOS

If the VIOS that needs to be installed is already defined, you can skip this step.
You can check whether the VIOS is defined by running the “lsnim -t vios” command.

Command-line method:

nim -o define -t vios -a if1="find_net <vios_name> <mac_address>" -a mgmt_source=<cec_name> -a identity=<lpar_id> <vios_name>

Example: nim -o define -t vios -a if1="find_net vios1 98ABEHGF" -a mgmt_source=8233-E8B_1000F2P -a identity=11 vios1

Smitty method:

smitty nim_mgmt_obj —> Define a Management Object —> Select “vios” —> Provide the hostname of the machine —> Select its CEC, give Identity as the lpar_id, provide network details (if the network is not already defined), and provide the Network Adapter Hardware Address (MAC address)

Note: If the primary IP is configured on an SEA (Shared Ethernet Adapter), then specify a different physical network interface that is connected to the network. This helps NIM establish communication with the VIOS to complete the installation.
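
Once the VIOS object is defined, you can verify its attributes from the NIM master (the object name here follows the example above):

Command: lsnim -l vios1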


6. Define ios_mksysb

Copy the VIOS mksysb images to the /export/mksysb directory. The following methods are available to create the ios_mksysb resource.

Command-line method:

nim -o define -t ios_mksysb -a location=/export/mksysb/<build_name>_ios_mksysb -a server=master <build_name>_ios_mksysb

Example: nim -Fo define -t ios_mksysb -a server=master -a location=/export/mksysb/1820A_aix72L_VIO.img 1820A_aix72L_VIO

Smitty method:

smitty nim —> Perform NIM Administration Tasks -> Manage Resources -> Define a Resource -> Choose ios_mksysb from the list -> Provide ios_mksysb name, select “master” for “Server of Resource”, Provide mksysb image file location /export/mksysb/<mksysb image name> for “Location of Resource”

Note: VIOS mksysb images can be created using different methods explained in later sections of this blog.

7. Define spot

Copy the SPOT image to the /export/spot directory. Define the spot resource by using one of the following methods:

Command-line method:

nim -o define -t spot -a server=master -a location=/export/spot/<build_name>_ios_mksysb_spot -a source=<build_name>_ios_mksysb <build_name>_ios_mksysb_spot

Example: nim -o define -t spot -a server=master -a location=/export/spot/1820A_aix72L_VIO_spot -a source=1820A_aix72L_VIO 1820A_aix72L_VIO_spot

Smitty method:

smitty nim -> Perform NIM Administration Tasks -> Manage Resources -> Define a Resource -> Choose spot from the list -> Provide spot name, select “master” for “Server of Resource”, select respective ios_mksysb from the list for “Source of Install Images”, Provide “/export/spot/<spot_name>” for “Location of Resource”


8. Define file_res for resource definitions

A file_res resource is a directory on the NIM server where resource files can be stored. When the resource is allocated to a client, a copy of the directory contents is placed on the client at the location specified by the dest_dir attribute. It should be defined if any specific files from the current VIOS rootvg need to be copied back to the VIOS after installation.
The following steps define the file_res resource:

  1. Create a directory on the NIM server:

             mkdir -p /export/nim/viosupgrade/copyfiles_<vios_name>

  2. Define the file_res resource for this VIOS. Here, “location” represents the directory on the NIM master where the files copied from the current VIOS rootvg are stored, and “dest_dir” represents the directory on the VIOS to which these files are copied after the installation:

nim -o define -t file_res -a location=/export/nim/viosupgrade/copyfiles_<vios_name> -a dest_dir=/home/padmin/backup_files -a server=master file_res_<vios_name>

  3. Create the corresponding directories and copy the files from the current VIOS rootvg to the NIM server:

Command 1: mkdir -p /export/nim/viosupgrade/copyfiles_<vios_name>/etc

Command 2: mkdir -p /export/nim/viosupgrade/copyfiles_<vios_name>/home/padmin

Command 3: scp -r root@<vios_name>:/etc/environment /export/nim/viosupgrade/copyfiles_<vios_name>/etc

Command 4: scp -r root@<vios_name>:/home/padmin/smit.log /export/nim/viosupgrade/copyfiles_<vios_name>/home/padmin

 

4.2 viosupgrade – usage

viosupgrade has two types of install options in NIM:

4.2.1. Bos installation – bosinst option
This method performs a complete reinstallation on the provided disk(s). The VIOS will be down during the installation process. Ensure that the disks have at least 30 GB of space.
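
As a quick sanity check before starting (assuming hdisk1 is the intended target), you can verify the disk size from the root shell (oem_setup_env) on the VIOS; getconf reports the size in MB, so a value of 30720 or higher meets the requirement:

Command: getconf DISK_SIZE /dev/hdisk1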

  • bosinst - SSP

Command: viosupgrade -t bosinst -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk> -c

Example: viosupgrade -t bosinst -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1 -c

  • bosinst - non-SSP

Command: viosupgrade -t bosinst -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk>

Example: viosupgrade -t bosinst -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1

 

  • To specify multiple disks for installation

Command: viosupgrade -t bosinst -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk>:<hdisk>

Example: viosupgrade -t bosinst -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1:hdisk2

 

  • If you want to specifically backup and copy any files to the VIOS after the installation, you can use the following command:

Command: viosupgrade -t bosinst -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk> -e file_res_<vios_name>

Example: viosupgrade -t bosinst -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1 -e file_res_jaguar13

Note: This file_res resource can be created as described in section 4.1.2, step 8.

 

4.2.2. Alternate disk install – altdisk option

This method preserves the existing disk and its rootvg and installs on the alternate disk. The VIOS remains up and running during the installation process. The disk specified must be free (a quick check follows below). Ensure that the disk(s) have at least 30 GB of space.
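
To confirm that the target disk is free, you can list the free disks from the padmin prompt before starting; the output shows the disk name, PVID, and size in megabytes:

Command: lspv -free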

  • altdisk - SSP

Command: viosupgrade -t altdisk -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk> -c

Example: viosupgrade -t altdisk -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1 -c

 

  • altdisk - non-SSP

Command: viosupgrade -t altdisk -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk>

Example: viosupgrade -t altdisk -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1

 

  • To specify multiple disks for installation

Command: viosupgrade -t altdisk -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk>:<hdisk>

Example: viosupgrade -t altdisk -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1:hdisk2

  • If you want to specifically backup and copy any files to the VIOS after installation, you can use the following command:

Command: viosupgrade -t altdisk -n <hostname> -m <mksysb_image> -p <spot_name> -a <hdisk> -e file_res_<vios_name>

Example: viosupgrade -t altdisk -n vios1 -m vios_3.1.0.0 -p vios_3.1.0.0_spot -a hdisk1 -e file_res_jaguar13

Note: This file_res resource can be created as described in section 4.1.2, step 8.

 

4.2.3. To get the status of the installation:

viosupgrade -q <viosname>
Example:
viosupgrade -q vios1

For more details on syntax and usage refer to viosupgrade command.

 

Note (applicable to a VIOS that is part of an SSP cluster migrating to version 3.1): To upgrade to VIOS version 3.1 from 2.2.6.30 or later, the SSP cluster status must be “ON_LEVEL”. You can verify the status of the cluster in the “cluster -status -verbose” output. If the status is “UP_LEVEL”, your cluster nodes (VIO Servers) are not ready for migration to version 3.1.
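
For example, from the padmin prompt on any cluster node, the level fields can be checked directly (the grep filter here is illustrative; exact field names can vary slightly by level):

Command: cluster -status -verbose | grep -i level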

4.3 Supported Levels

The NIM master should be at AIX 7.2 TL 3. The following table shows the support for the viosupgrade tool in NIM, based on the current VIOS level and the target level to which the user wants to migrate:

 

Current Level        Target Level         viosupgrade -t bosinst    viosupgrade -t altdisk

2.2.6.32 or later    3.1.0.0 or later     Yes                       Yes

Below 2.2.6.32       3.1.0.0 or later     No                        No

(The table applies to both non-SSP and SSP VIO Servers.)

 

4.4 Common Configuration problems

4.4.1. NIM-VIOS communication error:
If you get the error “<hostname>: Unable to reach VIO from the NIM.”, it indicates that the VIOS (<hostname>) is not enabled for management by NIM. Run the following commands on the VIOS:

  • Run “remote_management -disable” at the padmin prompt.
  • Run “remote_management -interface <interface name> <NIM master name>” at the padmin prompt. This starts the nimsh daemon.
    • Example: remote_management -interface en2 safari10

Note: Ensure that the correct interface name is given. The interface provided should not be an SEA (Shared Ethernet Adapter).

  • Verify that the communication between NIM and the VIOS is working by running the following command on the NIM master: nim -o check <vios object name>
    • Example: if the vios object created is “VIOSA”, run “nim -o check VIOSA”. If the connection is good, the return value is 0.
  • Retry the operation after the above steps.
  • If the failure persists even after performing the steps above, follow the steps below:
  • Check /var/adm/ras/nimsh.log on the VIOS for errors such as “error: remote value passed, '<short_hostname>', does not match environment value <long_hostname>”. This indicates a hostname mismatch problem on the VIOS.
  • Verify the /etc/hosts file and make the hostname changes appropriately.
    • For example, if the following is the /etc/hosts entry:

9.114.248.148   safari10
modify it to
9.114.248.148   safari10.pok.stglabs.ibm.com safari10
Note: If you face any similar issues, check the logs at /var/adm/ras/nimsh.log on the VIOS for errors.

  • Retry the operation after fixing the hostname errors.

Note: VIOS password files are not preserved after installation. It is the user’s responsibility to handle the passwords appropriately.

5.  viosupgrade -  VIOS Auto installation

5.1 viosupgrade command

A VIOS can upgrade itself to higher VIOS levels by using the viosupgrade command on the VIOS itself.

This is supported only from VIOS 2.2.6.30 onwards. Unlike the NIM viosupgrade, only the altdisk method is supported here. For more details on syntax and usage, refer to the viosupgrade command.


To start the installation:

Command: viosupgrade -l -i <mksysb image> -a <hdisk>

Example: viosupgrade -l -i vios3.1_mksysb -a hdisk1

 

To check the status of the viosupgrade operation, run:

viosupgrade -l -q

To create a mksysb image file from ISO image files:

viosupgrade -I ISOImage1:ISOImage2 -w directoryPath
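
For example, with two downloaded DVD volumes (the file names and target directory are illustrative):

Example: viosupgrade -I dvdimage.v1.iso:dvdimage.v2.iso -w /home/padmin/mksysb_dir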



Note:

  • After installation, the previous rootvg disk(s) are renamed to “old_rootvg”. If you want to fall back to the previous version of the VIOS, set the bootlist (bootlist -m normal <old_rootvg diskname>) and reboot the VIOS (a sketch follows this list).
  • To find the list of free disks available for installation, run the “lspv -free” command.
  • If free disks are not available and the disks are not currently in use, you can run the “cleandisk -s <diskname>” command followed by “chpv -c <diskname>” to free up the disks. Be aware that these commands wipe out the entire data on the provided disk(s).
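
A minimal fallback sketch from the root shell (oem_setup_env), assuming old_rootvg is on hdisk0 (the disk name is an example):

Command 1: bootlist -m normal hdisk0
Command 2: shutdown -Fr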

5.2 Common problems/solutions:

5.2.1. VIOS is part of a cluster – error “Cluster state is not correct”

This indicates that the cluster is not in the right state. This error can occur when the upgrade is triggered on all the nodes of the cluster at the same time. The recommended method is to keep at least one operational active node in the cluster while the upgrade is running on the other nodes; that is, at most n-1 SSP cluster nodes can be upgraded at the same time.

Another possible cause is that the cluster services are down on that node. Check the status of the cluster services on the node and take appropriate action.

 

5.2.2. If viosupgrade is terminated abnormally in the middle of installation, the altinst_rootvg created during the installation will not allow future viosupgrade operations.

You can follow the steps below to clean up (remove or rename) the altinst_rootvg created during the installation:

  • To remove the "altinst_rootvg" volume group

    # lspv | grep rootvg

    hdisk0              123456789123455                     rootvg           active

    hdisk99999          123456789123456                     altinst_rootvg

    # alt_rootvg_op -X altinst_rootvg

    # chpv -C hdisk99999

  • To rename the "altinst_rootvg" volume group:

    # lspv | grep rootvg

    hdisk0              123456789123455                     rootvg           active

    hdisk99999          123456789123456                     altinst_rootvg

    # alt_rootvg_op -v new_altinst_name -d hdisk99999

    # lspv | grep new_altinst_name

    hdisk99999              123456789123456                     new_altinst_name    

Now you can restart the viosupgrade with the disks you want to install.

5.2.3. viosupgrade status shows “RESTORE FAILED”
If “viosupgrade -l -q” shows the status as “RESTORE FAILED”, you can perform a manual restore with the following commands:

For non-SSP:

viosbr -restore -file <BackupFilename>

For SSP:

viosbr -restore -file <BackupFilename> -clustername <clustername> -type net -curnode
viosbr -restore -file <BackupFilename> -clustername <clustername> -curnode

Note: Backup files are present in /home/padmin/cfgbackups. BackupFilename will be of the format <hostname>_filename.tar.gz

5.2.4. Restore Failure error

After the system upgrade, the viosupgrade status shows restore failed with the message “Atleast one of the PV backing devices from backup, is not restored”.

Follow the steps below to verify the issue:

  • Check the latest entries (based on timestamp) in “/home/ios/logs/restore_trace.out” for “rc=90”.
  • Check “/home/ios/logs/LPM/vsmig.log” for “rc=90”.

An rc=90 entry indicates a mismatch in the reserve policy of a disk before and after installation. In this case, the upgraded VIOS has the reserve policy for hdisk40 set to “single_path”, whereas before installation it was “no_reserve”. The restore process can re-map a device during restoration only if both policies are the same, so this mismatch resulted in the restore failure.


  • You can verify the reserve policy of the disk by running “devrsrv -c query -l hdisk40”. The following is a sample output:
Device Reservation State Information

==================================================

Device Name                     :  hdisk40

Device Open On Current Host?    :  NO

ODM Reservation Policy          :  SINGLE PATH RESERVE

Device Reservation State        :  NO RESERVE


This shows the current reservation policy as “SINGLE PATH RESERVE”, matching what was reported in the vsmig.log.

The errors explained above are common when multipathing software such as SDDPCM is installed on the VIOS before viosupgrade is triggered. Since the default mksysb does not contain this software (additional software that is not part of the base image), you might end up with these restore failures.

Follow the steps below to correct the restore failures:

1) Install the SDDPCM software, or run “chdev -l hdisk40 -a reserve_policy=no_reserve”.

This changes the reserve policy and lets the restore succeed without installing the SDDPCM software.

2) Run the “viosbr” command manually to restore the configuration. Refer to the viosbr man page for more details.

6. mksysb image creation

You can create mksysb images from DVD images or customized images with additional software (software which is not part of the base VIOS image).

6.1 VIOS mksysb image from the DVD image


1. DVD images come in two volumes, say dvdimage.v1.iso and dvdimage.v2.iso. These images are in a compressed format and cannot be browsed directly, so the volumes must be mounted using the loopmount command.

Command: loopmount -i /tmp/dvdimage.v1.iso -o "-V cdrfs -o ro" -m /mnt

where /tmp/dvdimage.v1.iso is the location of the first DVD volume and /mnt is the mount point.


2. Copy the mksysb image file located at /usr/sys/inst.images on the DVD to a file:
Command: cp -p /mnt/usr/sys/inst.images/mksysb_image /tmp/dvd1_2/mksysb_image

3. If there are more mksysb image files, concatenate their content to the single file created in step 2. Repeat this until all mksysb image files are concatenated.
Commands:
  • cat /mnt/usr/sys/inst.images/mksysb_image2 >> /tmp/dvd1_2/mksysb_image
  • cat /mnt/usr/sys/inst.images/mksysb_image3 >> /tmp/dvd1_2/mksysb_image
  • ………….
4. Now unmount the first volume and loopmount the second volume of the DVD.

Commands:

  • umount /mnt
  • loopmount -i /tmp/dvdimage.v2.iso -o "-V cdrfs -o ro" -m /mnt

where /tmp/dvdimage.v2.iso is the location of the second DVD volume.


5. Concatenate the content of all mksysb image files that may exist to the same file created in step 2.

Commands:

  • cat /mnt/usr/sys/inst.images/mksysb_image >> /tmp/dvd1_2/mksysb_image
  • cat /mnt/usr/sys/inst.images/mksysb_image2 >> /tmp/dvd1_2/mksysb_image
  • …………
6. Unmount the DVD

Command: umount /mnt

Note: If there are more DVD volumes, repeat steps 4 to 6.
The new mksysb file serves as the mksysb image file for installation of a VIOS. This newly created mksysb image can be used with the viosupgrade command on a VIOS or on a NIM master.
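
Putting steps 1 to 6 together, a minimal end-to-end sketch for a two-volume image where the first volume carries two mksysb pieces (paths and file names are illustrative):

Command 1: loopmount -i /tmp/dvdimage.v1.iso -o "-V cdrfs -o ro" -m /mnt
Command 2: mkdir -p /tmp/dvd1_2
Command 3: cp -p /mnt/usr/sys/inst.images/mksysb_image /tmp/dvd1_2/mksysb_image
Command 4: cat /mnt/usr/sys/inst.images/mksysb_image2 >> /tmp/dvd1_2/mksysb_image
Command 5: umount /mnt
Command 6: loopmount -i /tmp/dvdimage.v2.iso -o "-V cdrfs -o ro" -m /mnt
Command 7: cat /mnt/usr/sys/inst.images/mksysb_image >> /tmp/dvd1_2/mksysb_image
Command 8: umount /mnt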
 

6.2 VIOS mksysb image with additional software not part of base image

A customized VIOS image can be created by installing additional software that is not part of the base image, such as multipath drivers, security profiles, and performance monitoring tools. To create a customized image, install the VIOS on a VIOS partition using the IBM-provided image and then install the desired software applicable to your environment. The customized VIOS mksysb image can be created using the backupios command, and the same image can be used to deploy all the VIO Servers across the datacenter.

Command: backupios -mksysb -file <filename.mksysb>
For more details refer to backupios command.

 

7. Upgrade Precaution

It is the user’s responsibility to upgrade each VIOS in a manner that minimizes the impact to the environment; that is, redundant VIOS nodes should be taken down one at a time, with the second node remaining operational until the upgrade process completes successfully on the first node. It is also the user’s responsibility to shut down any client systems that have a rootvg disk mapped through vscsi devices. Failure to gracefully shut down these nodes could lead to an LPAR crash.

8. Unsupported Cases

8.1 Full cluster restore in a single instance is not supported

The viosupgrade tool supports backup and restore of a VIOS at the cluster level with the “-c” option. However, a full cluster restore in a single instance is not supported in this release. So, in the case of an SSP cluster, irrespective of the number of nodes, the user has to upgrade a few nodes at a time, keeping the cluster up and running on the other nodes. Failure to do so will result in losing cluster connectivity.

Example: In the case of a 4-node cluster, it is acceptable to upgrade 1, 2, or 3 nodes while keeping at least 1 node active in the cluster. After the successful installation of the first set of nodes, the user can choose to upgrade the second set of nodes in the cluster.

Note: This is applicable to a single-node cluster as well. So, in the case of a single-node cluster, it is not possible to use the viosupgrade tool to upgrade and restore the cluster; the user is expected to handle this manually. Alternatively, the user can add one more node to the SSP cluster before initiating the upgrade process on the first node.

8.2 Rootvg LV-backed vscsi disk backup/restore not supported

Currently, viosbr does not support vscsi disks created as logical volumes on the VIO Server’s rootvg disk(s). Hence, the viosupgrade tool cannot restore vscsi mappings if the LVs are created from rootvg. It is the user’s responsibility to move these vscsi LVs from rootvg to other VGs prior to initiating the upgrade process. Alternatively, users can initiate the installation on an alternate disk, preserving the current rootvg.

Use LVM commands (cplv) to migrate these vscsi LVs, as explained in the link below; a hedged sketch of the procedure follows the link.

http://www-01.ibm.com/support/docview.wss?uid=isg3T1000167
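
As an illustration of that procedure (the adapter, device, and volume group names below are hypothetical), the move consists of removing the existing vscsi mapping, copying the LV out of rootvg with cplv, and re-creating the mapping from the new volume group:

Command 1 (padmin): rmvdev -vtd vtscsi0
Command 2 (root shell): cplv -v datavg -y client_lv_new client_lv
Command 3 (padmin): mkvdev -vdev client_lv_new -vadapter vhost0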

Additional Information

  1. VIOS migration/upgrade methods
  2. VIOS 3.1
  3. Upgrading to VIOS 3.1

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn IBM PowerVM or IBM Community Discussions.
