PowerVM


Migration of two VIOs on one frame where EMC (SAN) ODM definitions need to be installed: a high-level plan

  • 1.  Migration of two VIOs on one frame where EMC (SAN) ODM definitions need to be installed: a high-level plan

    Posted yesterday

    Hello, I'm looking for validation of, or input on, this high-level plan to upgrade two VIO servers where we also need to install the EMC (SAN) ODM definitions.

    Hopefully someone has specific experience with an upgrade involving ODM definition installs. Here is the plan for your input. Thank you.

    ** Upgrade migration of vio-A and vio-B from VIOS 3.1.4.31 to VIOS 4.1.1.10 **
     
    Scenario: There is one frame on this EMC SAN/storage pool, and we need to install the latest EMC ODM definitions as part of the upgrade. vio-A uses half the frame's resources and vio-B uses the other half. The rootvg disks are mapped as PVs from the SAN; they are not in a pool and are not LUs. In order to create a mksysb containing the installed EMC ODM definitions, we plan to temporarily shut down vio-A and harvest its resources to create a 'temp-vio'. temp-vio will be assigned RAM, CPU, and network from vio-A. A fresh disk will be used for the OS install on temp-vio; it does not need to match the size of the vio-A rootvg PV, hdisk2. The mksysb image of temp-vio will then be used in the VIOS alt-disk migration to upgrade vio-A and vio-B.
     
     
     
    * [ ] Clear the error report on a test LPAR within the vio-A/vio-B cluster
     
    * [x] Request a disk be allocated to vio-A from the SAN, the exact same size as hdisk2 (the vio-A rootvg disk).
    ``` 
    $ lspv -size hdisk2
    153600
    ```
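    Once the SAN team maps the new disk, a quick sanity check like the sketch below can confirm it is usable before the alt-disk step. This is a hedged example: the sizes are hard-coded from the `lspv -size` output above, and on the real VIOS you would substitute the values reported for each disk.

    ```
    # Hedged sketch: verify the SAN-allocated disk is at least as large as the
    # current rootvg disk. Sizes in MB, as reported by `lspv -size`.
    src_mb=153600   # lspv -size hdisk2 on vio-A (from the post)
    new_mb=153600   # lspv -size of the newly mapped disk (assumed value)
    if [ "$new_mb" -ge "$src_mb" ]; then
      verdict="OK: new disk is large enough for alt-disk use"
    else
      verdict="ERROR: new disk is smaller than the rootvg source"
    fi
    echo "$verdict"
    ```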
     
    * [ ] Needed by the SAN team:
      * `for i in fcs0 fcs1 fcs2 fcs3; do lscfg -vpl $i | grep Network; done`
     ```
    Network Address.............################ 
    Network Address.............################
    Network Address.............################
    Network Address.............################
    ```
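    If the SAN team only wants the bare WWPNs, something like the sketch below can strip the label and dot padding. The sample line is a stand-in for real `lscfg -vpl fcsX` output, and the WWPN shown is hypothetical.

    ```
    # Hedged helper: extract just the WWPN from a `lscfg -vpl fcsX` output line.
    # The sample text below substitutes for real output (hypothetical WWPN).
    sample='        Network Address.............10000090FA123456'
    wwpn=$(echo "$sample" | sed -n 's/.*Network Address\.*\(.*\)/\1/p')
    echo "$wwpn"
    ```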
     
    * [ ] Media in place on the NFS resource:
       * //vio/... is the spot for the 3.1.4.60 BFF files (for doing VIO updates)
       * //iso/vio holds the v4.1.1.10 VIOS ISO, used to create the EMC golden image (mksysb) along with the EMC definitions
       * EMC ODM definitions: DellEMC_AIx6.3.0.2.tar.Z
     
    * Stop cluster services (CAA) on vio-A
       * [ ] On the DBN: `# clstartstop -stop -n clustername -m vio-A`
    * [ ] Shut down vio-A via the HMC
     
    * [ ] Deploy a net-new VIOS, 'temp-vio'
      * [ ] Install the 4.1.1.10 ISO (obtained as media from IBM) as a new VIO server
      * [ ] Install fix packs? (check FLRT)
      * [ ] Assign resources to temp-vio from vio-A
         * [ ] CPU, RAM, network
      * [ ] Boot and log in to temp-vio and establish TCP/IP networking
     
    * [ ] Deploy the EMC ODM definitions on temp-vio
      * [ ] DellEMC_AIx6.3.0.2.tar.Z
      * [ ] The ODM fileset package supports EMC storage arrays for AIX versions 6.1, 7.1, and 7.2
          * [ ] Confirm whether that means the client LPARs themselves or the VIOS they run on; we have AIX 7.3 LPARs running on some servers.
     
    * [ ] Create 'golden image' mksysb file from temp-vio
      * [ ] backupios command: `$ backupios -file filename.mksysb -mksysb`
      * [ ] Write image to NFS mount
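    The two steps above can be sketched as a small wrapper that refuses to run `backupios` unless the NFS target is actually mounted. This is a hedged example: `NFS_MNT` and the image filename are assumptions, not paths from the plan, and `DRY_RUN=1` just prints the command instead of running it.

    ```
    # Hedged sketch of the golden-image step. NFS_MNT and IMAGE are assumed names.
    DRY_RUN=1   # set to 0 on the real temp-vio
    NFS_MNT=/mnt/nfs
    IMAGE="$NFS_MNT/temp-vio_golden.mksysb"
    planned="backupios -file $IMAGE -mksysb"
    if [ "$DRY_RUN" -eq 1 ]; then
      echo "would run: $planned"
    elif mount | grep -q " $NFS_MNT "; then
      $planned
    else
      echo "ERROR: $NFS_MNT is not mounted" >&2
      exit 1
    fi
    ```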
     
    * [ ] In the HMC, shut down temp-vio
    * [ ] Reallocate resources to vio-A
          * CPU, RAM, networking
    * [ ] Start up vio-A from the HMC
     
    * [ ] Run custom script to back up vio-A config files
     
    ** VIOUPDATE STEPS **
     
    * Add vio-A back into the cluster and start cluster services on vio-A
       * [ ] `# cluster -addnode -clustername clustername -hostname vio-A`
       * [ ] On the DBN: `# clstartstop -start -n clustername -m vio-A`
     
    * [ ] Back up the vio-A metadata by using `viosbr -backup`
     
    * Stop cluster services on vio-A
       * [ ] On the DBN: `# clstartstop -stop -n clustername -m vio-A`
     
    * The mksysb is used to migrate vio-A (log in via the console)
      * [ ] `viosupgrade` command and syntax
    * [ ] Restore the vio-A metadata by using `viosbr -restore`
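    A dry-run sketch of the migration and restore commands is below. The flags follow the documented `viosupgrade` alt-disk mode, but the mksysb path, target disk name, and `viosbr` backup name are all assumptions; verify the exact syntax against your VIOS level before running anything.

    ```
    # Hedged dry-run of the alt-disk migration and metadata restore.
    DRY_RUN=1
    MKSYSB=/mnt/nfs/temp-vio_golden.mksysb   # golden image from temp-vio (assumed path)
    ALTDISK=hdisk3                           # SAN disk allocated earlier (assumed name)
    upgrade_cmd="viosupgrade -l -i $MKSYSB -a $ALTDISK"
    restore_cmd="viosbr -restore -file vio-A_backup"   # hypothetical backup name
    for cmd in "$upgrade_cmd" "$restore_cmd"; do
      if [ "$DRY_RUN" -eq 1 ]; then echo "would run: $cmd"; else $cmd; fi
    done
    ```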
     
    * [ ] ssh login to the DBN node of the cluster (vio-B): `cluster -status | grep -p DBN`
     
    * Start cluster services on vio-A
       * [ ] On the DBN: `# clstartstop -start -n clustername -m vio-A`
        
    * Repeat the VIOUPDATE STEPS for vio-B.
         * [ ] Create a mksysb backup for vio-B first
         * [ ] Follow the steps above for VIOUPDATE STEPS, except that vio-B is already in the cluster.
         
    * [ ] Check for error reports on lpar-A
     
    * Rename PVs so the names are consistent across vio-A, vio-B, and the other servers in the cluster. Use the disk-rename script.
      * [ ] `./pv_rename.sh`
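    For reference, a script like pv_rename.sh might boil down to something like the sketch below, driving AIX `rendev` from an old-to-new name map. The map entries here are purely illustrative, and `DRY_RUN=1` prints the commands rather than renaming anything.

    ```
    # Hedged sketch of a pv_rename.sh-style loop: rename hdisks with rendev so
    # the names match the peer VIOS. The old:new map is purely illustrative.
    DRY_RUN=1
    map="hdisk4:hdisk10 hdisk5:hdisk11"
    renamed=""
    for pair in $map; do
      old=${pair%%:*}
      new=${pair##*:}
      cmd="rendev -l $old -n $new"
      if [ "$DRY_RUN" -eq 1 ]; then echo "would run: $cmd"; else $cmd; fi
      renamed="$renamed $new"
    done
    echo "renamed:$renamed"
    ```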


    ------------------------------
    Gary Bowdridge
    ------------------------------


  • 2.  RE: Migration of two VIOs on one frame where EMC (SAN) ODM definitions need to be installed: a high-level plan

    Posted yesterday

    Hello

    Ask the SAN team to map a new disk, of the same size or larger, from the current VIO server boot disk, then perform the clone & migration on the new disk.

    Once the VIOS is upgraded on the new SAN disk and rebooted at the new VIOS level, proceed with installing the new EMC definitions. This way you keep the old_rootvg disk (at the old VIOS level) for rollback if you face any issue.
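    The rollback path described above could look roughly like the sketch below: point the bootlist back at the old_rootvg disk and reboot. This is a hedged example; the assumption that the pre-upgrade clone sits on hdisk2 is illustrative, and `DRY_RUN=1` only prints the command.

    ```
    # Hedged rollback sketch (assumption: old_rootvg clone is on hdisk2).
    DRY_RUN=1
    rollback_cmd="bootlist -m normal hdisk2"
    if [ "$DRY_RUN" -eq 1 ]; then
      echo "would run: $rollback_cmd && shutdown -restart"
    else
      $rollback_cmd && shutdown -restart
    fi
    ```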



    ------------------------------
    Anas AlSaleh
    IBM Power Systems Software Specialist
    Saudi Business Machines ( SBM )
    Riyadh
    ------------------------------