AIX

Connect with fellow AIX users and experts to gain knowledge, share insights, and solve problems.

  • 1.  Migration from EMC VMAX to IBM FS5300

    Posted Wed November 20, 2024 02:07 PM

    I have a question: is it still possible to use the migratepv method to migrate volumes on an AIX LPAR that currently has EMC VMAX LUNs using the EMC PowerPath MPIO device driver, to newly created IBM FS5300 LUNs that will be mapped to this AIX LPAR? (Will there not be a mismatch in storage protocols, such as NVMe, between the two different types of storage LUNs assigned to the AIX LPAR?)

    Would appreciate some feedback, as storage-side migration is not an option, even using the virtualization capability of the IBM FS5300.

    This has to be done as a live migration; we cannot afford downtime.

    Awaiting your response.



    ------------------------------
    Christie Lourens
    ------------------------------


  • 2.  RE: Migration from EMC VMAX to IBM FS5300

    Posted Wed November 20, 2024 02:14 PM
    On Wed, Nov 20, 2024 at 07:06:56PM +0000, Christie Lourens via IBM TechXchange Community wrote:
    > I have a question , is it still possible to use migratepv method to
    > migrate volumes on AIX LPAR that currently have EMC VMAX luns using
    > EMC Powerpath MPIO device driver , to newly created IBM FS5300 luns
    > that will be mapped to this AIX LPAR ( will there not be a mismatch
    > in storage protocols , like NVMe between the 2 different type storage
    > luns assigned to the AIX LPAR )

    Migratepv is not driver sensitive. You could encounter an issue if you
    try to mix 512-byte vs 4K-byte block PVs (i.e., NVMe). As long as the
    PV can be added to the VG successfully, you should be able to migrate.

    > Would appreciate some feedback , as storage migration is not an
    > option even if by using the virtualization capability of the IBM
    > FS5300 ,

    I'd suggest double checking the hdisk attributes and path count on the
    new storage before adding those LUNs to the VG.
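
    For example (hdisk4 here is just a placeholder for one of the new FS5300 disks):

      lsattr -El hdisk4 -a queue_depth -a reserve_policy -a algorithm
      lspath -l hdisk4
      lsmpio -l hdisk4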

    I also strongly prefer to do a mirror migration instead of
    migratepv. Syncvg supports throttling and parallel operation and can
    be safely stopped and resumed.

    The steps are roughly (a command sketch follows below):

    - Add new LUNs/PVs to host
    - Confirm PV attributes against existing (queue_depth, algo, reserve)
    - Extendvg to add new PVs to VG
    - Mirrorvg VG to new PVs without sync
    - Syncvg with throttle or parallel LP controls
    - Allow syncvg to finish; the VG should then have all LVs fully mirrored
    - Unmirrorvg VG against old PVs
    - Reducevg old PVs

    That can be done hot on a standalone host. This assumes there is
    room in the VG for double the PPs (i.e., raise the factor with
    chvg -t, or use a scalable VG). If this is a cluster or has any
    storage integration, it gets much more complex.
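
    A rough command sketch of those steps, assuming the VG is datavg, the
    old VMAX PVs are hdisk2/hdisk3 and the new FS5300 PVs are
    hdisk4/hdisk5 (all names are placeholders):

      extendvg datavg hdisk4 hdisk5
      mirrorvg -s datavg hdisk4 hdisk5   # create the mirror, no sync yet
      syncvg -P 4 -v datavg              # sync; -P sets parallel LPs, lower it to throttle
      lsvg -l datavg                     # all LVs should show 2 copies and be syncd
      unmirrorvg datavg hdisk2 hdisk3    # drop the copies on the old VMAX PVs
      reducevg datavg hdisk2 hdisk3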

    Thanks.

    ------------------------------------------------------------------
    Russell Adams Russell.Adams@AdamsSystems.nl
    Principal Consultant Adams Systems Consultancy
    https://adamssystems.nl/




  • 3.  RE: Migration from EMC VMAX to IBM FS5300

    Posted Wed November 20, 2024 02:24 PM

    I have done migratepv a number of times and don't see how this would be different.

    Another option, if you won't run into PP limits, is making a full mirror of the VG and exporting it once it is synced, then using just the new LUNs to make a new VG.

    Data that is not all on one SAN device will break SAN snapshots if you use those for backup; mirrorvg will avoid that.



    ------------------------------
    Alexander Pettitt
    ------------------------------



  • 4.  RE: Migration from EMC VMAX to IBM FS5300

    Posted Thu November 21, 2024 04:26 AM

    Both EMC VMAX and IBM FS5300 LUNs attach to AIX using SCSI over Fibre Channel, so there is no protocol issue here :) 

    AIX (or any other OS for that matter) won't know what the physical devices are at the back of the storage array; it has no idea if they are HDD, SSD, FlashCore Module or NVMe, as the LUN emulates a SCSI device.  To make it even more interesting, on the IBM FS family with Easy Tier, a single LUN can be spread over 3 different tiers in the back end of the array, where each tier could be a different type of physical device using a different type of RAID.

    The main thing is to make sure that you have the MPIO fileset installed for both types of storage so that multipathing works correctly.
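
    As a quick illustrative check (fileset names vary by PowerPath version, so treat these as examples), you can list the EMC filesets and confirm the FS5300 LUNs appear as 2145 MPIO disks handled by the default AIX PCM:

      lslpp -l | grep -i EMC
      lsdev -Cc disk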

    Phill.



    ------------------------------
    Phill Rowbottom
    ------------------------------



  • 5.  RE: Migration from EMC VMAX to IBM FS5300

    Posted Thu November 21, 2024 04:54 AM
    Edited by Carl Burnett Mon December 09, 2024 12:27 PM

    My response was crafted with AI assistance, tailored to provide detailed and actionable guidance for your query.

    Migrating volumes from EMC VMAX to IBM FS5300 on an AIX LPAR using migratepv is technically possible, even with the differences in storage systems and protocols, provided a few key conditions are met. 


    1. Compatibility Between Storage Systems

    • EMC VMAX with PowerPath MPIO: EMC PowerPath manages paths to the VMAX LUNs.
    • IBM FS5300: Uses NVMe or traditional FC/iSCSI LUNs with AIX MPIO or other supported drivers.

    AIX itself abstracts the underlying storage protocols and uses device drivers like PowerPath or MPIO to manage these devices. The storage protocol difference (FC for VMAX and NVMe or FC for FS5300) should not interfere with migratepv as long as both LUN types are accessible to the AIX system via their respective drivers.


    2. Prerequisites for migratepv

    To use migratepv for live migration, ensure:

    1. Both Storage LUNs Are Accessible:

      • The EMC VMAX LUNs (via PowerPath) and the IBM FS5300 LUNs (via MPIO or NVMe drivers) must be visible and accessible on the same AIX LPAR.
      • Use the lsdev -Cc disk and lspath commands to verify both sets of LUNs (see the verification sketch after this list).
    2. No Protocol Conflicts:

      • AIX supports having LUNs from multiple protocols (e.g., FC and NVMe) on the same system. The migratepv command works at the Logical Volume Manager (LVM) layer and does not depend on the underlying protocol as long as the disks are recognized.
    3. Ensure Proper Multipathing:

      • For EMC VMAX, PowerPath should handle the multipathing for the source LUNs.
      • For FS5300, ensure the IBM-provided AIX MPIO driver or NVMe configuration is correctly set up for the target LUNs.
    4. Sufficient Target Space:

      • The target FS5300 LUNs must have enough capacity to accommodate the data from the VMAX LUNs being migrated.
    5. No Open Issues:

      • Ensure no critical errors in errpt or pathing problems (lspath) before starting.
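
    A quick verification sketch covering the points above (disk names are placeholders):

      bash
      lsdev -Cc disk        # both the VMAX and FS5300 hdisks should be Available
      lspath                # all FS5300 MPIO paths should be Enabled
      lsattr -El hdisk4 -a queue_depth -a reserve_policy -a algorithm
      errpt                 # check for outstanding disk or path errors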

    3. Migration Process

    1. Identify the Volume Group and Disks: Use lsvg and lsvg -p <vg_name> to confirm the volume group and its disks.

    2. Add FS5300 LUNs to the Volume Group:

      bash
      extendvg <vg_name> <target_fs5300_lun>
    3. Perform migratepv: Run the migratepv command to move data:

      bash
      migratepv <source_emc_lun> <target_fs5300_lun>

      This will migrate the data from the source EMC VMAX LUN to the target FS5300 LUN.

    4. Monitor the Migration: Use lsvg -p <vg_name> and lslv <lv_name> to track the progress of the migration.

    5. Remove the Source LUNs: After confirming that all data has been migrated:

      bash
      reducevg <vg_name> <source_emc_lun>

      You can then unmap the source EMC LUNs if no longer needed.
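
      Once the old PVs are removed from the VG, a hedged cleanup sketch (device names are placeholders; with PowerPath, the hdiskpower pseudo-devices as well as their underlying hdisks need to be removed):

      bash
      lspv                    # the old VMAX disks should no longer show a VG
      rmdev -dl hdiskpower0   # remove the PowerPath pseudo-device definition
      rmdev -dl hdisk1        # remove the underlying hdisk before unmapping on the array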


    4. Considerations for NVMe Protocol

    • If the FS5300 LUNs use NVMe over Fabrics (NVMe-oF), ensure the appropriate AIX NVMe drivers are installed and configured.

    • Confirm NVMe device visibility using:

      bash
      lsdev -Cc nvme
    • Protocol differences should not matter at the LVM level, as migratepv operates on volume groups and logical volumes.


    5. Risks and Testing

    • Concurrency: Test migratepv on a non-critical volume or test system to ensure there are no conflicts between PowerPath and MPIO/NVMe drivers.
    • Performance Impact: Live migration can temporarily affect I/O performance. Monitor closely using topas, nmon, or similar tools.
    • Fallback Plan: Always have a fallback or rollback plan in case of unforeseen issues.

    Conclusion

    Yes, it is possible to use migratepv to migrate from EMC VMAX (via PowerPath) to IBM FS5300 (NVMe or FC), as long as both LUN types are visible and properly configured on the AIX LPAR. The protocol mismatch at the storage level does not affect the LVM operations, making this approach feasible for live migration with no downtime.



    ------------------------------
    Saif Sabri
    ------------------------------