Live KVM Migration with IBM FlashSystem providing high-availability and resiliency

By Rajsekhar Bharali

  


Introduction

In this blog, we will see how to live-migrate a VM that is hosted on a Fibre Channel LUN and has access to raw block devices between two KVM hypervisors. We will also maintain high availability at the storage layer by mapping both hypervisors to an IBM FlashSystem Storage Partition that is replicated with Policy-Based High Availability (PBHA).

This approach provides two layers of protection:

  1. Seamless on-demand migration of the VM’s compute and storage components from one KVM host in Site 1 to another in Site 2.
  2. Continuous replication of the VM’s storage files and devices across both sites within the PBHA environment, ensuring resilience and minimal downtime. If either site fails, a point-in-time copy of the VM’s storage files remains available.

KVM Live Migration


KVM transforms a Linux system into a bare-metal hypervisor capable of efficiently and securely hosting multiple virtual machines by integrating virtualization directly into the Linux kernel. Management of those virtual machines is handled by libvirt, described next.

Libvirt and Libvirtd


Libvirt is an open-source toolkit that acts as the virtualization management layer for KVM, handling VM operations such as starting, stopping, adding resources, cloning, and migration. Libvirtd is the libvirt daemon running on each KVM host; it exposes the API libraries that client tools such as the virsh CLI use to interact with the VMs.
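
As a quick check that virsh can reach libvirtd on a host, the following commands can be used (the hostname and the listed VM are illustrative, based on the setup described later in this blog):

[root@Source ~]# virsh uri
qemu:///system

[root@Source ~]# virsh list --all
 Id   Name        State
----------------------------
 -    Oracle_VM   shut off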

Setup Objects



Figure 1 Configuration Diagram


Figure 2 Storage Configuration for VMs

Site 1:

  • IBM FlashSystem 5200 named DC
  • Source KVM system hosted on a Lenovo ThinkSystem SR650 machine running the VM

Site 2:

  • IBM FlashSystem 5200 named NDC
  • Target KVM system hosted on a Lenovo ThinkSystem SR650

Site 1 and Site 2:

  • Configured in a Policy-Based High Availability (PBHA) replication relationship

For detailed guidance on setting up PBHA, refer to the following IBM resources:

  • High Availability - IBM Documentation
  • Unleashing the Power of IBM Storage FlashSystem Grid
  • Ensuring Business Continuity with Policy-Based Replication and Policy-Based HA

Host Objects Configuration


Within the IBM FlashSystem Storage Partition Dashboard, the host objects are configured as follows:

  • The Source KVM system’s Fibre Channel WWPNs are added to the host object labeled “Source” with affinity to Site 1.
  • The Target KVM system’s Fibre Channel WWPNs are added to the host object labeled “Target” with affinity to Site 2.

Host Cluster and Storage Volumes


Both host objects are grouped into a Host Cluster within the Storage Partition. For more information on Host Clusters, see Host Clusters - IBM Documentation.

Finally, map the required storage volumes to this Host Cluster. This mapping ensures that both KVM hosts within the cluster have identical access to the shared volumes, a critical requirement for successful live migration.
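
One practical way to confirm this identical access from the hosts themselves (my own suggestion, not part of the original walkthrough) is to compare the multipath WWIDs on both systems. The device names and WWID below are illustrative:

[root@Source ~]# multipath -ll | grep IBM
mpathad (36005076812ef81ab0800000000000034) dm-5 IBM,2145

[root@Target ~]# multipath -ll | grep IBM
mpathad (36005076812ef81ab0800000000000034) dm-7 IBM,2145

The multipath friendly names (mpathad) may differ between the hosts; it is the WWID in parentheses that must match on both sides.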

The resulting host objects, Host Cluster, and volume mappings can be reviewed on the Partition Dashboard view.


Migration Steps


a)     Creation of Storage Pool on KVM for VM deployment

  1. On the Source system, we need a storage pool to host the VM.
  2. Identify the device on which to create the storage pool from the multipath devices list.
  3. On the shared Fibre Channel LUN, create a volume group.
  4. This creates a volume group vg_vmdisks on the multipath device.
  5. Create a storage pool config file "pool-vmdisks.xml".
  6. Deploy and start the storage pool.
  7. On the Source, the storage pool is listed as active.
  8. As part of the migration pre-requisites, the storage needs to be defined on the Target too. Identify the devices under the multipath device and run the CLIs below (a command sketch for all of these steps follows this list):
    1. lvmdevices --adddev /dev/mapper/mpathad
    2. Perform “pvscan” and “vgscan”
    3. The “vgs” CLI should then display the VGs
  9. To define the storage pool on the Target system, run steps 5 and 6 on the Target system.
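
A minimal sketch of the commands for these steps is shown below. Since the original screenshots are not reproduced here, the multipath device name (mpathad) and the pool XML contents are assumptions based on the names used elsewhere in this blog; adjust them to your environment.

# Step 2: list the multipath devices and identify the shared FC LUN
[root@Source ~]# multipath -ll

# Steps 3 and 4: create the volume group on the shared LUN (device name assumed)
[root@Source ~]# vgcreate vg_vmdisks /dev/mapper/mpathad

# Step 5: contents of the storage pool config file "pool-vmdisks.xml"
<pool type='logical'>
  <name>vmdisks</name>
  <source>
    <device path='/dev/mapper/mpathad'/>
    <name>vg_vmdisks</name>
    <format type='lvm2'/>
  </source>
  <target>
    <path>/dev/vg_vmdisks</path>
  </target>
</pool>

# Step 6: deploy and start the storage pool
[root@Source ~]# virsh pool-define pool-vmdisks.xml
[root@Source ~]# virsh pool-start vmdisks
[root@Source ~]# virsh pool-autostart vmdisks

# Step 7: verify that the pool is active on the Source
[root@Source ~]# virsh pool-list
 Name      State    Autostart
-------------------------------
 vmdisks   active   yes

# Step 8: make the volume group visible on the Target
[root@Target ~]# lvmdevices --adddev /dev/mapper/mpathad
[root@Target ~]# pvscan
[root@Target ~]# vgscan
[root@Target ~]# vgs

# Step 9: define and start the same pool on the Target (reusing pool-vmdisks.xml)
[root@Target ~]# virsh pool-define pool-vmdisks.xml
[root@Target ~]# virsh pool-start vmdisks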

b)     VM deployment on the Storage pool

  1. Create a VM with no storage from the KVM GUI.


  2. After clicking “Create and edit”, add the disk on which you want to create the VM.

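
For reference, an equivalent command-line flow could look like the sketch below. This is my own illustration rather than the procedure used in this blog (which uses the GUI); the volume size, memory, vCPU count, ISO path, and OS variant are assumptions:

# Create a volume in the shared pool, then install the VM onto it
[root@Source ~]# virsh vol-create-as vmdisks VM_DISK 100G
[root@Source ~]# virt-install --name Oracle_VM --memory 16384 --vcpus 4 \
      --disk vol=vmdisks/VM_DISK,bus=virtio \
      --cdrom /var/lib/libvirt/images/rhel9.iso --os-variant rhel9.0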

c)     Post creation of the VM on the source

  • Verify that the VM is up and running and the required VM files are listed on the source system.

[root@Source ~]# virsh list

 Id   Name        State
-----------------------------
 11   Oracle_VM   running

[root@Source ~]# virsh vol-list vmdisks

 Name      Path
------------------------------------
 VM_DISK   /dev/vg_vmdisks/VM_DISK

  • Verify that the same VM disks are also seen under the target KVM system.

[root@Target ~]# virsh vol-list vmdisks

 Name      Path
------------------------------------
 VM_DISK   /dev/vg_vmdisks/VM_DISK
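
As an additional check (my own suggestion, not part of the original walkthrough), the storage pool itself can be confirmed as active on both hosts:

[root@Source ~]# virsh pool-list
 Name      State    Autostart
-------------------------------
 vmdisks   active   yes

[root@Target ~]# virsh pool-list
 Name      State    Autostart
-------------------------------
 vmdisks   active   yes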

d)     Adding more shared storage to the VM

If your VM requires additional storage that needs to be added as raw block devices:

  • Verify that the required device is available on both the Source and Target KVM systems.

On the source:

[root@Source ~]# ls /dev/disk/by-id/dm-uuid-mpath-36005076812ef81ab0800000000000034

/dev/disk/by-id/dm-uuid-mpath-36005076812ef81ab0800000000000034

On the target:

[root@Target ~]# ls /dev/disk/by-id/dm-uuid-mpath-36005076812ef81ab0800000000000034

/dev/disk/by-id/dm-uuid-mpath-36005076812ef81ab0800000000000034

  • Use the command below to add the raw block device:

[root@Source ~]#  virsh attach-disk Oracle_VM --source /dev/disk/by-id/dm-uuid-mpath-36005076812ef81ab0800000000000034 --target sda --persistent --driver qemu --subdriver raw --type disk

  • Verify the added disk in the VM configuration:

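
Since the original screenshot is not reproduced here, one way to perform this check is with “virsh domblklist”; the boot disk target (vda) shown below is an illustrative assumption:

[root@Source ~]# virsh domblklist Oracle_VM
 Target   Source
-------------------------------------------------------------------------
 vda      /dev/vg_vmdisks/VM_DISK
 sda      /dev/disk/by-id/dm-uuid-mpath-36005076812ef81ab0800000000000034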

e)     Trigger the VM migration using the "virsh" CLI

  • On the Source system, run the command below to trigger the live migration.

[root@Source ~]#  virsh migrate --live Oracle_VM qemu+ssh://9.63.217.88/system

root@9.63.217.88's password:
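
A commonly used variant (my addition; this blog shows the basic form) adds --verbose for progress reporting, --persistent to save the domain definition on the target, and --undefinesource to remove the definition from the source after a successful migration:

[root@Source ~]# virsh migrate --live --verbose --persistent --undefinesource Oracle_VM qemu+ssh://9.63.217.88/system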

  • Post-migration, the “virsh list” CLI no longer lists the migrated VM on the Source.

[root@Source ~]# virsh list

 Id   Name   State
--------------------

  • Verify that the VM has migrated to the Target. The “virsh list” CLI now lists the migrated VM.

[root@Target ~]# virsh list

 Id   Name                State
-----------------------------------
 1    rhel9.6-2025-9-26   paused
 2    AURO_R9             running
 9    Oracle_VM           running
