OpenShift Virtualization on IBM Cloud ROKS: a VMware administrator’s guide to migrating VMware VMs to OpenShift Virtualization

By Neil Taylor posted 15 hours ago

  

The hybrid cloud landscape continues to evolve, and IBM Cloud Red Hat OpenShift Kubernetes Service (ROKS) has taken a significant step forward with the recent availability of OpenShift Virtualization. This development marks a pivotal moment for enterprises looking to modernize their infrastructure while maintaining their existing virtual machine investments.

Red Hat OpenShift on IBM Cloud (ROKS) now supports the OpenShift Virtualization operator, which means you can run your virtual machine (VM) workloads on ROKS. See Installing the OpenShift Virtualization Operator on Red Hat OpenShift on IBM Cloud clusters.
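
The exact procedure is in the linked documentation; as a rough sketch of what the install boils down to, the operator is delivered through OLM and then activated with a HyperConverged custom resource. The namespace, channel, and resource names below are typical defaults rather than ROKS-specific values, so treat them as assumptions and follow the IBM Cloud instructions for the supported steps:

```yaml
# Sketch only: names and channel are assumptions; follow the IBM Cloud documentation.
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-cnv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: kubevirt-hyperconverged-group
  namespace: openshift-cnv
spec:
  targetNamespaces:
    - openshift-cnv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: hco-operatorhub
  namespace: openshift-cnv
spec:
  name: kubevirt-hyperconverged
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  channel: stable                     # assumption: use the channel OperatorHub offers on your cluster
  installPlanApproval: Automatic
---
# Once the operator is running, a HyperConverged CR deploys OpenShift Virtualization itself.
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}
```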

ROKS on VPC provides a managed Kubernetes platform with integrated Red Hat OpenShift tooling. VPC-based clusters offer enhanced network isolation, multi-zone high availability, scalable infrastructure, and secure workload environments. This makes VPC an ideal foundation for running OpenShift Virtualization (Virt), which enables VM workloads alongside containers.

This is the third blog in a series of four in which we look at OpenShift Virtualization on ROKS from a VMware administrator's perspective.

What is OpenShift Virtualization on ROKS?

OpenShift Virtualization is a Kubernetes-native virtualization platform that allows organizations to run both containerized applications and virtual machines on a single, unified platform. On IBM Cloud ROKS, this capability extends the power of Red Hat's fully managed OpenShift service to include comprehensive VM management alongside traditional container orchestration.

The integration brings together the enterprise-grade security and scale of IBM Cloud with Red Hat's proven container platform, creating an environment where traditional VM workloads can coexist seamlessly with cloud-native applications.

OpenShift Virtualization on IBM Cloud ROKS supports both new and existing VM workloads, providing features such as:

  • Live migration of VMs across cluster nodes for maintenance and load balancing
  • High availability configurations for mission-critical workloads
  • Dynamic provisioning of storage resources
  • Network integration with OpenShift's software-defined networking
  • Backup and disaster recovery aligned with cloud-native practices

Migrating VMware VMs to OpenShift Virtualization

OpenShift Virtualization allows you to run and manage virtual machine workloads alongside container workloads within an OpenShift cluster. Migrating existing VMs from VMware vSphere to OpenShift Virtualization can be facilitated by the Migration Toolkit for Virtualization (MTV). MTV is included with your Red Hat OpenShift subscription and available in the OpenShift OperatorHub.
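
Because MTV ships as an operator, installing it from the command line is just a matter of applying the usual OLM objects with `oc apply -f`, followed by a ForkliftController resource that tells the operator to roll out the MTV services. A minimal sketch, with the caveat that the channel name is an assumption and should be checked against what OperatorHub offers on your cluster:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-mtv
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: migration
  namespace: openshift-mtv
spec:
  targetNamespaces:
    - openshift-mtv
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: mtv-operator
  namespace: openshift-mtv
spec:
  name: mtv-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
  channel: release-v2.6               # assumption: pick the channel available in OperatorHub
  installPlanApproval: Automatic
---
# The ForkliftController CR deploys the MTV services, including the
# forklift-controller discussed later in this post.
apiVersion: forklift.konveyor.io/v1beta1
kind: ForkliftController
metadata:
  name: forklift-controller
  namespace: openshift-mtv
spec:
  olm_managed: true
```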

The migration process typically involves the following steps:

1.    Preparation: Ensure you have the necessary prerequisites. For VMware migrations, this includes specific VMware privileges, having VMware Tools installed on the source VMs, and unmounting any ISO/CDROM disks. You might also need to obtain the SHA-1 fingerprint of the vCenter host. Specific network ports must be open to allow traffic between OpenShift nodes, VMware vCenter, and VMware ESXi hosts for inventory collection and disk transfer.

2.    Install MTV Operator: Install the Migration Toolkit for Virtualization (MTV) Operator on your OpenShift cluster. This can be done through the OpenShift web console or the command line interface. The MTV user interface is integrated into the OpenShift web console.

3.    Add Source Provider: In the OpenShift console, you will typically add VMware as a source provider, specifying details like the vSphere endpoint.

4.    Create a Migration Plan: You create a migration plan within the OpenShift console. A migration plan can include migrating multiple VMs.

5.    Map Networks and Storage: As part of the migration plan, you define mappings between the source VMware networks and the target OpenShift Virtualization networks. VMs can be connected to the OpenShift cluster's pod network or potentially other networks defined in OpenShift. You also map the source VMware datastores to target OpenShift storage classes.

6.    Select VMs: Choose the specific VMs you want to migrate (your web, app, and db VMs for example) from the source provider inventory.

7.    Choose Migration Type: Select the migration type. Cold migration involves shutting down the source VMs while data is copied. Warm migration allows most data to be copied while the source VMs are running, which requires changed block tracking (CBT) enabled on the VMs and their disks.

8.    Start Migration: Once the plan is configured, you start the migration. The Migration Controller service manages the process, creating a VirtualMachineImport custom resource for each source VM. Data volumes (Persistent Volume Claims) are created for the VM disks.

9.    Post-Migration: After the migration finishes, your VMs (web, app, db) will be running on Red Hat OpenShift and can be managed like any other workload. You can monitor their status and details through the OpenShift console. There may be post-migration configuration tasks on the VMs, especially around networking, to make them work correctly in the new environment; see OpenShift Virtualization on IBM Cloud ROKS: a VMware administrator’s guide to networking. Also see Using migration hooks in Migration Toolkit for Virtualization, which discusses how to automate tasks immediately before or after a VM is migrated using MTV’s migration hooks.
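
Behind this console workflow, MTV creates a handful of custom resources in the openshift-mtv namespace. For readers who prefer YAML, here is a hedged sketch of what they might look like for the example web/app/db VMs: a credentials Secret and vSphere Provider (steps 1 and 3), a Plan that selects the VMs and references network and storage maps (steps 4 to 7), and a Migration that runs the plan (step 8). All names, the vCenter URL, and the Secret keys are assumptions for illustration; check the MTV documentation for the exact fields your MTV version expects.

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: vsphere-credentials
  namespace: openshift-mtv
type: Opaque
stringData:
  user: administrator@vsphere.local        # illustrative credentials
  password: <password>
  thumbprint: <vCenter SHA-1 fingerprint>  # from the preparation step above
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Provider
metadata:
  name: vsphere-01
  namespace: openshift-mtv
spec:
  type: vsphere
  url: https://vcenter.example.com/sdk     # illustrative vCenter endpoint
  secret:
    name: vsphere-credentials
    namespace: openshift-mtv
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: three-tier-app
  namespace: openshift-mtv
spec:
  warm: true                               # set to false for a cold migration
  targetNamespace: vms
  provider:
    source:
      name: vsphere-01
      namespace: openshift-mtv
    destination:
      name: host                           # the local OpenShift Virtualization provider
      namespace: openshift-mtv
  map:
    network:
      name: vsphere-network-map            # NetworkMap/StorageMap are sketched later in this post
      namespace: openshift-mtv
    storage:
      name: vsphere-storage-map
      namespace: openshift-mtv
  vms:
    - name: web-vm
    - name: app-vm
    - name: db-vm
---
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: three-tier-app-migration-1
  namespace: openshift-mtv
spec:
  plan:
    name: three-tier-app
    namespace: openshift-mtv
```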

A deeper dive into Migration Toolkit for Virtualization

In OpenShift Virtualization, the `forklift-controller` is a component of the OpenShift Migration Toolkit for Virtualization (MTV). This toolset is specifically designed to migrate virtual machines (VMs) from traditional virtualization platforms like VMware vSphere, Red Hat Virtualization (RHV), and OpenStack, into OpenShift Virtualization, which runs on top of OpenShift/Kubernetes.

The forklift-controller is the central control-plane component in the MTV architecture. It acts as the orchestrator and coordinator for the VM migration process. It runs as a Kubernetes controller inside the OpenShift cluster and manages custom resources (CRs) that represent source providers, mappings, plans, and migrations.

It is implemented as a Kubernetes controller that watches specific Custom Resource Definitions (CRDs) and reacts to changes by performing the necessary actions to move VMs from the source to OpenShift Virtualization.

Its core responsibilities are as follows:

  • Provider Discovery and Inventory Collection - It connects to source platforms like VMware vSphere or RHV using credentials defined in `Provider` CRs. It collects metadata such as:
    • List of VMs
    • VM configurations (CPU, memory, disks, NICs)
    • Resource pools
    • Storage domains
    • Networks
  • Network and Storage Mapping - Watches NetworkMap and StorageMap CRs and validates mappings between source and target environments.
  • Plan Management – It manages Plan CRs that define which VMs are to be migrated and how. It validates configurations and ensures that everything is mapped correctly, as well as providing a dry-run or pre-migration validation.
  • Orchestrating Migration – It watches Migration CRs which trigger actual VM moves and coordinates the following steps:
    • Exporting the VM from the source, often using tools like virt-v2v.
    • Creating a temporary VM representation in OpenShift.
    • Importing and converting the VM image and config.
    • Launching the new VM as a KubeVirt-based VM (VirtualMachine CR).
  • Status Tracking and Reporting – It monitors migration progress, updates status in the relevant CRs, and raises alerts/errors if a migration fails due to network, storage, or resource issues.

The forklift-controller is written in Go, following the Kubernetes controller-runtime framework. It is deployed as a container inside OpenShift (usually in the openshift-mtv or forklift namespace) and interacts heavily with:

  • Kubernetes API.
  • Source platform APIs (vSphere, RHV, OpenStack).
  • KubeVirt APIs.

Key CRDs handled by forklift-controller include: 

  • Provider - Describes the source virtualization environment.
  • NetworkMap - Maps source networks to OpenShift networks.
  • StorageMap - Maps source datastores/disks to OpenShift PVCs/storage classes.
  • Plan - Describes which VMs to migrate and their configurations.
  • Migration - A job that runs a plan – starts a migration for one or more VMs.
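
To make the two mapping CRDs concrete, here is a minimal sketch that attaches migrated NICs to the pod network and places disks on an IBM Cloud VPC Block Storage class. The provider, port group, datastore, and storage class names are assumptions for illustration:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: NetworkMap
metadata:
  name: vsphere-network-map
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vsphere-01
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: VM Network               # vSphere port group
      destination:
        type: pod                      # attach migrated NICs to the cluster pod network
---
apiVersion: forklift.konveyor.io/v1beta1
kind: StorageMap
metadata:
  name: vsphere-storage-map
  namespace: openshift-mtv
spec:
  provider:
    source:
      name: vsphere-01
      namespace: openshift-mtv
    destination:
      name: host
      namespace: openshift-mtv
  map:
    - source:
        name: datastore1               # vSphere datastore
      destination:
        storageClass: ibmc-vpc-block-10iops-tier   # assumption: pick a class available on your cluster
```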

A simplified migration flow is as follows:

1.    Admin defines Provider (e.g., vSphere connection info).

2.    forklift-controller collects inventory from the source provider.

3.    Admin defines network and storage mappings.

4.    Admin creates a Plan choosing VMs to migrate.

5.    Admin initiates Migration from the Plan.

6.    forklift-controller:

a.    Co-ordinates image conversion.

b.    Deploys KubeVirt VMs.

c.    Connects networks and storage.

7.    VMs are live in OpenShift Virtualization.
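
The end result of step 7 is an ordinary KubeVirt VirtualMachine backed by the PersistentVolumeClaims that MTV created during the copy phase. A heavily simplified sketch of what one of the migrated VMs might look like (MTV generates far more detail, and every value here is illustrative):

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: web-vm
  namespace: vms
spec:
  running: true
  template:
    spec:
      domain:
        cpu:
          cores: 2
        memory:
          guest: 4Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
          interfaces:
            - name: default
              masquerade: {}           # pod-network attachment, per the NetworkMap above
      networks:
        - name: default
          pod: {}
      volumes:
        - name: rootdisk
          persistentVolumeClaim:
            claimName: web-vm-rootdisk # PVC created by MTV during the disk copy
```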

MTV:

  • Supports warm migrations - VM runs at source while data is copied gradually.
  • Supports cold migrations - VM is shut down before migration.
  • Provides UI integration via OpenShift Web Console for guided workflows.
  • Compatible with `virt-v2v` for disk image format conversion (from VMDK/QCOW2 to KubeVirt-compatible formats).

Warm migration

Warm migration is one of the more powerful features MTV offers, allowing you to migrate running VMs from a source platform like vSphere into OpenShift Virtualization with minimal downtime. Warm (pre-copy) migration allows you to:

  • Start copying VM data while the source VM is still running.
  • Copy incrementally over multiple rounds (dirty blocks only).
  • Minimize cutover time (downtime), only shutting down the VM for final sync.

This contrasts with cold migration, where the source VM is shut down before data transfer starts, resulting in longer downtime. Pre-conditions for warm migration include the following:

  • VM must be on vSphere (currently, warm migration is primarily supported for vSphere).
  • Migration Plan must use warm strategy.
  • The forklift-controller is running.
  • The virt-v2v image conversion tool is accessible.
  • A Plan and Migration CR are defined for the VM(s) being moved.

Initial Sync (Full Copy)

The forklift-controller starts a conversion pod for each VM. The conversion pod uses virt-v2v (with the VDDK plugin for vSphere) to:

  • Connect to the source VM’s disk (via VDDK).
  • Copy all blocks to the destination PVC in OpenShift.

This happens while the VM is still running. This is like a full back-up or snapshot copy.

Change Block Tracking (CBT)

After the full copy, the controller enables CBT on the source VM (only if the source platform supports it; vSphere does). The controller waits for a period (configurable) to track dirty blocks (disk changes) while the VM continues running. At the end of the interval, another incremental copy is triggered. Only blocks that changed since the last copy are transferred. This cycle repeats for several iterations. Internally, it tracks this through a PreCopy phase in the Migration CR.

Cutover (Final Sync and Shutdown)

At some point (manual or scheduled), the cutover is triggered:

  • The source VM is gracefully shut down.
  • The final block delta (the last dirty blocks) is copied.
  • The destination VM is started in OpenShift as a KubeVirt VM. This is the only period of downtime, and it’s typically a few minutes.
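
In CR terms, scheduling the cutover is a matter of setting a timestamp on the Migration resource; leaving it unset means you trigger the cutover manually from the console when you are ready. A minimal sketch, reusing the plan name from the earlier example:

```yaml
apiVersion: forklift.konveyor.io/v1beta1
kind: Migration
metadata:
  name: three-tier-app-cutover
  namespace: openshift-mtv
spec:
  plan:
    name: three-tier-app
    namespace: openshift-mtv
  # Final sync and source VM shutdown happen at this time (UTC);
  # omit cutover to trigger the final sync manually.
  cutover: "2025-11-30T02:00:00Z"
```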

Key components involved are:

  • forklift-controller - Orchestrates the warm migration state machine.  
  • virt-v2v-conversion-pod - Pod that performs the disk conversion using virt-v2v.
  • VDDK plugin - VMware Virtual Disk Development Kit, used for reading VM disks efficiently.
  • Source VM with CBT - Change Block Tracking needs to be enabled for warm migration.

You can tune the warm migration behaviour by editing the Migration or Plan CR:

  • warm - Set to true to enable warm migration.
  • cutover - Defines when the final sync should happen (immediate, scheduled, or manual).
  • interval - Time between pre-copy iterations.
  • maxIterations - Number of incremental sync cycles before cutover.

Limitations include the following:

  • Currently best supported with vSphere.
  • CBT must be enabled and functional; issues with CBT lead to a full re-copy.
  • High churn rate on source VMs can increase final sync time.
  • Not suitable for stateful apps unless consistent snapshots are used.

You can initiate a warm migration and manually trigger cutover when you're ready (e.g., during a low-traffic maintenance window).

Summary

The Migration Toolkit for Virtualization is included with Red Hat OpenShift and enables organizations to migrate virtual machines from VMware vSphere to OpenShift Virtualization. It leverages KubeVirt to run VMs alongside containers in a Kubernetes-native environment, offering a unified platform for hybrid workloads.

  • Pros:
    • Seamless Integration: MTV integrates directly with OpenShift and supports both cold and warm migrations, allowing flexibility depending on downtime tolerance.
    • Unified Management: Post-migration, VMs are managed alongside containers via the OpenShift console, simplifying operations and aligning with DevOps practices.
    • Modernization Path: MTV enables gradual modernization by allowing legacy VMs to coexist with cloud-native applications.
  • Cons:
    • Learning Curve: Teams familiar with VMware may face a steep learning curve adapting to Kubernetes and OpenShift paradigms.
    • Ecosystem Maturity: While growing rapidly, OpenShift’s virtualization ecosystem is not yet as mature or extensive as VMware’s.
    • Performance Tuning Required: Migration speed and performance depend heavily on network/storage bandwidth and proper configuration.