The hybrid cloud landscape continues to evolve, and IBM Cloud Red Hat OpenShift Kubernetes Service (ROKS) has taken a significant step forward with the recent availability of OpenShift Virtualization. This development marks a pivotal moment for enterprises looking to modernize their infrastructure while maintaining their existing virtual machine investments.
Red Hat OpenShift on IBM Cloud (ROKS) now supports the OpenShift Virtualization operator, so you can now run your virtual machine (VM) workloads on ROKS. See Installing the OpenShift Virtualization Operator on Red Hat OpenShift on IBM Cloud clusters.
ROKS on VPC provides a managed Kubernetes platform with integrated Red Hat OpenShift tooling. VPC-based clusters offer enhanced network isolation, multi-zone high availability, scalable infrastructure, and secure workload environments. This makes VPC an ideal foundation for running OpenShift Virtualization (Virt), which enables VM workloads alongside containers.
This is the first blog in a series of four blogs where we cover the following from a VMware administrator's perspective:
- Storage
- Networking (this blog)
- Migration (coming soon)
- Advanced Networking (coming soon)
What is OpenShift Virtualization on ROKS?
OpenShift Virtualization is a Kubernetes-native virtualization platform that allows organizations to run both containerized applications and virtual machines on a single, unified platform. On IBM Cloud ROKS, this capability extends the power of Red Hat's fully managed OpenShift service to include comprehensive VM management alongside traditional container orchestration.
The integration brings together the enterprise-grade security and scale of IBM Cloud with Red Hat's proven container platform, creating an environment where traditional VM workloads can coexist seamlessly with cloud-native applications.
OpenShift Virtualization on IBM Cloud ROKS supports both new and existing VM workloads, providing features such as:
- Live migration of VMs across cluster nodes for maintenance and load balancing
- High availability configurations for mission-critical workloads
- Dynamic provisioning of storage resources
- Network integration with OpenShift's software-defined networking
- Backup and disaster recovery aligned with cloud-native practices
OpenShift Networking Fundamentals
As a VMware administrator venturing into OpenShift Virtualization, understanding the networking architecture is crucial for successful VM deployments and migrating VMs from VMware to OpenShift Virtualization.
This blog explores how networking operates in ROKS OpenShift Virtualization and describes the most important concepts to grasp, particularly regarding masquerade networking and the pod network.
Before diving into VM-specific networking, let's establish a baseline understanding of OpenShift's three key network types:
- Machine Network: For the actual OpenShift nodes.
- Pod Network: For container workloads (including virtual machines).
- Service Network: For Kubernetes services.
Machine Network
The machine network in ROKS refers to the network used by the actual OpenShift worker nodes.
- Subnet range: Uses the IP ranges of the VPC subnets in your IBM Cloud account; you define these subnet ranges.
- Implementation: Deployed into IBM Cloud VPC subnets
- Allocation: Each zone in a multi-zone ROKS cluster gets its own subnet
- Example:
- Zone 1: 10.240.0.0/24
- Zone 2: 10.240.64.0/24
- Zone 3: 10.240.128.0/24
Each worker node receives an IP address from the VPC subnet in its respective zone. This network is used for:
- Node-to-node communication
- Pod network traffic between nodes via Calico L3 routing and tunnelling.
- Ingress and egress traffic to the containers and machines.
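For example, you can confirm the VPC addresses assigned to the workers directly from the cluster (output abbreviated and illustrative; in ROKS the node names are the workers' private IPs):
oc get nodes -o wide
# NAME            STATUS   ROLES           INTERNAL-IP     ...
# 10.240.0.10     Ready    master,worker   10.240.0.10     ...
# 10.240.64.12    Ready    master,worker   10.240.64.12    ...
# 10.240.128.14   Ready    master,worker   10.240.128.14   ...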
Pod Network
The pod network is where all pods (and virtual machines) in the cluster receive their IP addresses.
- Subnet range: Allocated from 172.17.0.0/16; the default pod subnet for a cluster is 172.17.0.0/18.
- Implementation: Managed by the Calico CNI plugin.
- Allocation: The pod CIDR is further divided into node-specific segments.
- Example:
- Worker-node-1: 172.17.0.0/24
- Worker-node-2: 172.17.1.0/24
Each pod on a node gets assigned an IP from that node's allocated pod subnet. This allows for efficient routing between pods across the cluster. The pod network is the primary network that connects all pods across all nodes in the cluster. In OpenShift:
- Each pod receives a unique IP address from the designated CIDR range.
- Pods can communicate with each other directly using these IPs regardless of the node they're running on.
- The Container Network Interface (CNI) plugin handles pod-to-pod traffic routing.
- This network provides the foundation for all containerized workloads, including VMs, which are “hosted” in a container.
If you specified your own pod subnet during cluster creation, your pods are assigned IP addresses from this range. If you did not specify a custom pod subnet, for the first cluster you create, the default pod subnet will be 172.17.0.0/18.
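As a quick check, listing pods with their IPs shows each pod drawing its address from the segment of the node it runs on (names and addresses are illustrative):
oc get pods -n my-namespace -o wide
# NAME                             READY   STATUS    IP            NODE
# virt-launcher-example-vm-abcde   2/2     Running   172.17.0.25   10.240.0.10
# my-app-7c9f6b5d8-xk2lp           1/1     Running   172.17.1.14   10.240.64.12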
Service Network
The service network provides stable virtual IPs for Kubernetes services.
- Subnet range: 172.21.0.0/16
- Implementation: Implemented by kube-proxy
- Usage: Service IPs are cluster-wide virtual IPs that don't correspond to actual network interfaces
Service IPs are used as stable endpoints to access pods, regardless of pod restarts or relocations. They act as an abstraction layer between clients and the actual pods providing a service:
- Services provide load balancing and a single entry point to access pods, regardless of pod lifecycle or IP changes
- For VMs, services can be crucial for providing stable access points to VM workloads.
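For example, the built-in kubernetes Service shows a virtual IP from this range (illustrative output):
oc get svc kubernetes -n default
# NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
# kubernetes   ClusterIP   172.21.0.1   <none>        443/TCP   30d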
OpenShift Virtualization Masquerade Networking
While OpenShift Virtualization offers several networking models for VMs, masquerade is currently the only model available on ROKS.
Masquerade is the default and most flexible networking model in OpenShift Virtualization. It's conceptually similar to NAT in traditional virtualization.
Masquerade works as follows:
- Each VM runs inside a pod, the virt-launcher pod.
- The VM receives a private IP address that's only visible within that pod (by default 10.0.2.2).
- The pod itself has an IP from the pod network.
- Traffic from the VM is NAT'd through the pod's IP address.
- Return traffic is directed back to the pod and then to the VM.
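You can see this mapping from the cluster side: the IP reported for the VirtualMachineInstance is the virt-launcher pod's IP from the pod network, not the 10.0.2.2 address the guest sees (names and addresses are illustrative):
oc get vmi example-vm
# NAME         AGE   PHASE     IP            NODENAME      READY
# example-vm   5m    Running   172.17.0.25   10.240.0.10   True
oc get pod -l kubevirt.io/domain=example-vm -o wide
# NAME                             READY   STATUS    IP            NODE
# virt-launcher-example-vm-abcde   2/2     Running   172.17.0.25   10.240.0.10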

Benefits of Masquerade:
- VMs can communicate with any pod or service in the cluster.
- Works without additional network configuration.
- Provides network isolation between VMs.
- Functions across any underlying infrastructure.
- Simplifies VM migration between nodes.
Drawbacks of Masquerade:
- VMs are not directly accessible from outside the cluster without additional services.
- Performance overhead due to NAT.
- Limited to TCP and UDP protocols (certain protocols like ICMP may behave unexpectedly).
Configuring Masquerade
In a VM definition, masquerade is configured as follows:
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: example-vm
spec:
  template:
    spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
      networks:
      - name: default
        pod: {}
With masquerade networking, VM egress is relatively straightforward. VMs can access:
- Other pods in the cluster via their pod IPs.
- Cluster services via their service IPs.
- External networks via the node's networking (going through NAT).
To control where VMs can send traffic, Network Policies can be used:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: allow-http-and-https
spec:
  podSelector:
    matchLabels:
      role: frontend
  ingress:
  - ports:
    - protocol: TCP
      port: 80
    - protocol: TCP
      port: 443
Key Implications of Masquerade
- External Access to VMs:
- VMware VMs are often directly accessible by IP.
- OpenShift VMs behind masquerade are NOT directly accessible by default.
- You must explicitly create Services and Routes for external access.
- VM-to-VM Communication:
- In VMware, VMs on the same network segment communicate directly using the VM IP address.
- In OpenShift, VMs need to communicate via their pod IPs or service IPs, NOT their VM IP address.
- DNS Configuration:
- OpenShift provides DNS for services, but the VM's private DNS within the masquerade network won't resolve cluster resources without additional configuration.
- IP Persistence:
- VMware VMs often keep the same IP through reboots.
- OpenShift pod IPs can change when pods restart, affecting the external identity of your VM.
Understanding OpenShift DNS for VMware Administrators
In OpenShift, name resolution for pods, and VMs, is handled through internal DNS provided by the Cluster DNS Operator. OpenShift deploys a DNS service that runs as a set of pods. This service is responsible for resolving names within the cluster.
Each pod, or VM, gets an /etc/resolv.conf file configured by the kubelet. It points to the cluster DNS service, usually at 172.21.0.10, depending on your cluster setup. You can resolve services by name, for example my-service.my-namespace.svc.cluster.local. Direct pod-to-pod DNS resolution is not supported by default, as communication is usually done via services. The pod's resolv.conf includes search domains such as: search my-namespace.svc.cluster.local svc.cluster.local cluster.local. You can customize DNS settings for pods using the dnsConfig field in the pod spec.
The following is from a RHEL VM deployed on the pod network.
cat /etc/resolv.conf
# Generated by NetworkManager
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 172.21.0.10
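To customize DNS per VM, the standard Kubernetes dnsPolicy and dnsConfig fields can be set on the VM's template; KubeVirt passes them through to the virt-launcher pod. A minimal sketch, added under spec.template.spec of the VirtualMachine definition (the server and search domain are illustrative):
spec:
  template:
    spec:
      dnsPolicy: None
      dnsConfig:
        nameservers:
        - 10.190.223.212   # illustrative upstream DNS server
        searches:
        - bu.demo.test     # illustrative search domain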
To add a conditional name server to the cluster, edit the DNS operator with oc edit dns.operator/default, or via the UI: Administration, Cluster Settings, Configuration, DNS, YAML. Under spec, add:
spec:
  servers:
  - forwardPlugin:
      upstreams:
      - 10.190.223.212
    name: bu-demo-test
    zones:
    - bu.demo.test
You can:
- Define multiple server blocks for different domains.
- List multiple IPs in the forwardPlugin.upstreams list for redundancy.
Understanding OpenShift Routes and Services for VMware Administrators
As a VMware administrator, you're familiar with networking concepts like virtual switches, port groups, and NSX load balancers. When transitioning to OpenShift, you'll encounter two critical networking constructs: Services and Routes.
OpenShift Services
OpenShift Services provide:
- Stable Network Identity: Similar to how you might use DNS names or static IPs for VMware VMs, Services provide a stable network endpoint regardless of where the actual workloads run.
- Internal Load Balancing: Like distributing traffic across a VMware DRS cluster, Services distribute traffic to multiple identical pods (or VMs in OpenShift Virtualization).
- Port Mapping: Similar to VMware NSX load balancer port rules, Services define which ports are exposed and how they map to container/VM ports.
You can define a service with one of the following service types:
- ClusterIP: Accessible only from within the cluster.
- NodePort: Exposes the Service on every node's IP on a specific port.
- LoadBalancer: Creates an IBM Cloud VPC load balancer.
Here is an example of a service for a VM in OpenShift Virtualization:
apiVersion: v1
kind: Service
metadata:
  name: my-vm-service
spec:
  selector:
    kubevirt.io/domain: my-windows-vm # This targets a specific VM
  ports:
  - name: rdp
    protocol: TCP
    port: 22000 # Port on the Service IP
    targetPort: 3389 # Port on the VM
  type: ClusterIP # Only accessible within the cluster
It is important to understand that all service types also create a ClusterIP that is accessible from inside the cluster.
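If you have the virtctl client installed, an equivalent Service can be created from the command line (a sketch; the VM name, service name, and ports are illustrative):
virtctl expose vm my-windows-vm --name my-vm-service --type ClusterIP --port 22000 --target-port 3389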
OpenShift Routes
In OpenShift, routes are a mechanism that allows services running inside the cluster to be accessed from outside the cluster. They act as a load balancer and reverse proxy, directing traffic to the appropriate pod within the OpenShift cluster. The pod could contain a VM. Essentially, a route exposes a service to the internet by associating it with a hostname and optional security settings.
Routes can be configured to distribute traffic across multiple pods running the same service, ensuring high availability and performance. Routes support both secured (HTTPS) and unsecured (HTTP) traffic, allowing for the configuration of TLS certificates and other security settings to protect sensitive data. See Exposing apps with routes in Red Hat OpenShift 4. Routes work as follows:

Example:
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: my-vm-website
spec:
  host: vm-website.apps.my-cluster.example.com
  to:
    kind: Service
    name: my-vm-service
  port:
    targetPort: web
  tls:
    termination: edge
    insecureEdgeTerminationPolicy: Redirect
Common Use Cases with VMs
As a VMware administrator, this is how you can expose your VMs in OpenShift:
Exposing VM Web Applications:
- Create a Service targeting the VM pod.
- Create a Route pointing to that Service.
- Access via the Route hostname.
The most common method involves creating a Service to expose the VM, then a Route to make it accessible outside the cluster:
apiVersion: v1
kind: Service
metadata:
  name: vm-web-service
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    vm.kubevirt.io/name: example-vm-web
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: vm-web-route
spec:
  to:
    kind: Service
    name: vm-web-service
  port:
    targetPort: 80
This exposes the VM's HTTP port via an OpenShift route, making it accessible through the cluster's router (HAProxy).
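Once the Route is admitted, you can test it from outside the cluster (a sketch; the hostname is assigned by the cluster's router):
HOST=$(oc get route vm-web-route -o jsonpath='{.spec.host}')
curl -I http://$HOST/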
Exposing VM Non-HTTP Services (like databases):
- Create a Service targeting the VM pod.
- Use type LoadBalancer or NodePort for external access.
- No Route required (Routes primarily handle HTTP/HTTPS).
The following exposes the Service using a LoadBalancer. In ROKS an Application Load Balancer (ALB) is automatically instantiated and configured:
apiVersion: v1
kind: Service
metadata:
  name: vm-database-lb-service
spec:
  type: LoadBalancer
  ports:
  - port: 1430
    targetPort: 1430
    protocol: TCP
  selector:
    vm.kubevirt.io/name: example-vm-database
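When the VPC load balancer has been provisioned, the Service reports the external address clients should use (illustrative output; the hostname is assigned by IBM Cloud):
oc get svc vm-database-lb-service
# NAME                     TYPE           CLUSTER-IP      EXTERNAL-IP                         PORT(S)
# vm-database-lb-service   LoadBalancer   172.21.45.123   1234abcd-eu-gb.lb.appdomain.cloud   1430:31567/TCP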
For simpler setups, a NodePort service exposes the VM on a specific port on all cluster worker nodes:
apiVersion: v1
kind: Service
metadata:
  name: vm-database-nodeport-service
spec:
  type: NodePort
  ports:
  - port: 1430
    targetPort: 1430
    nodePort: 30080
    protocol: TCP
  selector:
    vm.kubevirt.io/name: example-vm-database
Note that with the current "Secure by default" implementation, ROKS enforces security group rules that do not allow NodePort services to be reached from outside the cluster. For example, workloads (VPN clients or VSIs) in the same VPC cannot consume NodePort-exposed services without manual changes to the security groups.
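As a sketch of that manual change (assuming the IBM Cloud VPC CLI and that your workers use the default kube-<clusterID> security group; verify the group name and current CLI syntax for your account), you could allow clients in the VPC to reach the NodePort:
ibmcloud is security-group-rule-add kube-<clusterID> inbound tcp \
  --port-min 30080 --port-max 30080 --remote 10.240.0.0/16   # illustrative source CIDR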
Internal-Only VM Services:
Use ClusterIP Service type, which is similar to having VMs on an internal-only network in VMware.
Calico Implementation in ROKS
Typically, OVN-Kubernetes is the CNI for OpenShift; for ROKS, however, Calico serves as the primary CNI plugin and is responsible for:
- Pod-to-pod networking: Managing how containers communicate with each other.
- Network policy enforcement: Implementing Kubernetes NetworkPolicy resources.
- IP address management (IPAM): Allocating and tracking IP addresses for pods.
With Calico:
- Calico implements a Layer 3 approach to networking, rather than overlay networking.
- Each node is allocated a smaller subnet from the pod CIDR.
- Calico programs routes on each node to direct traffic to the appropriate destination.
Calico in OpenShift replaces the default OpenShift SDN with a pure Layer 3 networking approach:
- Instead of overlay networks with encapsulation, Calico uses direct IP routing between hosts.
- Calico uses Border Gateway Protocol (BGP) to exchange routing information between nodes, enabling each node to know how to reach pods on other nodes.
- Unlike OpenShift SDN or OVN-Kubernetes, which use VXLAN or Geneve overlays, Calico uses IP-in-IP encapsulation where tunnelling is needed.
- Calico handles IP address management, assigning addresses to pods from configured IP pools.
The diagram below shows how the networks are used:

The cali* interfaces are virtual Ethernet (veth) pair devices that Calico creates to connect pods/VMs to the host network. Each cali* interface you see on the host is one end of a veth pair, with the other end placed inside the pod/VM network namespace. Key characteristics of these interfaces include:
- Format: cali + random alphanumeric string (e.g., caliab12c456).
- Each interface name is unique across the node.
- Each cali* interface corresponds to exactly one pod or VM.
- Each pod/VM has a corresponding cali* interface on the host.
- Provides layer 3 connectivity between host and pod/VM.
- Enables Calico's iptables rules to process traffic to/from specific pods.
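For example, on a worker node (reachable with oc debug node/<node-name> and chroot /host) you can list the veth devices and the per-pod routes that Calico programs (illustrative output):
ip -br link show | grep cali
# caliab12c456@if3   UP
ip route | grep cali
# 172.17.0.25 dev caliab12c456 scope link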
Ingress and Egress
As shown in the diagram below:
- For external ingress to a VM, ROKS uses an IBM Cloud Load Balancer with a FQDN that resolves to public IPs.
- For egress traffic from a VM, traffic flows through the worker node's network interface, which performs a source NAT using the worker node's IP address.

Ingress Example
For a VM in OpenShift Virtualization with Calico, the traffic flow is shown below for an Internet resource reaching the VM:
Internet → load-balancer → NodePort → ClusterIP (Service IP) → cali* interface on host → eth0 inside container → virt-launcher container → VM's virtual NIC → VM
Egress example
For a VM in OpenShift Virtualization with Calico, the traffic flow is shown below for a VM reaching the Internet:
VM → VM's virtual NIC → virt-launcher container → eth0 inside container → cali* interface on host → host routing tables → physical NIC → public gateway → Internet
For a VM to VM in OpenShift Virtualization with Calico with both VMs on the same worker node, the traffic flow is shown below:
VM1 → VM1's virtual NIC → VM1’s virt-launcher container → eth0 inside VM1 container → VM1 cali* interface on host → host routing tables → VM2 cali* interface on host → eth0 inside VM2 container → virt-launcher VM2 container → VM2's virtual NIC → VM2
Calico applies network policies using iptables rules that match on these cali* interfaces. Each interface gets specific rules applied based on the policies targeting that pod/VM.
VPC Network Integration
ROKS worker nodes are deployed onto IBM Cloud VPC networks in your IBM Cloud account:
- VPC Security Groups control traffic to/from worker nodes.
- VPC Routing Tables define how traffic flows between subnets.
- VPC Load Balancers provide ingress to the cluster.
- VPC Network ACLs can be used for additional traffic filtering.
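Because these are ordinary VPC resources in your account, you can inspect them with the IBM Cloud CLI (a sketch; assumes the VPC infrastructure plugin is installed):
ibmcloud is security-groups    # security groups attached to the worker nodes
ibmcloud is load-balancers     # load balancers created for LoadBalancer services and ingress
ibmcloud is network-acls       # ACLs applied to the cluster subnets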
Troubleshooting VM Networking Issues
Common issues VMware administrators might encounter in OpenShift Virtualization:
1. VM can't reach external networks:
- Check masquerade is configured correctly.
- Verify no NetworkPolicy is blocking traffic.
- Ensure the node has external connectivity.
- Check the VM's default gateway configuration.
2. External services can't reach the VM:
- Masquerade VMs are not directly accessible by default.
- Verify service and route/ingress configurations.
- Check selectors match the VM's labels.
3. VM-to-VM communication issues:
- For masquerade networks, ensure both VMs are targeting service IPs or pod IPs correctly and not the IP address of the VM.
- Check for any NetworkPolicy objects that might be restricting traffic.
- Verify the VMs are in the same namespace or that cross-namespace communication is allowed.
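The following commands help with each of these checks (a sketch; namespace and resource names are illustrative):
oc get vmi -n my-namespace                        # VMI phase, IP (the pod IP), and node
oc get pod -n my-namespace -o wide                # virt-launcher pod status and IP
oc get networkpolicy -n my-namespace              # policies that might block traffic
oc get svc,route -n my-namespace                  # Services/Routes exposing the VM
oc describe svc my-vm-service -n my-namespace     # confirm the selector matches and Endpoints are populated
virtctl console my-vm -n my-namespace             # check the guest's IP, gateway, and DNS settings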
Key Differences from VMware Networking
Here are some key differences between VMware networking and OpenShift networking:
- Declaration vs. Configuration:
- VMware: You typically configure networking through UI or automation tools
- OpenShift: You declare networking via YAML definitions
- Default Network Isolation:
- VMware: VMs are typically accessible by default based on VLAN/segment. Networks might span all clusters.
- OpenShift: Pods/VMs are isolated by default and need explicit Services/Routes for access. Pod networks are cluster-specific.
- Layer 7 vs. Layer 2-4:
- VMware: Often focuses on L2-L4 networking (unless using NSX Advanced LB)
- OpenShift: Routes are primarily L7 constructs designed for HTTP/HTTPS traffic
- Load Balancer Management:
- VMware: Manual LB configuration or NSX automation
- OpenShift: Automatic as part of the platform (HAProxy-based)
- VM Network Location:
- VMware: VMs connect directly to portgroups on vSwitches
- OpenShift: VMs run inside pods, making the pod network your primary connectivity layer.
- IP Address Management:
- VMware: VMs get IPs from your physical network or VMware-managed DHCP
- OpenShift: In masquerade mode, VMs get private IPs that aren't accessible outside their pod
- Resource Control:
- VMware: Uses folders and resource pools.
- OpenShift: Projects/namespaces provide isolation but with different semantics than VMware folders.
- Network Translation:
- VMware: VMs typically have direct network access without NAT (unless specifically configured)
- OpenShift: Masquerade mode applies NAT between the VM and the pod network.
Benefits Over Traditional VMware Networking
Here are some benefits of OpenShift networking over traditional VMware networking:
- Automated Load Balancing: No need to manually configure load balancers
- Declarative Configuration: Networking defined as code, easily versioned
- Built-in Service Discovery: DNS-based discovery for all services
- Application-Centric: Network follows application, not infrastructure
- Simplified Scaling: Automatically load balances as VMs/pods scale
Understanding these concepts will help you translate your VMware networking knowledge to the OpenShift world and effectively plan how your virtualized workloads will communicate both internally and externally.
Practical Migration Strategies
Here are some practical approaches to handle networking during migration from VMware to OpenShift Virtualization on ROKS:
- Preservation of VM IP Addressing:
- You will not be able to retain your VM’s existing IP address when migrating to OpenShift’s masquerade networking model.
- DNS Integration:
- Configure VMs to use the pod's DNS resolver.
- Use OpenShift services for stable endpoints rather than direct pod IPs.
- Network Services Strategy:
- Replace VMware load balancers with OpenShift Services/Routes or Services/LoadBalancers.
- Replace VMware NSX firewall rules with NetworkPolicy resources.
- Connectivity Testing:
- Validate all application connectivity paths before and after migration.
- Test both internal, pod-to-pod, and external connectivity.
Summary
For VMware administrators, OpenShift Virtualization's networking model may initially seem complex, but it offers some flexibility and security advantages. Masquerade networking provides a good balance of isolation, connectivity, and ease of configuration, but may be restrictive for many current VM workloads.
The most important concept to understand is that ROKS OpenShift Virtualization fundamentally changes how networking reaches your VMs. With masquerade mode, your VMs exist in a private network inside each pod, with NAT handling all external communications. This improves isolation and security but requires a different approach to VM connectivity than VMware's direct network attachment model.
Success in migration requires planning for these differences, particularly around external access to VM services, internal VM-to-VM communication, and DNS resolution. By understanding how masquerade networking and the pod network interact, you can design an effective migration strategy that preserves your application's networking requirements while taking advantage of OpenShift's container-centric architecture.
Masquerade networking uses NAT, and not all VM workloads support NAT; for example, Microsoft does not officially support Active Directory over NAT. Some agent-based software also does not operate over NAT because the agent communicates the VM's actual IP.
While OpenShift Virtualization supports Multus to enable multiple network interfaces on a VM, with ROKS's Calico-based networking you can't configure VMs with more than one network interface.
For most VMware administrators migrating to ROKS, the LoadBalancer service approach provides the most familiar experience, giving your VMs dedicated IPs and standard ports. However, NodePort services work universally and are simpler to set up if you don't need standard port numbers.
By understanding the pod, service, and machine networks and how VMs interact with them through masquerade networking, you can successfully deploy and manage virtualized workloads in OpenShift while ensuring proper ingress and egress connectivity.