SANity Check part 2

By Greg Deffenbaugh

Part 1 of this blog post discussed how Fusion Data Foundation (FDF) can be deployed either in a converged, software-defined architecture (internal) or in a SAN-like fashion, with the data plane deployed on an external IBM Storage Ceph cluster and the FDF control plane deployed in the Red Hat OpenShift clusters.

NVMe/TCP for VMware

The internal vs. external Data Foundation conversation isn't new; Data Foundation has supported both deployment options since OpenShift 4.5 (2020). IBM Storage Ceph 7.1 changes the conversation with the addition of NVMe/TCP support for VMware (and a vCenter plug-in for managing Ceph resources). Now your VMware 7.0u3 and newer clusters can get block storage services directly from a supported IBM Storage Ceph cluster.
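Under the covers, the NVMe/TCP gateway exposes Ceph RBD images as NVMe namespaces, so the block devices an ESXi host sees are ordinary RBD images. As a minimal sketch of that block layer (the pool and image names below are hypothetical, and the gateway subsystem and vCenter plug-in configuration are not shown), the librbd Python bindings can create such an image:

```python
import rados
import rbd

# Minimal sketch using the librbd Python bindings.
# Assumptions: a reachable Ceph cluster, admin credentials in ceph.conf, and an
# existing RBD pool named "vmware-pool" (hypothetical).
cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
try:
    ioctx = cluster.open_ioctx("vmware-pool")
    try:
        # Create a 100 GiB RBD image that an NVMe/TCP namespace (or, later, an
        # OpenShift PV) could be backed by.
        rbd.RBD().create(ioctx, "esxi-datastore-01", 100 * 1024**3)
        print("images in pool:", rbd.RBD().list(ioctx))
    finally:
        ioctx.close()
finally:
    cluster.shutdown()
```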

So, you ask: “Why do I need storage for VMware if I am moving my VMs to OpenShift?” Moving hundreds or thousands of VMs to another platform takes time: months, maybe years, and in many cases longer than the remaining support life of the existing SAN environment. Purchasing a new SAN while migrating workloads to OpenShift doesn’t make financial sense. And while most SAN vendors provide CSI drivers to support provisioning and data snapshots, there isn’t a VASA- or VAAI-type upstream project in the works to extend Kubernetes storage features into the SAN infrastructure.

Aside: There is an upstream project called KubeSAN that carves a single SAN LUN into multiple PVs to address LUN-count challenges with Kubernetes, but it is still a long way from tight integration between Kubernetes and storage arrays.
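To make the CSI point concrete, here is a minimal sketch (using the Kubernetes Python client) of the two things a vendor CSI driver does give you today: dynamic provisioning and snapshots. The StorageClass and VolumeSnapshotClass names are hypothetical placeholders for whatever your SAN vendor’s driver installs.

```python
from kubernetes import client, config

# Minimal sketch: dynamic provisioning plus a CSI snapshot via the Kubernetes
# Python client. StorageClass/VolumeSnapshotClass names are hypothetical.
config.load_kube_config()
core = client.CoreV1Api()
custom = client.CustomObjectsApi()

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "demo-pvc"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "storageClassName": "san-vendor-csi",            # hypothetical
        "resources": {"requests": {"storage": "50Gi"}},
    },
}
core.create_namespaced_persistent_volume_claim(namespace="demo", body=pvc)

snapshot = {
    "apiVersion": "snapshot.storage.k8s.io/v1",
    "kind": "VolumeSnapshot",
    "metadata": {"name": "demo-pvc-snap"},
    "spec": {
        "volumeSnapshotClassName": "san-vendor-snapclass",  # hypothetical
        "source": {"persistentVolumeClaimName": "demo-pvc"},
    },
}
custom.create_namespaced_custom_object(
    group="snapshot.storage.k8s.io",
    version="v1",
    namespace="demo",
    plural="volumesnapshots",
    body=snapshot,
)
```

What CSI does not give you is the deeper array awareness that VASA and VAAI provide on the VMware side, which is exactly the gap the aside above describes.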

This is where the new VMware integration with Ceph comes into play. You can have a common storage infrastructure for your VMware and OpenShift workloads that doesn’t compromise your future. Ceph has been deployed to provide file, block, and object storage for VMs running in OpenStack for years. OpenStack never reached the popularity of VMware, but if you made a 4G phone call, transferred money between accounts in different banks, made a payment from your phone, or watched cable television, you have likely benefited from VMs running on OpenStack and Ceph.

Figure 3: The Journey

Bear with me here; this picture works much better in slide-show mode with animation, but…

Suppose you have decided (like many have) that you want to get off VMware, and you see OpenShift Virtualization as a great way to move forward on the path to cloud-native application deployment. You have a few hundred to a few thousand VMs to move off VMware, and this is going to take months to years to complete. (I recently spoke to a customer planning to migrate 4,000 to 5,000 VMs per year over the next few years.) Now suppose support for your SAN storage and switches expires next year. Buying a new SAN infrastructure doesn’t make good financial sense.

Instead, you can build an IBM Storage Ceph cluster and deploy NVMe/TCP to your VMware clusters to run your VMs. The process looks something like this (oversimplified):

1. Migrate your LUNs from your existing SAN into IBM Ceph. This can be done with VMware Storage vMotion or third-party tools, or IBM Lab Services can help. There may be some refitting of the network required (more on that later). When you are done, your environment will look something like this:

Figure 4: Running the VMware and OpenShift from Common Ceph Cluster

2. The Red Hat Migration Toolkit for Virtualization (MTV) can be used to migrate your VMs from ESXi to OpenShift Virtualization (OSV). Part of the migration involves converting your VMware LUNs into OpenShift Persistent Volumes (PVs), so essentially you’ll be moving data from a LUN on the Ceph cluster to a PV on the same Ceph cluster. When a LUN is deleted, its capacity is returned to the pool to be used for PVs for the next VM migration; in other words, capacity allocated in the Ceph cluster for VMware will be reused by OpenShift as the VMs are migrated to containers. (A quick way to sanity-check where the migrated disks land is sketched after Figure 5 below.)

a. It should be noted that MTV works just fine if the LUN still resides in your SAN. If you have VMs that are ready to move to OSV while your SAN is still deployed, you can skip step 1 for those VMs.

b. As you drain the VMs from ESXi and run them on OpenShift, you will be left with VMware servers connected to the Ceph cluster with no workloads. Assuming those servers have useful life left, you can remove them from the VMware cluster and join them to the OpenShift cluster. For servers with useful life left, changing the data network from FC to IP (if necessary) is likely a very good investment.

3. When you are ready to refactor your applications to run natively in containers, you already have the leading storage infrastructure for OpenShift in place to simplify the journey.

Figure 5: Next Generation Infrastructure                                                  
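As referenced in step 2, here is a minimal sketch (Kubernetes Python client) for sanity-checking that the disks of migrated VMs ended up as PVCs on the Ceph RBD StorageClass. The namespace and StorageClass names are assumptions; check your own cluster for the actual external-mode Data Foundation class name.

```python
from kubernetes import client, config

# Minimal sketch: after an MTV plan completes, list the PVCs backing the
# migrated VM disks and confirm they are bound to the Ceph RBD StorageClass.
# The namespace and StorageClass names below are assumptions.
config.load_kube_config()
core = client.CoreV1Api()

CEPH_RBD_SC = "ocs-external-storagecluster-ceph-rbd"   # assumed class name
pvcs = core.list_namespaced_persistent_volume_claim(namespace="migrated-vms")

for pvc in pvcs.items:
    print(
        f"{pvc.metadata.name}: phase={pvc.status.phase}, "
        f"storageClass={pvc.spec.storage_class_name}, "
        f"on_ceph_rbd={pvc.spec.storage_class_name == CEPH_RBD_SC}"
    )
```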

If you haven’t looked at the economics of refactoring applications, review Virtual Machines Versus Containers, referenced in part 1. There are significant resource savings between running an application in a VM and running the same functionality as microservices in containers (reducing operating system resource overhead is a key part). If you size your Ceph cluster to handle your VMware workload and convert a significant portion of your applications to microservices, you will have space to add additional workloads, such as a data lakehouse, S3 as a service, more OpenShift clusters, etc.
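As a purely illustrative back-of-the-envelope calculation (every number below is a hypothetical assumption, not a figure from this post), the capacity you free up might look like this:

```python
# Hypothetical sizing sketch; all numbers are assumptions for illustration.
vmware_usable_tb = 500            # Ceph capacity sized for the VMware estate
refactored_fraction = 0.40        # share of apps eventually rebuilt as microservices
container_footprint_ratio = 0.5   # assumed storage footprint of a refactored app vs. its VM

still_vm_tb = vmware_usable_tb * (1 - refactored_fraction)
refactored_tb = vmware_usable_tb * refactored_fraction * container_footprint_ratio
freed_tb = vmware_usable_tb - still_vm_tb - refactored_tb

print(f"Still serving VMs:        {still_vm_tb:.0f} TB")
print(f"Serving refactored apps:  {refactored_tb:.0f} TB")
print(f"Freed for new workloads:  {freed_tb:.0f} TB")
```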

Ceph is coming up on the 18th anniversary of Sage Weil’s USENIX paper on Ceph (and the 20th anniversary of Ceph’s inception). Since then, it has been used for many demanding workloads. The addition of NVMe/TCP support and the vCenter plug-in brings IBM Ceph into another set of demanding workloads, just in time to provide the infrastructure to realize the economic gains of getting off VMware.

For more information on IBM Ceph Support for VMware, refer to these blog posts.

  1. IBM Storage Ceph 7.1 is VMware Certified! 
  2. IBM Storage Ceph 7.1 - A new milestone, offering block storage through NVMe/TCP for non-Linux clients 
  3. IBM Storage Ceph 7.1 Adds Support for NVMe over TCP, Bringing Software-Defined Block Storage to VMware Environments 
  4. IBM Ceph Storage Virtualize Plugin Integration for vSphere Environment to Manage the Ceph Block RBD Device 
  5. Managing IBM Ceph Storage systems in the vSphere plugin 1.0.0 dashboard 
  6. Troubleshooting steps for vSphere Plugin to collecting a snap for support engagement 
  7. Unleash Speed: Configuring NVMe-oF Initiator for Blazing-Fast Storage on VMware ESXi 
  8. Configuring the IBM Storage Ceph 7.1 with NVMe-oF initiator for VMware ESXi
  9. Securing IBM Storage Ceph 7.1z1 Cluster of NVMe-oF Service with Mutual TLS (mTLS)
