Infrastructure as a Service

Leveraging Open Source KVM Hypervisor on IBM Cloud - Proxmox VE

By Sharad Chandra posted Thu April 14, 2022 04:21 AM

  
Introduction
Customers intend to create low-cost DEV, TEST, UAT and PRE-PROD environments on their infrastructure, and here I am specifically talking about cloud, where customers have to pay for hardware and software subscriptions. This also holds true when setting up Proof-of-Concept (PoC) environments. When it comes to creating multiple virtual machines, one can either go for a service-provider-managed hypervisor or a self-managed hypervisor, for example VMware, OpenStack, or Hyper-V. When it comes to a customer's Day-2 operations, the main concern is support.

IBM Cloud comes with multiple flavors of bare metal servers, which gives customers options for deployment. Bare metal servers do not come with a hypervisor preinstalled. This environment gives the user complete control over their server infrastructure. Because users get complete control over the physical machine with a bare metal (or dedicated) server, they have the flexibility to choose their own operating system. A bare metal server helps avoid the noisy neighbor challenges of shared infrastructure and allows users to finely tune hardware and software for specific data-intensive workloads.

The primary benefits of bare metal servers on IBM Cloud are based on the access that end users have to hardware resources. The advantages of this approach include the following:

  1. Enhanced physical isolation which offers security and regulatory benefits
  2. Greater processing power
  3. Complete control of their software stack
  4. More consistent disk and network I/O performance
  5. Greater quality of service (QoS) by eliminating the noisy neighbor phenomenon
  6. Imaging capability for creating a seamless experience when moving and scaling workloads

Hypervisors can be deployed on IBM Cloud bare metal servers through PXE boot or IPMI. For details, refer to the link below:

https://cloud.ibm.com/docs/bare-metal?topic=bare-metal-bm-mount-iso

Proxmox VE

Proxmox VE is a complete, open-source server management platform for enterprise virtualization. It tightly integrates the KVM hypervisor, Linux Containers (LXC), and software-defined storage and networking functionality on a single platform. With the integrated web-based user interface you can manage VMs and containers, cluster high availability, and the integrated disaster recovery tools with ease.

Proxmox comes as an ISO image and can be downloaded from the link below:

https://www.proxmox.com/en/downloads/category/iso-images-pve


Fig1: Proxmox Download

Once the installation starts, the screen below appears, and one can proceed with the installation wizard.

Fig2: Proxmox Installation

From a high availability perspective, Proxmox is a cluster-based hypervisor: it is meant to be used with several server nodes. By using multiple nodes in a cluster, we provide redundancy, or High Availability, to the platform while increasing uptime. A production virtual environment may have several dozen to several hundred nodes in a cluster. As an administrator, it may not be realistic to change configuration files in the cluster one node at a time; depending on the number of nodes, it may take several hours just to change one small argument in a configuration file on all the nodes.

To save precious time, Proxmox implemented a clustered filesystem to keep all the configuration files, or any other common files shared by all the nodes in the cluster, in a synchronous state. Its official name is the Proxmox Cluster file system (pmxcfs). pmxcfs is a database-driven filesystem used to store configuration files. Any changes made to files in this filesystem, including copies and deletions, get replicated in real time to all the nodes using Corosync. The Corosync cluster engine is a group communication system used to implement High Availability within an application.
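Assembling such a cluster is done with the `pvecm` tool on the Proxmox hosts. A minimal sketch, assuming two freshly installed nodes; the cluster name and IP address below are placeholders:

```shell
# On the first node: create a new cluster named "demo-cluster"
pvecm create demo-cluster

# On each additional node: join the cluster by pointing at an
# existing member (10.0.0.11 is a placeholder for the first node's IP)
pvecm add 10.0.0.11

# On any node: verify quorum and membership
pvecm status
pvecm nodes
```

Once the nodes have joined, the /etc/pve directory (the pmxcfs mount point) is kept in sync across all nodes by Corosync.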

Fig3: Proxmox Cluster

Customers can get post-production support for Proxmox if needed:

https://www.proxmox.com/en/proxmox-ve/pricing

Proxmox Reference Architecture

Fig4: Proxmox Reference Architecture


KVM supports many more image types than ESXi. Both solutions are compatible with floppy disks, ISO images, physical disks and VMDK, a file format developed by VMware. However, KVM also supports the following formats:

  • Folders on host
  • Raw disk
  • Raw partition
  • HDD
  • QCOW
  • QCOW2
  • QED
  • VDI


Minimum Recommended System Requirements


Implementing Network Isolation 

Bridges are like physical network switches implemented in software. All virtual guests can share a single bridge, or you can create multiple bridges to separate network domains. Each host can have up to 4094 bridges.

The installation program creates a single bridge named vmbr0, which is connected to the first Ethernet card. The corresponding configuration in /etc/network/interfaces might look like this:
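A minimal sketch of such a default bridge setup, assuming the first NIC is named eno1 and using placeholder addresses:

```
auto lo
iface lo inet loopback

iface eno1 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.10.2/24
        gateway 192.168.10.1
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
```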




Fig5: Default Configuration using Bridges
Routed Configuration - NAT 


Most hosting providers do not support the above setup. For security reasons, they disable networking as soon as they detect multiple MAC addresses on a single interface. You can avoid the problem by “routing” all traffic via a single interface. This makes sure that all network packets use the same MAC address.


Fig6: NAT Configuration in Proxmox

Note: Some providers allow you to register additional MACs through their management interface. This avoids the problem, but can be clumsy to configure because you need to register a MAC for each of your VMs.

Masquerading allows guests having only a private IP address to access the network by using the host IP address for outgoing traffic. Each outgoing packet is rewritten by iptables to appear as originating from the host, and responses are rewritten accordingly to be routed to the original sender.
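A sketch of such a masquerading setup in /etc/network/interfaces, following the pattern from the Proxmox network documentation; eno1 and all addresses are placeholders, and guests attach to vmbr0 using 10.10.10.1 as their gateway:

```
auto lo
iface lo inet loopback

auto eno1
iface eno1 inet static
        address 198.51.100.5/24
        gateway 198.51.100.1

auto vmbr0
iface vmbr0 inet static
        address 10.10.10.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

        post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '10.10.10.0/24' -o eno1 -j MASQUERADE
```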

Once Proxmox is installed successfully, one can see the console as below.


Fig7: Proxmox Console


Proxmox and VMWare Comparison

Proxmox and VMware are both generally easy to use after they're installed, although they have significant differences in this regard. VMware is easier for implementations that require a high degree of clustering and HA, since VMware's GUI can prepare and add storage, which isn't possible with Proxmox's GUI. However, Proxmox's command-line interface (CLI) is easier to use than ESXi's CLI, because the base OS for Proxmox is Debian Linux. Administrators can therefore apply their existing Linux knowledge to the Proxmox CLI. In comparison, VMware uses a proprietary version of Linux with its own management tools, which requires additional time to learn.

Clustering and HA are much more flexible with Proxmox, since it treats all of its nodes as master nodes. Any node can manage a cluster in Proxmox, so cluster management remains possible as long as at least one node is running.

Proxmox is open source, while VMware products are proprietary. Commercial support for Proxmox and other services is available on a subscription basis. The lack of any fee for the license itself can greatly facilitate the implementation of Proxmox, since there won't be any issues with license compatibility.

VMware has more features overall than Proxmox. Proxmox has certain key features of its own, such as automatically allowing nodes to use the same shared storage when the user adds them to a cluster.

The biggest difference in the basic features of Proxmox and VMware lies in their typical usage. Both solutions are commonly used for cloud computing and server consolidation. Like VMware, Proxmox is also used for virtualized server isolation and software development.

Conclusion

Customers can leverage IBM Cloud bare metal servers with Proxmox VE to deploy their production or non-production workloads at a considerably lower cost, and take advantage of IBM Cloud infrastructure, which offers $0 data transfer cost across geographies over its private network and 24x7x365 support for its servers. This way they can reduce their TCO and increase ROI.

References
a) https://pve.proxmox.com/wiki/Network_Configuration
b) https://pve.proxmox.com/pve-docs/pve-admin-guide.html
c) https://www.rippleweb.com/vmware-vs-proxmox/
d) https://cloud.ibm.com/docs/bare-metal?topic=bare-metal-about-bm









