As shown in the image above, an IBM Cloud data center comprises a Front End Customer Router (FCR) and a Backend Customer Router (BCR), through which the server racks are connected. It also has a Front End Customer Switch (FCS) and a Back End Customer Switch (BCS), which are not depicted in the block diagram above but can be found in the detailed architecture diagram below. All front end components are connected to the public network, while all back end components are connected to the private network of IBM Cloud.
Each IBM Cloud data center has three or more pods, with each pod containing server racks that house servers, networking, and cooling components.
Physical bare-metal servers in these racks host the ESXi hypervisors that make up the VMware infrastructure. Two or more such data centers constitute a zone, and multiple zones constitute a region. These zones and regions are connected to each other through a private network backbone. When multiple zones constitute a solution, we refer to it as a Multi-Zone Region (MZR) solution. Each cloud service provider has its own implementation specifics for MZR; we will discuss MZR-specific implementations in detail in an upcoming article. In this article, we will discuss the VMware NSX-T implementation on IBM Cloud.
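As a rough sketch of the containment hierarchy described above (a region contains zones, a zone contains two or more data centers, and each data center has three or more pods), the model below uses illustrative names and counts only, not real IBM Cloud inventory:

```python
from dataclasses import dataclass, field

# Illustrative model of the region -> zone -> data center hierarchy.
# All names are hypothetical placeholders.

@dataclass
class DataCenter:
    name: str
    pods: int = 3  # each data center has three or more pods

@dataclass
class Zone:
    name: str
    data_centers: list = field(default_factory=list)  # two or more per zone

@dataclass
class Region:
    name: str
    zones: list = field(default_factory=list)  # multiple zones form an MZR

region = Region("region-a", [
    Zone("zone-1", [DataCenter("dc-01"), DataCenter("dc-02")]),
    Zone("zone-2", [DataCenter("dc-03"), DataCenter("dc-04")]),
    Zone("zone-3", [DataCenter("dc-05"), DataCenter("dc-06")]),
])
print(len(region.zones))  # 3
```

The point of the sketch is only the containment relationship; the private network backbone that connects zones and regions is not modeled here.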
VMware on IBM Cloud
VMware offers virtualization of infrastructure in terms of compute, storage, and network. For each layer, VMware has a distinct offering: ESXi for compute, vSAN for storage, and NSX-V or NSX-T for network. vSAN is optional, as you can also use iSCSI or NFS for storage. However, NSX-T must be implemented to exploit VMware networking capabilities such as multi-tenancy. On IBM Cloud, VMware comes with automated deployment using either an IBM-provided license or a bring-your-own-license (BYOL) option. Customers can, however, opt for their own customized implementations by selecting specific hardware configurations. IBM offers VMware-certified hardware for VMware-based workload deployment, and customers running SAP workloads can additionally select SAP-certified hardware on IBM Cloud. For details, refer to the IBM Cloud docs.
VMware is deployed on bare-metal servers on IBM Cloud Classic Infrastructure.
NSX-T Implementation
This article assumes that the reader is familiar with the NSX-T architecture and its theoretical details, so we will not cover NSX-T fundamentals here and will instead focus on its implementation on IBM Cloud. NSX-T is managed through the NSX Manager, which is deployed on the IBM Cloud private network and connected to the BCR.
NSX-T has two tiers of virtual routers, referred to as Tier-0 and Tier-1. The Tier-0 router is connected to the data center router and exchanges routes with it through the Border Gateway Protocol (BGP).
Tier-0 logical routers have downlink ports that connect to NSX-T Data Center Tier-1 logical routers, and uplink ports that connect to external networks.
While creating the Tier-0 gateway, we need to configure uplink interfaces to the top-of-rack (ToR) switches to form a BGP neighborship. Connecting the uplinks to the ToR requires VLAN-based logical switches, which are provided by the IBM Cloud Classic network infrastructure. The diagram below shows how a Tier-0 router connects to the Back End Customer Switch on the private network and onward to the Backend Customer Router.
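To make the uplink and BGP configuration more concrete, the sketch below builds the kind of JSON bodies the NSX-T Policy API expects for a Tier-0 external (uplink) interface and a BGP neighbor. All addresses, ASNs, and object paths here are hypothetical placeholders: the real values come from the VLANs and subnets assigned by IBM Cloud, and the field names should be verified against the NSX-T Policy API reference for your version before use.

```python
import json

# Hypothetical values for illustration only. Real subnets, ASNs, and
# segment paths come from your IBM Cloud and NSX-T deployment.
UPLINK_SEGMENT = "/infra/segments/vlan-uplink-private"  # assumed VLAN-backed segment path
TOR_PEER_IP = "10.0.0.1"   # hypothetical BCS/ToR peer address
TOR_AS = "64512"           # hypothetical private ASN on the ToR/BCR side

def tier0_uplink_payload(ip_address: str, prefix_len: int) -> dict:
    """Build an external (uplink) interface body for a Tier-0 gateway."""
    return {
        "type": "EXTERNAL",
        "segment_path": UPLINK_SEGMENT,
        "subnets": [{"ip_addresses": [ip_address], "prefix_len": prefix_len}],
    }

def bgp_neighbor_payload(peer_ip: str, remote_as: str) -> dict:
    """Build a BGP neighbor body so the Tier-0 can peer with the ToR."""
    return {"neighbor_address": peer_ip, "remote_as_num": remote_as}

uplink = tier0_uplink_payload("10.0.0.2", 29)
neighbor = bgp_neighbor_payload(TOR_PEER_IP, TOR_AS)
print(json.dumps(uplink, indent=2))
print(json.dumps(neighbor, indent=2))
```

These bodies would then be sent with PATCH requests to the Tier-0 gateway's interface and BGP neighbor endpoints; the sketch deliberately stops at payload construction so it stays runnable without an NSX Manager.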
Note: The image is for illustration only. Subnets will vary as per IBM Cloud specifications; refer to the IBM Cloud documentation for subnet details.
The diagram below depicts the components that are managed by IBM.
Each NSX Edge node can have two uplinks: one for the public network and one for the private network. As shown below, connectivity through the FCR reaches the public network, while connectivity through the BCR stays on the private network.
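The wiring above can be sketched as a minimal model: each edge node carries one public and one private uplink, with public traffic traversing FCS/FCR and private traffic traversing BCS/BCR. The uplink names and VLAN ids are hypothetical; the real VLANs come from your IBM Cloud Classic network allocation.

```python
from dataclasses import dataclass

@dataclass
class Uplink:
    name: str
    network: str  # "public" (via FCS/FCR) or "private" (via BCS/BCR)
    vlan_id: int  # hypothetical VLAN id on the customer switch

@dataclass
class EdgeNode:
    name: str
    uplinks: list

    def path_for(self, network: str) -> str:
        """Return the switch -> router pair that a given network traverses."""
        routes = {"public": "FCS -> FCR", "private": "BCS -> BCR"}
        for up in self.uplinks:
            if up.network == network:
                return f"{up.name} -> {routes[network]}"
        raise ValueError(f"no {network} uplink on {self.name}")

edge = EdgeNode("edge-01", [
    Uplink("uplink-public", "public", 1101),    # hypothetical VLAN
    Uplink("uplink-private", "private", 1102),  # hypothetical VLAN
])
print(edge.path_for("public"))   # uplink-public -> FCS -> FCR
print(edge.path_for("private"))  # uplink-private -> BCS -> BCR
```

The model only captures which customer switch and router each uplink reaches; it is a reading aid for the diagram, not a configuration artifact.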
For more detail, refer to the standard reference architectures in the IBM Cloud docs.
References:
a) https://cloud.ibm.com/docs/vmwaresolutions?topic=vmwaresolutions-managing_imi
b) https://www.vmware.com/in/cloud-solutions/ibm-cloud.html