Best Practices Guide for SSP Large Cluster Configuration

By Rob Gjertsen posted Thu June 25, 2020 06:34 PM

  

1. Introduction

IBM® PowerVM Shared Storage Pools (SSP) provide a pool of storage that is shared among Virtual I/O Servers (VIOS) on one or more host servers. An SSP is built on a cluster of nodes and a distributed data object repository, where each node is a VIOS instance that is a member of the SSP cluster. An SSP cluster requires a minimum of two nodes. As the number of nodes in the cluster grows, it becomes increasingly important to ensure that the storage exposed by the SSP performs well for the larger number of systems in the cluster. Based on our experiment, we list best practices for achieving optimum performance on a 24-node SSP cluster hosting 2000 virtual machines.

2. Hardware Environment

Host Servers - Power Servers:

The 24-node cluster was built on multiple IBM Power servers, and a total of 2000 logical partitions were spread across these nodes.

  • Spread nodes across multiple host (Power) servers. Concentrating nodes or clients on too few servers can cause poor performance. On mid-range systems (32 cores or fewer) we set up 2 nodes per host server, and on high-end systems (more than 64 cores) we set up 4 nodes per host server. When possible, we spread logical partitions (virtual machines) equally across all host servers.
  • For Node Sizing and system configuration details refer to 
  • For setting up 2000 logical partitions spread across 24 nodes and multiple host servers, we used four HMCs and spread the host servers equally among them. If a host server is planned to have 500 or more logical partitions, it is best to have an HMC managing that server exclusively (see the sizing sketch following the note below).
  • If using NovaLink instead of HMC for virtualization management, NovaLink can support up to 1000 logical partitions per host server. For information on NovaLink partition requirements, refer to http://www.ibm.com/support/knowledgecenter/POWER8/p8eig/p8eig_requirements.htm
  • All the nodes that are part of the cluster should be configured with a Shared Ethernet Adapter (SEA) or SR-IOV (PowerVC VM deployment support with SR-IOV is available starting with the PowerVC 1.3.2 release) so that clients can easily connect to the nodes.
  • All nodes should have a minimum of 8 Gbps Fibre Channel (FC) adapters connected to the switches.
  • Ensure that each node has enough FC adapters, and enough FC ports overall, to satisfy the anticipated IOPS and GB/s that the node must supply to all of the clients expected to run at the same time.
  • For the best CPU-memory affinity across all the nodes and logical partitions on each host server, it is recommended to clear all partition data on the Power server (“Server -> Configuration -> Manage partition data -> Initialize” from the HMC) before creating logical partitions on the host server. This allows partitions to be placed optimally based on the partition configuration and the hardware topology of the system.

Note: Initializing partition data wipes out all of the logical partitions present on the host server. If there are logical partitions that cannot be deleted, the HMC command “chhwres” should instead be used to remove the assigned resources from each of those logical partitions. Refer to the HMC command-line documentation for “chhwres”.

https://www.ibm.com/support/knowledgecenter//8286-42A/p8edm/p8edm_kickoff.htm
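
To make the layout planning above concrete, the following Python sketch (illustrative only, not an IBM tool) spreads VIOS nodes and client partitions across host servers using the 2-nodes-per-mid-range and 4-nodes-per-high-end rule of thumb from this article, and flags servers that would benefit from a dedicated HMC. The server names, core counts, and VM counts are hypothetical.

# Illustrative planning sketch (not a product tool): distribute SSP nodes and
# client LPARs across host servers per the guidance above. All names and
# thresholds here are assumptions taken from this article, not product limits.

MIDRANGE_NODES_PER_SERVER = 2    # systems with 32 cores or fewer
HIGHEND_NODES_PER_SERVER = 4     # systems with more than 64 cores
LPARS_PER_HMC_GUIDELINE = 500    # dedicate an HMC when a server hosts this many LPARs

def plan(servers, total_vms):
    """servers: list of (name, cores) tuples; prints a per-server layout."""
    nodes = {name: (HIGHEND_NODES_PER_SERVER if cores > 64 else MIDRANGE_NODES_PER_SERVER)
             for name, cores in servers}
    vms_per_server = total_vms // len(servers)   # spread client VMs evenly
    for name, cores in servers:
        dedicated_hmc = vms_per_server >= LPARS_PER_HMC_GUIDELINE
        print(f"{name}: {cores} cores, {nodes[name]} VIOS nodes, "
              f"~{vms_per_server} client LPARs, dedicated HMC recommended: {dedicated_hmc}")
    print(f"Total SSP nodes: {sum(nodes.values())}")

# Hypothetical mix of mid-range and high-end servers hosting 2000 client VMs.
plan([("midrange1", 32), ("midrange2", 32), ("highend1", 96), ("highend2", 96)],
     total_vms=2000)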

2.1 Storage, Shared Storage Pool & Storage Switch Setup

Care must be taken to configure the storage systems and fabric for optimal performance for both the virtual machine images and the virtual machine data/additional storage space. Shared storage pool tiers provide a means to control virtual storage allocation within the pool to meet the unique requirements you may have for this data. Rather than treating the physical disks as a homogeneous resource, storage tiers allow the administrator to classify and group the disks. This can be useful for classifying storage based on performance characteristics so that, for example, you could create a fast tier, a medium tier, and a slow tier.

Refer to the shared storage pool documentation at the links below for setting up the pool.

Readme for VIOS 2.2.5.10 is at

Release notes for VIOS release 2.2.5.10 is at

Read the following Redbook for more details on SSP:

http://www.redbooks.ibm.com/redbooks/pdfs/sg247590.pdf

The following blog covers some of the FAQs on SSP:
https://community.ibm.com/community/user/power/blogs/rob-gjertsen1/2020/06/25/powervm-shared-storage-pool-enhancements?CommunityKey=71e6bb8a-5b34-44da-be8b-277834a183b0&tab=recentcommunityblogsdashboard
    
 

Recommendations on setting up storage for the cluster.

  • The FC adapter bandwidth (GB/s) and the storage bandwidth (GB/s) should sustain the total bandwidth required by all the clients on a given node, and the storage IOPS should cover the IO requirements of all the clients on the node (see the FC sizing sketch below).
  • A 24-node SSP cluster requires SSD flash in the system tier. Using flash storage for the system tier in large clusters helps meet the overall IOPS requirements through quick metadata access. Calculate your total IOPS requirements to determine your storage system configuration. In our experiment with the 24-node, 2000 virtual machine cluster environment, we controlled client-side IOPS to stay within the storage system limit.
  • Spread storage LUNs across multiple physical drives so that the physical IOs are spread across multiple drives.
  • Ensure that a storage switch zone is not loaded with too many initiator ports per target port. In our environment we configured 3 initiator ports for each target port.
  • Adapter firmware can play a role in IO performance; always check with the vendor for the latest firmware release and update to the latest level.

Connect the storage switches with the trunking license configured to get more IO bandwidth across the switches. For example, if the switches are connected without a trunking license, IO between the switches flows over a single inter-switch link, which can choke the IO path.
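
As a rough way to apply the bandwidth and IOPS recommendations above, the following sketch estimates how many FC ports a node needs to serve all of its clients. The per-port throughput, per-port IOPS, and per-client load figures are placeholder assumptions, not vendor specifications; substitute your adapter and storage vendor's numbers.

# Rough FC sizing sketch (illustrative only): estimate how many FC ports a node
# needs so that adapter bandwidth and storage IOPS cover all of its clients.

import math

def fc_ports_needed(clients_per_node, iops_per_client, mbps_per_client,
                    port_gbps=8, iops_per_port=50_000):
    # 8 Gb FC moves roughly 800 MB/s of payload per port (approximation).
    port_mbps = port_gbps * 100
    need_for_bw = math.ceil(clients_per_node * mbps_per_client / port_mbps)
    need_for_iops = math.ceil(clients_per_node * iops_per_client / iops_per_port)
    return max(need_for_bw, need_for_iops)

# Hypothetical node hosting ~84 clients (2000 VMs spread over 24 nodes), each
# driving 500 IOPS and 10 MB/s on average; all of these figures are assumptions.
print(fc_ports_needed(clients_per_node=84, iops_per_client=500, mbps_per_client=10))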

Recommendations on Shared Storage Pools

  • Pools should be configured with multiple tiers, at a minimum one system tier and one user tier. However, if the number of virtual machines using the tiers is large and/or they perform heavy IO, it is recommended to create additional user tiers so that each user tier is accessed by only a small group of nodes. Configuring fewer nodes per tier helps achieve optimum performance compared to all virtual machines accessing storage from one single user tier. For the 24-node setup, we configured six user tiers to provide data disks to all virtual machines for the workloads. Since all the clients were expected to have a high IO load, only three nodes shared each user tier (see the grouping sketch after this list).
  • Make a judicious decision on the type of logical units (LUs) that are created; while a thin LU helps with better utilization of storage space, it comes with a performance penalty. When performance is the primary concern, it is better to configure a thick LU.
  • Create tiers with an optimal size up front. Adding new physical volumes (PVs) later triggers re-striping, which consumes IO bandwidth and moves data during active usage.
  • Create tiers with multiple LUNs on storage; this helps spread IOs across the LUNs.
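
The following sketch illustrates the grouping idea above: assigning nodes to user tiers in small, fixed-size groups so that each user tier is accessed by only a few nodes. The node and tier names are hypothetical; the actual tiers are created with the VIOS shared storage pool commands described in the documentation linked earlier.

# Illustrative sketch: assign SSP nodes to user tiers in small groups so that
# each user tier is accessed by only a few nodes (names are hypothetical).

def group_nodes_into_tiers(nodes, nodes_per_tier):
    tiers = {}
    for i in range(0, len(nodes), nodes_per_tier):
        tiers[f"usertier{i // nodes_per_tier + 1}"] = nodes[i:i + nodes_per_tier]
    return tiers

nodes = [f"vios{n:02d}" for n in range(1, 19)]       # hypothetical node names
for tier, members in group_nodes_into_tiers(nodes, nodes_per_tier=3).items():
    print(tier, "->", ", ".join(members))            # six user tiers, three nodes each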

2.2 Sample Storage - FC Switch - Node Connection on 24 Node Cluster

In our test environment, storage consisted of an IBM FlashSystem® 900 (FS900) and a FlashSystem 840 (FS840). Two Brocade® 48-port FC storage switches and 10 POWER7 host servers were used.

 

In the figures below, the following naming conventions are used to describe this environment:

  • SxVy – where x is an identifier for the host server and y is an identifier for the Node.
  • DTx – where x is an identifier for a target port connection from FS900 flash storage (used for data disks on clients).
  • STx – where x is an identifier for a System Tier from FS840 flash storage.

 

 

  • Four zones were created on each storage switch, where each zone contained three initiator ports from nodes on host servers and one target port from the user tier storage, to distribute the load equally across the target ports (see the zoning sketch below).
  • Two zones were created on each storage switch, where each zone contained six initiator ports from nodes on host servers and one target port from the system tier storage.
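
The zone layout above can also be expressed programmatically. The sketch below groups initiator ports against target ports at a fixed 3:1 ratio, matching the user tier zoning described above; the port names are hypothetical.

# Illustrative zoning sketch: group initiator ports with target ports so that
# each zone has a fixed initiator-to-target ratio (port names are hypothetical).

def build_zones(initiators, targets, initiators_per_zone):
    zones = []
    for i, target in enumerate(targets):
        start = i * initiators_per_zone
        zones.append({"target": target,
                      "initiators": initiators[start:start + initiators_per_zone]})
    return zones

# One switch: 12 node initiator ports zoned 3:1 against 4 user-tier target ports.
node_ports = [f"S{s}V{v}_fc0" for s in range(1, 7) for v in range(1, 3)]
data_targets = [f"DT{t}" for t in range(1, 5)]
for zone in build_zones(node_ports, data_targets, initiators_per_zone=3):
    print(zone["target"], "<-", ", ".join(zone["initiators"]))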

3. PowerVC Environment

3.1 PowerVC Ecosystem

Power Virtualization Center (PowerVC) is our advanced virtualization management and cloud management offering that simplifies creating and managing virtual machines on IBM Power Systems™ servers using PowerVM®. We used PowerVC to deploy 2000 VMs across the 24 nodes of the SSP cluster. This test was performed with the host servers connected to HMCs and PowerVC managing the servers through the HMCs. PowerVC can also manage NovaLink-managed hosts.

 

Key Recommendations:

  • Adhere to the minimum hardware requirements for the PowerVC controller.  Additional hardware resources (especially memory) may be beneficial during periods of heavy activity.

https://www.ibm.com/support/knowledgecenter/SSXK2N_1.3.2/com.ibm.powervc.standard.help.doc/powervc_hwandsw_reqs_hmc.html

  • For information on installing and configuring PowerVC, see the Knowledge Center documentation and the "IBM PowerVC Version 1.3.2 Introduction and Configuration including IBM Cloud PowerVC Manager" RedBook:

http://www.redbooks.ibm.com/abstracts/sg248199.html

  • Use the PowerVC "Verify Environment" function to identify any issues in the configuration before deploying VMs.
  • PowerVC currently supports only a single tier of SSP storage (a current limitation). If you are using multiple tiers, use PowerVC for the initial VM deployment with the boot disks in the default tier, and then manually attach any additional volumes that are required from other storage tiers.
  • Before capturing your base VM image, be sure that cloud-init is installed and configured appropriately, and that RMC (rsct) is installed.
  • Ensure that each node is configured with a sufficient number of virtual adapters for the number of client VMs it will be hosting. A general guideline is at least 3 virtual adapters per client (see the adapter sizing sketch after this list).
  • All HMCs connected to PowerVC should be on the latest HMC software level.
  • All nodes should have SEAs configured on them.
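
As a quick check for the virtual adapter guideline above, the sketch below estimates how many virtual adapters a node needs for the client VMs it hosts. The headroom factor is an assumption for illustration, not an IBM recommendation.

# Illustrative check: estimate virtual adapters needed on a node from the number
# of client VMs it hosts, using the ~3-adapters-per-client guideline above.

def virtual_adapters_needed(clients_on_node, adapters_per_client=3, headroom=1.2):
    # headroom leaves spare slots for maintenance and growth (assumption).
    return int(clients_on_node * adapters_per_client * headroom)

# Hypothetical node hosting ~84 clients (2000 VMs spread over 24 nodes).
print(virtual_adapters_needed(84))   # -> 302; configure the node's maximum
                                     #    virtual adapters above this value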

Contacting the PowerVM Team

Have questions for the PowerVM team or want to learn more? Follow our discussion group on LinkedIn (IBM PowerVM) or IBM Community Discussions.
