IBM Spectrum Scale Sharing Nothing Cluster Performance Tuning

By Archive User posted Fri November 24, 2017 08:23 AM

  
The purpose of this guide is to provide performance tuning procedures for IBM Spectrum Scale Shared Nothing architecture clusters, including File Placement Optimizer (FPO) clusters. This guide does not include an overall description of the IBM Spectrum Scale product or instructions for deploying IBM Spectrum Scale. Before reading this guide, see the IBM Spectrum Scale 4.2 Advanced Administration Guide in the IBM Spectrum Scale Knowledge Center, and the IBM Spectrum Scale Hadoop wiki, for more information about IBM Spectrum Scale and the data analytics solution using IBM Spectrum Scale FPO.

Some tuning options cannot be applied after a file system is created. Therefore, it is recommended that you read through this entire guide before creating any file system.

This guide might be updated periodically. Therefore, see GPFS developerWorks Wiki Analytics Reference sites for the latest version of this guide.


Operating system configuration and tuning
Perform the following steps to configure and tune a Linux system:

Step 1: deadline disk scheduler
Change all the disks defined to IBM Spectrum Scale to use the 'deadline' I/O scheduler ('cfq' is the default for some distributions, such as RHEL 6).
For each block device defined to IBM Spectrum Scale, run the following command to enable the deadline scheduler:
echo "deadline" > /sys/block/<device>/queue/scheduler

Changes made in this manner (echoing values into sysfs) do not persist across reboots. To make these changes permanent, either run a script at every boot or, generally preferred, create a udev rule.

The following sample script sets the deadline scheduler for all disks in the cluster that are defined to IBM Spectrum Scale (this script must be run on a node with passwordless access to all the other nodes):

#!/bin/bash
# Set the deadline scheduler on every NSD device in the cluster.
/usr/lpp/mmfs/bin/mmlsnsd -X | /bin/awk '{ print $3 " " $5 }' | /bin/grep dev |
while read device node ; do
    device=$(echo $device | /bin/sed 's/\/dev\///')
    /usr/lpp/mmfs/bin/mmdsh -N $node "echo deadline > /sys/block/$device/queue/scheduler"
done


As previously stated, changes made by echoing to sysfs files (as in the script above) take effect immediately but do not persist across reboots. One approach to making such changes permanent is a udev rule; the following example rule forces all block devices to use the deadline scheduler after a reboot. To enable this rule, create the file '/etc/udev/rules.d/99-hdd.rules' with this content:
ACTION=="add|change", SUBSYSTEM=="block", ATTR{device/model}=="*", ATTR{queue/scheduler}="deadline"
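After rebooting (or after reloading the rules with 'udevadm control --reload-rules' followed by 'udevadm trigger'), you can confirm which scheduler is active: the kernel brackets the scheduler in use, e.g. "noop [deadline] cfq". A minimal verification sketch (the active_sched helper name is our own, not part of the guide):

```shell
#!/bin/bash
# The kernel marks the active scheduler with brackets, e.g. "noop [deadline] cfq".
# active_sched reads such a line on stdin and prints just the bracketed name.
active_sched() {
    sed -n 's/.*\[\(.*\)\].*/\1/p'
}

# Report the active scheduler for every local block device that has one:
for f in /sys/block/*/queue/scheduler; do
    [ -r "$f" ] || continue
    printf '%s: %s\n' "$f" "$(active_sched < "$f")"
done
```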

The next step gives an example of how to create udev rules that apply only to the devices used by IBM Spectrum Scale.
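One way to build such device-specific rules is to generate them from the NSD list. The following is only a sketch (the rule file name and the nsd_rule helper are our own choices, not part of the guide):

```shell
#!/bin/bash
# Sketch: emit udev rules that pin the deadline scheduler only to the NSD
# devices reported by mmlsnsd -X (field 3 is the device path, e.g. /dev/sdb).

# nsd_rule turns a kernel device name like "sdb" into one udev rule line.
nsd_rule() {
    printf 'ACTION=="add|change", KERNEL=="%s", ATTR{queue/scheduler}="deadline"\n' "$1"
}

# Only attempt generation where IBM Spectrum Scale is actually installed.
if [ -x /usr/lpp/mmfs/bin/mmlsnsd ]; then
    /usr/lpp/mmfs/bin/mmlsnsd -X | /bin/awk '{ print $3 }' | /bin/grep '^/dev' |
    while read -r device; do
        nsd_rule "$(basename "$device")"
    done > /etc/udev/rules.d/99-scale-nsd.rules    # hypothetical file name
fi
```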

Step 2: disk IO parameter change
To further tune the block devices used by IBM Spectrum Scale, run the following command for each block device from the console on each node:
echo 16384 > /sys/block/<device>/queue/max_sectors_kb

These block device tuning settings must be large enough for SAS/SATA disks. Note that the value written to /sys/block/<device>/queue/max_sectors_kb cannot exceed the device's hardware limit reported in /sys/block/<device>/queue/max_hw_sectors_kb.
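The per-device change can be scripted. The sketch below (the clamp helper is our own) caps the requested value at the device's hardware limit, since the kernel rejects max_sectors_kb values larger than max_hw_sectors_kb:

```shell
#!/bin/bash
# clamp prints the smaller of the desired value and the hardware maximum.
clamp() {
    if [ "$1" -gt "$2" ]; then echo "$2"; else echo "$1"; fi
}

desired=16384
for q in /sys/block/*/queue; do
    # Skip devices we cannot tune (e.g. when not running as root).
    [ -w "$q/max_sectors_kb" ] || continue
    hw=$(cat "$q/max_hw_sectors_kb")
    clamp "$desired" "$hw" > "$q/max_sectors_kb"
done
```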
#SpectrumScaleSharedNothingCluster
#Workloadandresourceoptimization
#FPO
#hadoopworkload
#Softwaredefinedstorage
#hbase
#Hadoopperformancetunning
#hive
#performancetuning
#BigDataandAnalytics
#sparkworkload
#Softwaredefinedinfrastructure