
IBM Storage Scale CES NFS - 5.2.0 Performance evaluation

By Mara Miranda Bautista posted Thu May 23, 2024 09:54 AM


NFS protocol support is an important cornerstone of connectivity to the global data platform built with Storage Scale. It enables users to consolidate various sources of data efficiently in one global namespace. It provides a unified data management solution that not only enables efficient space utilization but also avoids unnecessary data movement caused solely by differing access methods.
NFS support is based on the Ganesha NFS server, which operates in user space. Storage Scale 5.2.0 ships a new Ganesha version, 5.7. During the 5.2.0 NFS performance regression evaluation, we observed a significant IOPS improvement over the previous release in all supported NFS protocol versions with the SPECSFS SWBUILD benchmark.

The evaluation compares GPFS 5.1.9 + Ganesha 4.3 (RPM packages named 4.3-ibm073.09) against GPFS 5.2.0 + Ganesha 5.7 (RPM packages named 5.7-ibm017.00). The environment used is described in the following figure. 
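
As a quick sanity check, the installed Ganesha level can be confirmed on a protocol node; a minimal sketch (exact package names may vary by platform and release):

    rpm -qa | grep -i ganesha    # e.g. gpfs.nfs-ganesha-5.7-ibm017.00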

Relevant GPFS configuration parameters for the evaluation are:
maxFilesToCache set to 8M
maxStatCache set to 10M
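
For reference, a minimal sketch of applying these values with mmchconfig, assuming they are targeted at the protocol nodes via the built-in cesNodes node class (both parameters take effect after the GPFS daemon is restarted on the affected nodes):

    mmchconfig maxFilesToCache=8M,maxStatCache=10M -N cesNodes
    mmlsconfig maxFilesToCache    # verify
    mmlsconfig maxStatCache       # verify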

The evaluation was done with NFSv3, NFSv4.0, and NFSv4.1. Improvements were observed with all three protocol versions, but most noticeably with NFSv4.1. The main contributor to this improvement is the NFS-Ganesha Meta Data Cache component (aka MDCACHE), which was revised in Ganesha 5. MDCACHE provides the basic file-handle cache as well as attribute and directory-entry caching. Reference: https://github.com/nfs-ganesha/nfs-ganesha/wiki/Stacked-FSAL#mdcache-fsal
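
For illustration only, upstream NFS-Ganesha exposes MDCACHE tuning as a block in ganesha.conf; the sketch below shows the upstream syntax with its Entries_HWMark parameter. In CES deployments the Ganesha configuration is generated and managed by Storage Scale, so treat this as an illustration of the component, not a recommended manual change:

    MDCACHE {
        # upper bound on cached metadata entries (upstream default: 100000)
        Entries_HWMark = 100000;
    }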

The software build (SWBUILD) workload type is a classic metadata-intensive build workload. Conceptually, these tests are similar to running Unix ‘make’ against several tens of thousands of files.


This workload consumes a lot of CPU and memory. Runs of this benchmark usually follow a similar curve. The overall procedure consists of applying a controlled amount of load and increasing it until the system shows signs of overload (average response time exceeding 10 ms). The following graphic shows a curve generated by SWBUILD increasing-load runs; when the curve bends back, the system is overloaded (such a run is reported as INVALID_RUN).

[Graphic: SWBUILD increasing-load curve; the curve bends back when the system is overloaded]
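
For context, the load ramp is driven from the benchmark configuration file. A minimal sketch, assuming the standard SPEC SFS 2014 sfs_rc parameter names (the values here are illustrative, not the ones used in this evaluation):

    BENCHMARK=SWBUILD   # workload type
    LOAD=5              # initial load (in SWBUILD business-metric units, i.e. builds)
    INCR_LOAD=5         # load increment per data point
    NUM_RUNS=10         # number of load points in the ramp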

Before presenting the evaluation numbers, it is important to mention that the results shown in this blog are based on the environment described above; a different GPFS cluster setup may yield different numbers. In other words, these results do not characterize IBM Storage Scale in general, but the combination of IBM Storage Scale, the hardware used in this specific environment, and the selected GPFS configuration parameters.

With NFSv3, the maximum IOPS for SWBUILD when running GPFS 5.1.9 + 4.3-ibm073.09 (blue line in the graphic) was 45,000 IOPS (around this point runs started being invalid); with GPFS 5.2.0 + 5.7-ibm017.00 (red line in the graphic) the maximum is 67,500 IOPS. That is a 50% performance improvement.
With NFSv4.0, the maximum IOPS for SWBUILD when running GPFS 5.1.9 + 4.3-ibm073.09 (blue line in the graphic) was 17,500 IOPS (around this point runs started being invalid); with GPFS 5.2.0 + 5.7-ibm017.00 (red line in the graphic) the maximum is 27,500 IOPS. That is a 57% improvement.
With NFSv4.1, the maximum IOPS for SWBUILD when running GPFS 5.1.9 + 4.3-ibm073.09 (blue line in the graphic) was 25,000 IOPS (around this point runs started being invalid); with GPFS 5.2.0 + 5.7-ibm017.00 (red line in the graphic) the maximum is 70,000 IOPS. That is a 180% performance improvement.
As shown in the following graphic, it is also remarkable that NFSv4.1 (green line in the graphic) now performs on par with or better than NFSv3 (orange line in the graphic).

[Graphic: SWBUILD load curves, NFSv4.1 (green) vs. NFSv3 (orange)]

Starting with IBM Storage Scale 5.2.0, both NFSv4.0 and NFSv4.1 are enabled by default, i.e. MINOR_VERSIONS=0,1. Clusters that are upgraded from an earlier version retain the previous default of enabling only NFSv4.0. To modify the minor versions, use the mmnfs config change MINOR_VERSIONS=<minorversion> command. Valid values are 0, 1, or 0,1. For more information, see the mmnfs command.
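
For example, to enable both minor versions and verify the active setting:

    mmnfs config change MINOR_VERSIONS=0,1
    mmnfs config list | grep -i MINOR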