File and Object Storage

IBM Spectrum Scale DAS 5.1.3.1 performance evaluation using COSBench

By SILVANA DE GYVES AVILA posted Fri May 27, 2022 10:36 AM

  

The AWS S3 API [1] established a de facto standard for processing unstructured data as objects. More and more customers integrate the S3 object access protocol into their workflows to acquire, process, and manage unstructured data. To better support these evolving workflow requirements, IBM modernized Spectrum Scale’s built-in support for S3 access to data stored in Spectrum Scale.
 
Spectrum Scale Data Access Services (DAS) [2] is an extension of IBM Spectrum Scale Container Native (CNSA) [3], which is a containerized version of IBM Spectrum Scale. With this extension, customers can access data stored in Spectrum Scale filesystems as objects. DAS S3 access is based on NooBaa [4] and Red Hat OpenShift Data Foundation (ODF) [5].
 
To evaluate the performance of DAS, an environment with a dedicated DAS cluster was set up and COSBench was used to perform the measurements. COSBench is an open-source benchmarking tool developed by Intel to assess the performance of cloud object storage services [6]. It is considered the de facto standard for object storage evaluations and can be used to perform write and read tests with different workload characteristics.
 
The purpose of this blog entry is to describe the performance evaluation executed on a bare-metal DAS environment, including high-level configuration details, performance tunings applied, and results.
 

Benchmark Environment

The environment configured for the performance evaluation is illustrated in Figure 1. It has five main components:
  • COSBench cluster: 4 COSBench nodes, with 12 drivers and 1 controller, running on RHEL 8.3.
  • Shared data network: shared 100Gb Ethernet network for S3 access.
  • IBM Spectrum Scale DAS cluster: dedicated compact Red Hat OpenShift 4.9 cluster running on bare-metal x86_64 servers. Compact Red Hat OpenShift clusters are three-node clusters in which each Red Hat OpenShift node acts as a combined master and worker node. These nodes are referred to as Data Access Nodes (DAN).
  • Dedicated data network: dedicated 200Gb Ethernet network for Spectrum Scale and OpenShift.
  • Storage cluster: dedicated ESS 3200 [7].
Fig. 1. Benchmark environment.
 
The software levels used for the evaluation are listed as follows:
  • Spectrum Scale Storage Cluster: Spectrum Scale 5.1.3.1
  • Spectrum Scale CNSA Cluster: Spectrum Scale CNSA 5.1.3.1
  • Spectrum Scale CSI: Spectrum Scale CSI 2.5.1
  • Spectrum Scale DAS: Spectrum Scale DAS 5.1.3.1
  • OpenShift: OCP 4.9.31
  • OpenShift Storage: ODF 4.9.7-2
  • COSBench: 0.4.2.c4
 

Performance Tuning
Performance engineering was done to optimize the benchmark environment. It is important to highlight that performance depends on the underlying architecture (hardware, software, configuration) and the workload. The outcome of this work was a set of parameters adjusted in different layers of the setup.
 
IBM Spectrum Scale enables customers to change the Spectrum Scale configuration parameters. For this test, maxTcpConnsPerNodeConn [8] was set to 8 in both Spectrum Scale installations (storage cluster and DAS cluster).
 
In the storage cluster, the following command was executed, then Spectrum Scale was restarted.

  mmchconfig maxTcpConnsPerNodeConn=8

 
In the DAS cluster, the cluster resource was modified by adding the new configuration parameter to the cluster profile.

  spec:
    daemon:
      clusterProfile:
        maxTcpConnsPerNodeConn: "8"

 
Spectrum Scale DAS provides the mmdas CLI command that enables customers to adjust the configuration of the DAS service. For this test, the scale factor was set to 12. With this, DAS creates a total of 36 NooBaa endpoints (12 per DAN node).
 
From a management node, the following command was executed to perform the adjustments.

  mmdas service update s3 --scaleFactor=12
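The resulting endpoint count is simply the scale factor multiplied by the number of DAN nodes. A minimal sketch of that check (the helper name is ours, not part of the mmdas CLI):

```python
def expected_endpoints(scale_factor: int, dan_nodes: int) -> int:
    """DAS starts `scale_factor` NooBaa endpoints on each Data Access Node."""
    return scale_factor * dan_nodes

# Settings used in this evaluation: scale factor 12 on a 3-node compact cluster.
print(expected_endpoints(12, 3))  # -> 36 endpoints, i.e. 12 per DAN node
```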

 
 
Tests Description
A COSBench cluster was defined with 12 drivers, distributed across 4 physical nodes, using 3 IP addresses to communicate with the DAS cluster. Measurements were gathered with the following configuration: 

  • Buckets: 10 
  • Objects: 100 (evenly distributed across the 10 buckets)
  • Test duration: 5 minutes per work-stage
  • Workers: 1, 8, 32, 64, 128, 256, 512
  • Object size: 1GB 
  • Operation: 100% read, 100% write
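In COSBench, a configuration like the one above translates into a workload XML with one work-stage per worker count. The sketch below generates such a definition for the read tests; the endpoint URL and credentials are hypothetical placeholders (not values from this environment), and the output follows COSBench's general workload schema rather than the exact file used in the evaluation.

```python
# Sketch: build a COSBench read-workload definition matching the test matrix.
# Access key, secret key, and endpoint are placeholders, not real values.
WORKERS = [1, 8, 32, 64, 128, 256, 512]

def read_workload(buckets=10, objects_per_bucket=10, runtime_s=300):
    # One 5-minute read work-stage per worker count, reading the
    # 100 x 1GB objects spread evenly across the 10 buckets.
    stages = "\n".join(
        f'    <workstage name="read-{w}w">\n'
        f'      <work type="read" workers="{w}" runtime="{runtime_s}">\n'
        f'        <operation type="read" ratio="100" '
        f'config="containers=u(1,{buckets});objects=u(1,{objects_per_bucket})"/>\n'
        f'      </work>\n'
        f'    </workstage>'
        for w in WORKERS
    )
    return (
        '<workload name="das-read-1gb" config="">\n'
        '  <storage type="s3" config="accesskey=KEY;secretkey=SECRET;'
        'endpoint=http://s3-endpoint.example:80"/>\n'
        '  <workflow>\n'
        f'{stages}\n'
        '  </workflow>\n'
        '</workload>'
    )

print(read_workload())
```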

 
During test execution, bandwidth and success ratio were monitored for each workload using the COSBench Web console. In addition, OpenShift Grafana dashboards were used to monitor resource utilization from the NooBaa endpoints’ perspective (aggregated DAN node resources).
 

Performance Results
For the read tests with 1GB objects, the maximum bandwidth measured was 60.08 GB/s with a success ratio of 100%, as illustrated in Figure 2. For the write test, illustrated in Figure 3, the maximum bandwidth measured was 23.15 GB/s, also with a success ratio of 100%.
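As a rough back-of-the-envelope breakdown (an illustration derived from the aggregate numbers, not a measured per-endpoint figure), dividing the aggregate bandwidth by the 36 NooBaa endpoints gives the average load each endpoint sustained:

```python
ENDPOINTS = 36  # 3 DAN nodes x scale factor 12

read_gbps = 60.08   # aggregate read bandwidth, GB/s
write_gbps = 23.15  # aggregate write bandwidth, GB/s

print(f"avg per-endpoint read:  {read_gbps / ENDPOINTS:.2f} GB/s")   # ~1.67 GB/s
print(f"avg per-endpoint write: {write_gbps / ENDPOINTS:.2f} GB/s")  # ~0.64 GB/s
```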

Fig. 2. Read performance results. (a) COSBench Web console summary. (b) Bandwidth per work-stage with increasing number of COSBench workers.

Fig. 3. Write performance results. (a) COSBench Web console summary. (b) Bandwidth per work-stage with increasing number of COSBench workers.

 
Resource utilization of the NooBaa endpoints while running the read test is illustrated in Figures 4 and 5; bandwidth is shown in Figure 6. CPU usage and receive/transmit bandwidth vary with the number of COSBench workers used during the test (1, 8, 32, 64, 128, 256, 512). From 19:36 to 19:58, the period corresponding to the work-stages with 1, 8, 32, and 64 workers, CPU usage and receive/transmit bandwidth increase; from 19:58 to the end of the test, these values remain in a similar range. This behavior maps to the measured results, where performance increases over the first work-stages of the test and the last ones show similar results. Memory utilization remains constant throughout the test.

Fig. 4. Grafana compute resources pods dashboard - CPU usage.

Fig. 5. Grafana compute resources pods dashboard - memory usage.

Fig. 6. Grafana compute resources pods dashboard - bandwidth.

 

Resource utilization of the NooBaa endpoints during the write test is illustrated in Figures 7 and 8; bandwidth is shown in Figure 9. CPU usage and receive/transmit bandwidth changed with the number of workers used during the test (1, 8, 32, 64, 128, 256, 512): they increased at a high rate until 17:50, then remained in a similar range until the end of the test. This behavior maps to the measured performance results, where the last two work-stages show closer results than the previous ones. As in the read test, memory utilization remains constant throughout the test.

Fig. 7. Grafana compute resources pods dashboard - CPU usage.

Fig. 8. Grafana compute resources pods dashboard - memory usage.

Fig. 9. Grafana compute resources pods dashboard - bandwidth.

 

Conclusion and Next Steps

This blog entry described a set of tests executed to evaluate the performance of IBM Spectrum Scale Data Access Services (DAS), using COSBench and large objects (1GB). It also described the tunings applied to improve performance. With the current cluster setup, the bandwidth measured for DAS S3 is 60 GB/s for read operations and 23 GB/s for writes. Performance engineering work will continue, along with the execution of additional tests. In future entries, we will describe performance evaluations using COSBench with different workload characteristics [9] as well as other benchmarking tools.
 
 

References

[1] Amazon S3 REST API Introduction. https://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html
[2] IBM Spectrum Scale Data Access Services. https://www.ibm.com/docs/en/scalecontainernative?topic=513-spectrum-scale-data-access-services
[3] IBM Spectrum Scale Container Native. https://www.ibm.com/docs/en/scalecontainernative
[4] NooBaa Software Defined Storage. https://www.noobaa.io/
[5] Red Hat OpenShift Data Foundation. https://www.redhat.com/en/technologies/cloud-computing/openshift-data-foundation
[6] COSBench - Cloud Object Storage Benchmark. https://github.com/intel-cloud/cosbench
[7] IBM Elastic Storage System 3200 data sheet. https://www.ibm.com/downloads/cas/MQ4MY4WV
[8] IBM Spectrum Scale: multi-connection over TCP (MCOT): tuning may be required. https://www.ibm.com/support/pages/node/6446651
[9] IBM Data Access Services 5.1.6 read performance evaluation of small objects using COSBench. https://community.ibm.com/community/user/storage/blogs/silvana-de-gyves-avila1/2023/01/11/ibm-data-access-services-516-read-performance-eval

