Originally posted by: Nils Haustein
By Andre Gaschler and Nils Haustein
Elastic storage thrives on its use cases. One use case has been around for years: running TSM servers on GPFS storage.
We would like to elaborate on this use case and demonstrate some outstanding performance test results.
For those who don’t know these products: IBM® Tivoli® Storage Manager (TSM) products provide backup, archive, recovery, space management, database and application protection, and bare machine recovery and disaster recovery capabilities. They can help protect a wide range of systems, including virtual machines, file servers, email, databases, enterprise resource planning (ERP) systems, mainframes and desktops, from a single administration interface. TSM is designed to protect critical data that requires continuous availability and to deliver high-efficiency backup of key business applications with virtually no backup-related performance impact.
The IBM General Parallel File System (GPFS™) is a high-performance cluster file system optimized to provide concurrent high-speed file access to applications executing on multiple nodes in the cluster.
In such an environment, one or more TSM servers run on GPFS nodes and use GPFS file systems for their storage pools and databases. This simplifies management because all TSM servers are administered within one GPFS cluster. It also provides scale-out capabilities, because adding nodes or capacity to a GPFS cluster is straightforward. Last but not least, the outstanding I/O characteristics of GPFS deliver excellent performance for the TSM data (storage pools and database).
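To illustrate the setup described above, the following is a minimal, hypothetical sketch of pointing a TSM server at a GPFS file system for its storage pool. The mount point `/gpfs/tsmpool`, the administrator credentials, and the device class and pool names are illustrative assumptions, not details from the test setup.

```shell
# Hypothetical sketch: define a sequential-access FILE device class whose
# volumes live on a GPFS file system, then a storage pool that uses it.
# Names, path, and sizes are illustrative only.
dsmadmc -id=admin -password=secret \
  "define devclass gpfsclass devtype=file directory=/gpfs/tsmpool maxcapacity=50G mountlimit=128"
dsmadmc -id=admin -password=secret \
  "define stgpool gpfspool gpfsclass maxscratch=1000"
```

Because GPFS presents a standard POSIX file system, the TSM server needs no special configuration beyond using the GPFS mount point as the device class directory.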
Recognizing the symbiosis of TSM and GPFS, we conducted a series of tests with TSM on the IBM System x GPFS Storage Server (GSS). The GSS system provides standard GPFS file systems configured on GPFS Native RAID (GNR) devices. The TSM server software runs on servers connected to the GSS file system via high-speed network connections. The tests focused on interoperability and performance. We would now like to share the outstanding performance test results.
For this test we borrowed a GSS system from our IBM Research colleagues in Almaden; special thanks to Sven Oehme. We used one GSS system and two System x servers, each hosting the TSM server and client software. The two server systems were configured as GPFS NSD clients accessing the file systems provided by the GSS via an InfiniBand network.
The following components have been used:
2 x TSM Server
IBM x3650-M4 with Red Hat Enterprise Linux Server release 6.5
IBM Tivoli Storage Manager 7.1
1 x IBM System x GPFS Storage Server - GSS26
6 x 4U-60 drawers, each with 58 x 2 TB NL-SAS disks
In total 348 disks
1 x Mellanox 32 Port InfiniBand FDR switch
Each TSM server is connected to the GSS system with a 56 Gbit/s link
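The NSD-client configuration outlined above can be sketched with standard GPFS administration commands, run from a node in the GSS cluster. The node name `tsmserver1` and file system name `gpfs_fs` are illustrative assumptions, not the actual names used in the test.

```shell
# Hypothetical sketch: add a TSM server node to the GSS cluster as a
# GPFS (NSD) client and mount the file system on it.
# Node and file system names are illustrative only.
mmaddnode -N tsmserver1                      # add the node to the cluster
mmchlicense client --accept -N tsmserver1    # assign a GPFS client license
mmstartup -N tsmserver1                      # start the GPFS daemon on the node
mmmount gpfs_fs -N tsmserver1                # mount the GSS file system there
```

As an NSD client, the TSM server accesses the file system data over the InfiniBand network rather than through direct disk attachment.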
The picture below shows the test setup in more detail:
Test Result Summary
In a short summary we measured the following performance numbers:
Peak backup performance using multiple sessions for a single TSM server is 5.4 gigabytes per second (GB/s)
Peak backup performance using multiple sessions for two TSM servers is 4.5 GB/s per server, or 9 GB/s in total
Peak restore performance using multiple sessions for a single TSM server is 6.5 GB/s
Peak backup performance using a single session for a single TSM server is 2.5 GB/s
These performance measurements indicate that TSM server performance scales almost linearly on GSS. We also tested some variations, such as placing the TSM database on a separate IBM FlashSystem 820. This showed only minimal performance improvement for backup and restore; however, housekeeping operations such as Expire Inventory and Backup Database ran much faster in this configuration.
The superior GSS performance, combined with operational simplification, makes this a perfect storage environment for TSM. Multiple TSM instances can scale out in multiple dimensions in an elastic, GPFS-based storage cloud.