IBM FlashSystem


IBM Solid State Disk in eX5 servers

By Tony Pearson posted Thu March 11, 2010 04:26 PM

  

Originally posted by: TonyPearson


This week I got a comment on my blog post [IBM Announces another SSD Disk offering!]. The exchange involved Solid State Disk storage inside the BladeCenter and System x server lines. Sandeep shared some impressive performance results, but we have no way to get in contact with him. So, for those interested, I have posted on SlideShare.net a quick five-chart presentation on recent tests of various SSD offerings on the eX5 product line here:

Sandeep, if you see this, we would be interested in seeing your results as well.



Comments

Fri March 12, 2010 09:10 AM

Originally posted by: TonyPearson


Andrew, yes, putting a little data across many drives is known as "short-stroking", a way to increase performance by bringing as many physical arms and spindles as possible to bear on the data. IBM offers three ways to take advantage of SSD.

First, we offer SSD inside our SVC, DS5000 and DS8000 as resident storage. This has the advantage that multiple hosts connected to these systems can take advantage of the faster IOPS. You can easily move data onto SSD as needed, and back down to spinning disk to make room for something else. The SVC also allows you to RAID-1 between SSD and spinning disk, which greatly reduces the cost of protecting your data against unexpected hardware loss. In most other cases, clients find RAID-5 adequate protection for SSD, as SSDs have no moving parts and do not fail as often as spinning disk.

Second, we offer SSD as non-volatile cache in our N series, in what are called Performance Accelerator Modules (PAM). SSD is slower than the DRAM normally used for cache, but it is less expensive, so we can offer much larger capacities than most cache in systems today. This lets SSD simply become part of the caching algorithm, eliminating the tough decisions about what goes on SSD and what doesn't.

Third, we offer SSD in our various servers. This limits the SSD to that individual machine, but allows substantially better performance at PCIe bus speeds, without being constrained by SAN fabric links or controller bandwidth. This can speed up boots and reboots, provide SSD across several VMs when using VMware or Hyper-V, and improve IOPS for specific applications.

-- Tony

Fri March 12, 2010 01:44 AM

Originally posted by: Andrew_Larmour


Hi Tony, interesting presentation - thanks for posting. I've been wondering for a while about replacing spinning disks with SSDs in applications where very high I/O rates are needed. For instance, when I size some software products, in order to achieve the database I/O needed to sustain high transaction rates, I have to specify large RAID 1+0 arrays. The capacity of the disks in these arrays is almost irrelevant - if 1GB disks were still available, the combined size of the array would easily suffice in almost all cases. These arrays are often 40 to 100 disks in size, while the space required would typically be less than 5GB. As you can see, by the time we put in the smallest disks available (136GB(?) these days) in an array of, say, 40 disks, the available capacity far outweighs the required space.

I've been thinking that a smaller RAID 1+0 array of SSDs that takes advantage of the I/O speed increase could potentially meet the performance requirements - not to mention the green aspect of reduced power requirements. Obviously the array controller would need to be able to handle the increased per-drive throughput of the SSDs, and I am guessing those don't exist yet. The other thing I am not sure about is write performance with SSDs and how that would affect the overall equation.

What do you think?
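The sizing arithmetic in the comment above can be sketched in a few lines of Python. All of the per-drive IOPS figures and the target workload below are illustrative assumptions, not vendor specifications, and the model is deliberately simple: it sizes a RAID 1+0 array for an IOPS target rather than for capacity, and conservatively doubles the drive count for mirroring.

```python
import math

# Assumed figures for illustration only -- not measured or vendor-quoted.
HDD_IOPS = 180        # assumed random IOPS for one 15K RPM spinning disk
SSD_IOPS = 5000       # assumed random IOPS for one early-generation SSD
TARGET_IOPS = 7000    # assumed database I/O requirement
REQUIRED_GB = 5       # usable space actually needed, per the comment above

def drives_needed(per_drive_iops: int, target_iops: int) -> int:
    """Drives for a RAID 1+0 array sized to an IOPS target.

    Conservative model: count the drives needed to meet the target,
    then double for the mirror half of the 1+0 layout.
    """
    data_drives = math.ceil(target_iops / per_drive_iops)
    return data_drives * 2

hdd_count = drives_needed(HDD_IOPS, TARGET_IOPS)
ssd_count = drives_needed(SSD_IOPS, TARGET_IOPS)

print(f"HDD RAID 1+0: {hdd_count} drives")   # falls in the 40-100 range above
print(f"SSD RAID 1+0: {ssd_count} drives")
print(f"Wasted HDD capacity at 136 GB/drive: "
      f"{hdd_count // 2 * 136 - REQUIRED_GB} GB")
```

Under these assumptions the spinning-disk array lands in the 40-to-100-drive range Andrew describes, while the SSD array needs only a handful of drives, which is the heart of his point: the HDD count is driven entirely by IOPS, so nearly all of the purchased capacity goes unused.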