IBM FlashSystem


Understanding IBM Options for Storage Efficiency

By Tony Pearson posted Fri May 24, 2013 09:10 PM



Are you going to Edge 2013 in Las Vegas, June 10-14?

In my talks with clients about storage, I find a similar hesitation about turning on the various storage efficiency features that IBM (and other vendors) offer. Let's examine a few of them.

  • Less than half of businesses have activated "thin provisioning" on storage devices that support it. Why? IBM introduced thin provisioning on its RAMAC Virtual Array back in 1997! The technology is well proven in the field. Don't know how to report this for charge-back activity? Charge your end-users for the provisioned maximum capacity. Simple enough!
     
  • What about data deduplication? IBM has offered this feature on its N series since 2007, but it wasn't until IBM came out with the IBM ProtecTIER gateway and appliance models that people started to take notice of the technology. Yes, I agree hash collisions can be quite scary on competitive gear, but IBM ProtecTIER does not use hash codes; all data is compared byte-for-byte. For those considering hash-based deduplication, hash collisions are in practice exceedingly rare. Jeff Preshing does the math for you in his blog post: [Hash Collision Probabilities]. Of course, if you want to leave no doubt in the minds of a jury of your peers, stick with the byte-for-byte comparison method in IBM ProtecTIER.
     
  • Lastly, I have heard concerns about using real-time compression. Really? Real-time compression has been used in wide-area network (WAN) transmissions ever since IBM developed the Houston Automatic Spooling Priority (HASP) system for NASA back in 1973. IBM has offered real-time compression on tape cartridges since 1986, the year I started with IBM, some 27 years ago. And now real-time compression is available for both file-based and block-based disk systems. All of these solutions are based on the Lempel-Ziv lossless compression algorithms introduced in 1977. One customer I spoke with was unwilling to try compression because it requires thin provisioning as a prerequisite. How is that for one fear built on top of another!
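The charge-back suggestion in the first bullet can be sketched in a few lines of Python. The rate, tenant names, and volume figures below are purely illustrative, not anything from an IBM product:

```python
# Hypothetical charge-back sketch: with thin provisioning, bill each
# tenant for the provisioned ceiling, not the (smaller) allocated usage.
RATE_PER_GB = 0.05  # assumed monthly rate per GB, illustrative only

volumes = [
    {"tenant": "crm", "provisioned_gb": 2048, "allocated_gb": 300},
    {"tenant": "erp", "provisioned_gb": 4096, "allocated_gb": 1250},
]

for v in volumes:
    # Charge the maximum (provisioned) capacity, so the bill never
    # changes as thin volumes fill up.
    charge = v["provisioned_gb"] * RATE_PER_GB
    print(f'{v["tenant"]}: ${charge:.2f}')
```

Because the bill is pinned to the provisioned limit, thin volumes growing in the background never surprise anyone at month end.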
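For the hash-based deduplication approach mentioned in the second bullet, the birthday-bound arithmetic Preshing walks through is easy to reproduce. This is a generic sketch; the function name and the workload figures are my own assumptions, not taken from any IBM product:

```python
import math

def collision_probability(n_blocks: int, hash_bits: int) -> float:
    """Birthday-bound approximation of the probability that at least
    two of n_blocks distinct inputs share one hash_bits-bit digest:
    p ~= 1 - exp(-n(n-1) / 2^(bits+1))."""
    space = 2.0 ** hash_bits
    # expm1 avoids underflow to 0.0 when the exponent is tiny
    return -math.expm1(-n_blocks * (n_blocks - 1) / (2.0 * space))

# One billion unique blocks hashed with a 160-bit digest (SHA-1 size):
p = collision_probability(10**9, 160)
print(f"{p:.3e}")  # ~3.4e-31 -- vanishingly small
```

Even so, a probabilistic argument is exactly what the byte-for-byte comparison in ProtecTIER is designed to avoid having to make.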
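The Lempel-Ziv family behind the compression offerings in the third bullet is easy to demonstrate with Python's standard zlib module (DEFLATE combines LZ77 dictionary coding with Huffman coding). This is only a generic illustration of the algorithm family, not of any IBM implementation; the sample data is made up:

```python
import zlib

# Highly repetitive data, the kind of redundancy Lempel-Ziv
# dictionary coding exploits well.
original = b"IBM FlashSystem block 0001\n" * 10_000

compressed = zlib.compress(original, level=6)  # DEFLATE = LZ77 + Huffman
restored = zlib.decompress(compressed)

print(f"original:   {len(original):>8} bytes")
print(f"compressed: {len(compressed):>8} bytes")
assert restored == original  # lossless: the exact bytes come back
```

The final assertion is the point of the closing paragraph: whatever the space savings, decompression must return the exact ones and zeros that were stored.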

IBM places a high value on data integrity. For each data footprint reduction method, IBM has designed a solution that returns the exact ones and zeros, in the correct quantity and order, as originally stored.

For more on this topic, come see me present "Data Footprint Reduction -- Understanding IBM Storage Efficiency Options" at [IBM Edge 2013 conference] in Las Vegas, June 10-14.



