Tape Library

Cyber resilient, energy-efficient tape storage with airgap and long-term retention

Eleven Answers about Deduplication from IBM

By Tony Pearson posted Fri May 30, 2008 04:59 PM

  

Originally posted by: TonyPearson


Continuing my catch-up on past posts, Jon Toigo on his DrunkenData blog posted a ["bleg"] for information about deduplication. The responses come from the "who's who" of the storage industry, so I will provide IBM's view. (Jon, as always, you have my permission to post this on your blog!)
  1. Please provide the name of your company and the de-dupe product(s) you sell. Please summarize what you think are the key values and differentiators of your wares.



    IBM offers two different forms of deduplication. The first is IBM System Storage N series disk system with Advanced Single Instance Storage (A-SIS), and the second is IBM Diligent ProtecTier software. Larry Freeman from NetApp already explains A-SIS in the [comments on Jon's post], so I will focus on the Diligent offering in this post. The key differentiators for Diligent are:

    • Data agnostic. Diligent requires neither content awareness, format awareness, nor identification of the backup software used to send the data. No special client or agent software is required on servers sending data to an IBM Diligent deployment.
    • Inline processing. Diligent does not require temporarily storing data on back-end disk to post-process later.
    • Scalability. Up to 1PB of back-end disk managed with an in-memory dictionary.
    • Data Integrity. All data is diff-compared for full 100 percent integrity. No data is accidentally discarded based on assumptions about the rarity of hash collisions.



  2. InfoPro has said that de-dupe is the number one technology that companies are seeking today — well ahead of even server or storage virtualization. Is there any appeal beyond squeezing more undifferentiated data into the storage junk drawer?

    Diligent is focused on backup workloads, which offer the best opportunity for deduplication benefits. The two main benefits are:

    • Keeping more backup data available online for fast recovery.
    • Mirroring the backup data to another remote location for added protection. With inline processing, only the deduplicated data is sent to the back-end disk, and this greatly reduces the amount of data sent over the wire to the remote location.
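
    As a rough illustration of the mirroring point above, here is a back-of-the-envelope calculation of how much replication time drops when only deduplicated data crosses the wire. This is only a sketch: the nightly backup volume, deduplication ratio, and link speed are assumed figures, not measured results.

      # Rough illustration of why mirroring only deduplicated data matters for
      # remote replication. All figures are assumed for the sketch, not measurements.

      def hours_to_replicate(data_tb: float, link_mbps: float) -> float:
          """Hours needed to push data_tb terabytes over a link of link_mbps megabits/second."""
          bytes_per_hour = link_mbps * 1e6 / 8 * 3600
          return data_tb * 1e12 / bytes_per_hour

      nightly_backup_tb = 10.0   # assumed raw backup volume per night
      dedup_ratio = 15.0         # assumed overall deduplication ratio (15:1)
      link_mbps = 155.0          # assumed OC-3 class replication link

      print(f"raw copy: {hours_to_replicate(nightly_backup_tb, link_mbps):.0f} hours")
      print(f"deduped:  {hours_to_replicate(nightly_backup_tb / dedup_ratio, link_mbps):.1f} hours")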



  3. Every vendor seems to have its own secret sauce de-dupe algorithm and implementation. One, Diligent Technologies (just acquired by IBM), claims that theirs is best because it collapses two functions — de-dupe then ingest — into one inline function, achieving great throughput in the process. What should be the gating factors in selecting the right de-dupe technology?

    As with any storage offering, the three gating factors are typically:

    • Will this meet my current business requirements?
    • Will this meet my future requirements for the next 3-5 years that I plan to use this solution?
    • What is the Total Cost of Ownership (TCO) for the next 3-5 years?

    Assuming you already have backup software operational in your existing environment, it is possible to determine the necessary ingest rate: how many terabytes per hour (TB/h) must be received, processed, and stored from the backup software during the backup window. IBM intends to document its performance test results of specific software/hardware combinations to provide guidance for clients' purchase and planning decisions.
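
    As a minimal sketch of that calculation, with assumed figures for the nightly backup volume and backup window:

      # Derive the required ingest rate from the backup volume and the backup
      # window. Both inputs are assumptions for illustration only.
      full_backup_tb = 40.0      # assumed data sent by the backup software per night
      backup_window_hours = 8.0  # assumed nightly backup window

      required_ingest_tb_per_hour = full_backup_tb / backup_window_hours
      print(f"Required ingest rate: {required_ingest_tb_per_hour:.1f} TB/h")  # -> 5.0 TB/h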

    For post-process deployments, such as the IBM N series A-SIS feature, the "ingest rate" during the backup only has to receive and store the data, and the rest of the 24-hour period can be spent doing the post-processing to find duplicates. This might be fine now, but as your data grows, you might find your backup window growing, which leaves less time for the post-processing to catch up. IBM Diligent does the processing inline, so it is unaffected by an expansion of the backup window.

    IBM Diligent can scale up to 1PB of back-end data, and the ingest rate does not suffer as more data is managed.

    As for TCO, post-process solutions must have additional back-end storage to temporarily hold the data until the duplicates can be found. With IBM Diligent's inline methodology, only deduplicated data is stored, so less disk space is required for the same workloads.
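
    A rough sketch of that capacity difference, using assumed figures for the repository size and nightly landing zone (not sizing guidance for any particular product):

      # Compare disk needed behind a post-process versus an inline deduplication
      # design. All numbers are assumptions for illustration.
      deduped_store_tb = 100.0   # assumed size of the deduplicated repository
      nightly_backup_tb = 10.0   # assumed raw data landed per backup window

      inline_disk_tb = deduped_store_tb
      # Post-process must also hold the raw landing zone until duplicates are found.
      post_process_disk_tb = deduped_store_tb + nightly_backup_tb

      print(f"inline:       {inline_disk_tb:.0f} TB of back-end disk")
      print(f"post-process: {post_process_disk_tb:.0f} TB of back-end disk")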

  4. Despite the nuances, it seems that all block level de-dupe technology does the same thing: removes bit string patterns and substitutes a stub. Is this technically accurate or does your product do things differently?

    IBM Diligent emulates a tape library, so the incoming data appears as files to be written sequentially to tape. A file is a string of bytes. Unlike block-level algorithms that divide files up into fixed chunks, IBM Diligent performs diff-compares of incoming data with existing data, and identifies ranges of bytes that duplicate what is already stored on the back-end disk. The file then becomes a sequence of "extents" representing either unique data or existing data, and is stored as a sequence of pointers to these extents. An extent can vary from 2KB to 16MB in size.
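
    A minimal sketch of how such an extent map might be represented is shown below; the class and field names are made up for illustration and are not the actual Diligent on-disk format.

      # Hypothetical sketch: a file represented as an ordered list of extents, each
      # pointing either at newly written bytes or at byte ranges that already exist
      # in the repository. Names and limits are illustrative only.
      from dataclasses import dataclass
      from typing import List

      MIN_EXTENT = 2 * 1024           # 2KB lower bound, per the description above
      MAX_EXTENT = 16 * 1024 * 1024   # 16MB upper bound

      @dataclass
      class Extent:
          repo_offset: int   # where these bytes live on back-end disk
          length: int        # between MIN_EXTENT and MAX_EXTENT
          is_new: bool       # True if this ingest wrote the bytes, False if they already existed

      @dataclass
      class VirtualTapeFile:
          name: str
          extents: List[Extent]   # the file is just an ordered sequence of pointers

          def logical_size(self) -> int:
              return sum(e.length for e in self.extents)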

  5. De-dupe is changing data. To return data to its original state (pre-de-dupe) seems to require access to the original algorithm plus stubs/pointers to bit patterns that have been removed to deflate data. If I am correct in this assumption, please explain how data recovery is accomplished if there is a disaster. Do I need to backup your wares and store them off site, or do I need another copy of your appliance or software at a recovery center?

    For IBM Diligent, all of the information needed to reconstitute the data is stored on the back-end disks. Assuming that all of your back-end disks are available after the disaster, either the original or a mirrored copy, you only need the IBM Diligent software to make sense of the bytes written and reconstitute the data. If the data was written by backup software, you would also need compatible backup software to recover the original data.

  6. De-dupe changes data. Is there any possibility that this will get me into trouble with the regulators or legal eagles when I respond to a subpoena or discovery request? Does de-dupe conflict with the non-repudiation requirements of certain laws?

    I am not a lawyer, and certainly there are aspects of [non-repudiation] that may or may not apply to specific cases.

    What I can say is that storage is expected to return a "bit-perfect" copy of the data that was written. There are laws against changing the format. For example, an original document in Microsoft Word format might be converted and saved instead as an Adobe PDF file. In many conversions, it would be difficult to recreate a bit-perfect copy; certainly, it would be difficult to recreate the bit-perfect MS Word format from a PDF file. Laws in France and Germany specifically require that the original bit-perfect format be kept.

    Based on that, IBM Diligent is able to return a bit-perfect copy of what was written, same as if it were written to regular disk or tape storage, because all data is diff-compared byte-for-byte with existing data.

    In contrast, other solutions based solely on hash codes can suffer collisions that result in presenting a completely different set of data on retrieval. If the data you are trying to store happens to produce the same hash code as completely different data already stored on the solution, it might simply discard the new data as a "duplicate". The chance of a collision might be rare, but it could be enough to put doubt in the minds of a jury. For this reason, IBM N series A-SIS, which does perform hash code calculations, will do a full byte-for-byte comparison to ensure that data is indeed a duplicate of an existing stored block.
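
    A minimal sketch of that verify-before-discard step follows; it uses a generic hash-indexed store with a byte-for-byte check, so the chunking, hash choice, and structure are assumptions for illustration, not the A-SIS or Diligent implementation.

      # A hash-indexed store that never trusts the hash alone: a chunk is treated
      # as a duplicate only after a byte-for-byte comparison, so a hash collision
      # can never silently discard unique data. All details are illustrative.
      import hashlib

      class VerifyingDedupStore:
          def __init__(self):
              self.index = {}            # hash digest -> offset into the store
              self.store = bytearray()   # stands in for back-end disk

          def put(self, chunk: bytes) -> int:
              digest = hashlib.sha256(chunk).digest()
              offset = self.index.get(digest)
              if offset is not None and self.store[offset:offset + len(chunk)] == chunk:
                  return offset          # verified duplicate: nothing new is written
              offset = len(self.store)   # new (or colliding) data: keep every byte
              self.store.extend(chunk)
              self.index[digest] = offset
              return offset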

  7. Some say that de-dupe obviates the need for encryption. What do you think?

    I disagree. I've been to enough [Black Hat] conferences to know that it would be possible to read the data off the back-end disk, using a variety of forensic tools, and piece together strings of personal information, such as names, social security numbers, or bank account codes.

    Currently, IBM provides encryption on real tape (both TS1120 and LTO-4 generation drives), and is working with open industry standards bodies and disk drive module suppliers to bring similar technology to disk-based storage systems. Until then, clients concerned about encryption should consider OS-based or application-based encryption from the backup software. IBM Tivoli Storage Manager (TSM), for example, can encrypt the data before sending it to the IBM Diligent offering, but this might reduce the number of duplicates found if different encryption keys are used.
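
    To see why encrypting before deduplication can hurt the ratio, here is a toy illustration; it uses a throwaway XOR keystream rather than a real cipher, and the point is only that identical data encrypted under different keys produces different byte strings, so a downstream deduplication engine finds no duplicates.

      # Toy illustration (XOR keystream, NOT a real cipher): the same plaintext
      # encrypted under two different keys yields two different byte strings, so a
      # deduplicator cannot recognize them as duplicates of each other.
      from itertools import cycle

      def toy_encrypt(data: bytes, key: bytes) -> bytes:
          return bytes(b ^ k for b, k in zip(data, cycle(key)))

      block = b"identical customer record " * 4
      ciphertext_a = toy_encrypt(block, b"server-A-key")
      ciphertext_b = toy_encrypt(block, b"server-B-key")

      print(ciphertext_a == ciphertext_b)                         # False: no duplicate found
      print(toy_encrypt(ciphertext_a, b"server-A-key") == block)  # True: still decrypts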

  8. Some say that de-duped data is inappropriate for tape backup, that data should be re-inflated prior to write to tape. Yet, one vendor is planning to enable an “NDMP-like” tape backup around his de-dupe system at the request of his customers. Is this smart?

    Re-constituting the data back to the original format on tape allows the original backup software to interpret the tape data directly to recover individual files. For example, IBM TSM software can write its primary backup copies to an IBM Diligent offering onsite, and have a "copy pool" on physical tape stored at a remote location. The physical tapes can be used for recovery without any IBM Diligent software in the event of a disaster. If the IBM Diligent back-end disk images are lost, corrupted, or destroyed, IBM TSM software can point to the "copy pool" and be fully operational. Individual files or servers could be restored from just a few of these tapes.

    An NDMP-like tape backup of a deduplicated back-end disk would require that all the tapes are intact, available, and fully restored to new back-end disk before the deduplication software could do anything. If a single cartridge from this set was unreadable or misplaced, it might impact access to many TBs of data, or render the entire system unusable.

    In the case of 1PB of back-end disk for IBM Diligent, you would have to recover over a thousand tapes back to disk before you could recover any individual data from your backup software. Even with dozens of tape drives working in parallel, the complete process could take several days. This represents a longer "Recovery Time Objective" (RTO) than most people are willing to accept.
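
    The arithmetic behind that estimate, as a sketch with assumed 2008-era tape figures (roughly LTO-4 class capacity and throughput); the numbers are illustrative, not benchmark results:

      # Back-of-the-envelope recovery time for rebuilding a 1PB deduplicated
      # repository from tape before any file can be recovered. All figures are
      # assumed (roughly LTO-4 class) and purely illustrative.
      repository_tb = 1000.0      # 1PB of back-end disk to rebuild
      tape_capacity_tb = 0.8      # ~800GB native per cartridge
      drive_mb_per_sec = 120.0    # approximate native drive throughput
      drives_in_parallel = 12     # assumed number of drives restoring at once

      tapes_needed = repository_tb / tape_capacity_tb
      aggregate_tb_per_hour = drive_mb_per_sec * drives_in_parallel * 3600 / 1e6
      restore_days = repository_tb / aggregate_tb_per_hour / 24

      print(f"cartridges to read back: {tapes_needed:.0f}")       # -> 1250
      print(f"estimated restore time:  {restore_days:.1f} days")  # roughly a week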

  9. Some vendors are claiming de-dupe is “green” — do you see it as such?

    Certainly, "deduplicated disk" is greener than "non-deduplicated" disk, but I have argued in past posts, supportedby Analyst reports, that it is not as green as storing the same data on "non-deduplicated" physical tape.

  10. De-dupe and VTL seem to be joined at the hip in a lot of vendor discussions: Use de-dupe to store a lot of archival data on line in less space for fast retrieval in the event of the accidental loss of files or data sets on primary storage. Are there other applications for de-duplication besides compressing data in a nearline storage repository?

    Deduplication can be applied to primary data, as in the case of the IBM System Storage N series A-SIS. As Larry suggests, MS Exchange and SharePoint could be good use cases that represent the possible savings from squeezing out duplicates. On the mainframe, many master-in/master-out tape applications could also benefit from deduplication.

    I do not believe that deduplication products will run efficiently with “update in place” applications, that is, applications with high levels of random writes for non-appending updates. OLTP and database workloads would not benefit from deduplication.

  11. Just suggested by a reader: What do you see as the advantages/disadvantages of software based deduplication vs. hardware (chip-based) deduplication? Will this be a differentiating feature in the future… especially now that Hifn is pushing their Compression/DeDupe card to OEMs?

    In general, new technologies are introduced in software first, and then, as implementations mature, move to hardware to improve performance. The same was true for RAID, compression, encryption, etc. The Hifn card does "hash code" calculations that do not benefit the current IBM Diligent implementation. Currently, IBM Diligent performs LZH compression in software, but certainly IBM could provide hardware-based compression with an integrated hardware/software offering in the future. Since IBM Diligent's inline process is so efficient, the bottleneck in performance is often the speed of the back-end disk. IBM Diligent can achieve an improved "ingest rate" using FC instead of SATA disk.

Sorry, Jon, that it took so long to get back to you on this, but since IBM had just acquired Diligent when you posted, it took me a while to investigate and research all the answers.

Comments

Thu June 28, 2012 03:00 PM

Originally posted by: TonyPearson


Steve, I met with the folks from Actifio at the IBM Edge conference in Orlando, Florida. The short answer is that TSM and ProtecTIER are focused on backing up data at the file level, and Actifio works exclusively at the block level. Basically, through an OEM agreement with IBM, they are running their proprietary software on IBM hardware, providing all the basic storage hypervisor features like FlashCopy, Remote Mirroring and Volume Mirror, while adding deduplication and block-level Continuous Data Protection (CDP) capabilities by intercepting every write through the hypervisor. The basic unit is a 2U box with two nodes, and Actifio is an authorized reseller of the IBM System Storage DS3500 disk for additional disk capacity.

While it is touted as a way to eliminate backups, it does not offer any support at the file or application level like TSM does. In the event of corruption of an individual file or database, you would roll back the entire LUN from the latest FlashCopy or CDP checkpoint, and then re-apply logs or updates as needed. There is an optional agent that runs on the host OS, which is intended to orchestrate the copies, which might be similar to the way IBM's FlashCopy Manager interfaces with DB2, Oracle, SAP and other applications to suspend writes during copy operations.

Meanwhile, ProtecTIER is intended as a backup repository. It can receive backups of files and databases from backup software over a variety of traditional interfaces - VTL, OST and CIFS. Both TSM and ProtecTIER offer deduplication, and you can use these products individually, or in combination, to manage backups, space management and archives at the file level. See their website (http://www.actifio.com/) for more details. -- Tony

Thu May 31, 2012 11:48 PM

Originally posted by: TonyPearson


Steve, I will be meeting with folks from Actifio next week in Orlando for IBM Edge conference, so let me discuss with them before I respond. -- Tony

Thu May 31, 2012 11:18 PM

Originally posted by: SteveJackson


Please provide your view on Actifio and how ProtecTier and TSM compare.

Wed June 04, 2008 10:33 PM

Trackback to IBM Eye: http://www.ibmeye.com/what-makes-diligent-different/

Wed June 04, 2008 09:12 AM

Tony, I liked the way you answered the questions precisely, as opposed to others who simply cut and paste marketing material. In my opinion, the most useful place for deduplication is in the backup software. This approach would permit a richer set of features, for example integrating the ExaGrid scheme of deduplicating only the older backup versions, or allowing the storage administrator to choose whether to deduplicate the tape copies. BartD

Mon June 02, 2008 12:12 AM

OSSG, thanks. We have people scanning for these, but some are more difficult to spot than others! I'll let the BackupReview comment stay, as it is relevant in this case.

Sun June 01, 2008 09:54 PM

Methinks it's time for you to start screening your comments for spam, Tony ;)

Fri May 30, 2008 11:13 PM

On the subject of file backup, sharing and storage ...
Online backup is becoming common these days. It is estimated that 70-75% of all PCs will be connected to online backup services within the next decade.
Thousands of online backup companies exist, from one guy operating in his apartment to Fortune 500 companies.
Choosing the best online backup company can be very confusing and difficult. One website I find very helpful in making a decision to pick an online backup company is:
http://www.BackupReview.info
This site lists more than 400 online backup companies in its directory and ranks the top 25 on a monthly basis.