Day 2 Data Protection Strategies

By Tony Pearson posted Fri December 04, 2009 12:18 PM


Continuing my coverage of the Data Center Conference, Dec 1-4, 2009, here in Las Vegas, this post focuses on data protection strategies.

Two analysts co-presented this session, which provided an overview of various data protection techniques. A quick survey of the audience found that 27 percent have only a single data center, 13 percent share the load of their mission-critical applications across multiple data centers, and the rest fail over to development/test resources, standby resources, or an outsourced facility.

There are basically five ways to replicate data to secondary locations:

  1. Array-based replication. Many high-end disk arrays offer this feature; IBM's DS8000 and XIV both provide synchronous and asynchronous mirroring. Data deduplication can help here by reducing the amount of data transmitted between locations.
  2. NAS-based replication. I consider this just another variant of the first, but it can be file-based instead of block-based, and can often be done over the public Internet rather than dark fiber.
  3. Network-based replication. This is how IBM SAN Volume Controller, EMC RecoverPoint, and others replicate. The analysts liked this approach because it is storage vendor-independent.
  4. Host-based replication. This is often done by the host's operating system, for example through a Logical Volume Manager (LVM).
  5. Application/Database replication. There are a variety of techniques, including log shipping of transactions, SQL replication, and active/active application-specific implementations; a rough sketch of log shipping appears after this list.
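
To make the last item concrete, here is a minimal log-shipping sketch in Python. It illustrates the general technique only, not any particular product's implementation; the file paths, the segment naming, and the five-second interval are all hypothetical.

    import os
    import time

    PRIMARY_LOG = "/var/db/primary/txn.log"   # hypothetical primary transaction log
    STANDBY_DIR = "/mnt/standby/logs"         # hypothetical standby staging area

    def ship_new_log_bytes(offset):
        """Copy bytes appended to the primary log since `offset` into a
        new segment file on the standby side; return the new offset."""
        size = os.path.getsize(PRIMARY_LOG)
        if size <= offset:
            return offset                     # nothing new to ship
        with open(PRIMARY_LOG, "rb") as log:
            log.seek(offset)
            segment = log.read(size - offset)
        name = os.path.join(STANDBY_DIR, "segment_%012d.log" % offset)
        with open(name, "wb") as out:
            out.write(segment)                # standby replays segments in order
        return size

    if __name__ == "__main__":
        offset = 0
        while True:                           # asynchronous: the standby lags
            offset = ship_new_log_bytes(offset)
            time.sleep(5)                     # the primary by up to one interval

Because the copy happens on an interval rather than in the transaction path, this is asynchronous replication: the standby can lose the last few seconds of transactions, which is exactly the recovery-point trade-off that the synchronous mirroring in item 1 avoids.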
The analysts felt that "DR Testing" has become a lost art. People are just not doing it as often as they should, or not doing it properly, resulting in surprises when a real disaster strikes.
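
One piece of DR testing that is easy to automate is verifying that the replicated copy actually matches production, rather than assuming the mirroring works. The sketch below compares SHA-256 checksums across two directory trees; the mount points are hypothetical, and a real drill would also exercise failover of servers, network, and applications.

    import hashlib
    import os

    def checksum_tree(root):
        """Map each file path (relative to root) to its SHA-256 digest."""
        digests = {}
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                with open(path, "rb") as f:
                    digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
        return digests

    def replica_matches(primary_root, replica_root):
        """Report files missing or differing on the replica; True if none."""
        primary = checksum_tree(primary_root)
        replica = checksum_tree(replica_root)
        missing = sorted(set(primary) - set(replica))
        changed = sorted(p for p in primary if p in replica and primary[p] != replica[p])
        for p in missing:
            print("MISSING on replica:", p)
        for p in changed:
            print("MISMATCH:", p)
        return not missing and not changed

    if __name__ == "__main__":
        # hypothetical mount points for the production copy and its replica
        ok = replica_matches("/mnt/primary/data", "/mnt/replica/data")
        print("replica consistent" if ok else "replica diverged")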

A question came up about the confusion between "Disaster Recovery Tiers" and the Uptime Institute's "Data Center Facilities Tiers". I agree this is confusing. Many clients refer to their most mission-critical applications as Tier 1, less critical as Tier 2, and least critical as Tier 3. In 1983, the IBM user group GUIDE came up with "Business Continuity Tiers", where Tier 1 was the slowest recovery, from manual tape, and Tier 7 was the fastest, with completely automated site, network, server, and storage failover. For data center facility tiers, however, the Uptime Institute labels the simplest, least-available (99.3 percent uptime) data center Tier 1, and the most advanced, redundant, highest-availability (99.995 percent) data center Tier 4. This just goes to show that when one person starts using "Tier 1" or "Tier 4" terminology, it can easily be misinterpreted by others.
