IBM Destination Z

A Renaissance for Tape

By Destination Z posted Mon December 23, 2019 03:33 PM


After decades of experience with tape, IT organizations continue to struggle with the technology. The latest generation of IT professionals, raised on virtual tape, is unfamiliar with physical tape.

“Tape gets a bad rap today,” says Jon Toigo, principal of Toigo Partners International, speaking at the IBM Edge2014 conference in May. And with the advent of multi-stream backup jobs, that bad rap just gets worse. But, “it’s not tape’s fault,” he insists.

To the contrary, tape innovations are driving a renaissance for the technology, especially among large enterprises that face large, complex backup challenges. For example, the Linear Tape File System (LTFS) has been the source of a series of innovations in recent years that have increased the efficiency of tape and bolstered its economics against both virtual tape libraries and cloud-based backup.

Cloud backup was the most recent alternative to tape backup that did not go as planned. At one point, CIO Magazine declared that 60 percent of respondents to its backup survey would use the cloud for backup and archive within three years. Toigo asks, “Remember Nirvanix?” Nirvanix, a cloud backup provider, announced on Sept. 16, 2013, that it was shutting down, giving its customers 14 days to move hundreds of terabytes or even petabytes of data. Consider the sheer magnitude of the data volume challenge those customers faced.

“Even with the fastest network pipe you can get, an OC192 pipe, you can only move 10 TB every 2.25 hours,” explains Toigo. “Those with tens of petabytes were out of luck, even with OC192 WAN connections.”
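Toigo’s arithmetic holds up: an OC-192 link runs at roughly 9.953 Gbit/s, and at that raw line rate 10 TB takes a bit over two hours. A quick sketch of the math (the raw line rate and the decimal-TB convention here are assumptions; SONET framing overhead would stretch the times slightly):

```python
# Back-of-the-envelope check of the OC-192 transfer figures quoted above.
OC192_BPS = 9.953e9                      # OC-192 raw line rate, ~9.953 Gbit/s

def transfer_hours(terabytes, link_bps=OC192_BPS):
    """Hours to push the given number of decimal terabytes over one link."""
    bits = terabytes * 1e12 * 8          # decimal TB -> bits
    return bits / link_bps / 3600

print(f"10 TB over OC-192: {transfer_hours(10):.2f} hours")    # ~2.2 hours
print(f"1 PB over OC-192:  {transfer_hours(1000):.0f} hours")  # ~223 hours, ~9 days
```

Scale that to the tens of petabytes Toigo mentions and the evacuation window stretches into months, which is why Nirvanix’s 14-day deadline was effectively impossible for its largest customers.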

Still, companies continue betting on unproven cloud providers for the backup of their mission-critical data. Don’t bet Nirvanix will be the only one to fail.

Tape brings a long-proven track record dating back to 1964, and tape technology has steadily advanced to the point where it can now pack 100 billion bits per square inch on a tape.

Continued Innovation

Not only is tape not going away, notes Toigo, it will continue to get better. In May 2014, Sony announced the ability to pack 185 TB on a single tape, accomplished by achieving a recording density of 146 gigabits per square inch.

The Sony achievement, developed in work with IBM, took a combination of technology advances, including new ways to coat the magnetic tape and to pack more tape into the cartridge.

Don’t expect to find this tape technology in enterprise storage products right away. But it is being readied for commercial use. Toigo’s key takeaway: Major advances in tape storage are happening right now and aren’t likely to stop.


To Toigo, LTFS is the most important tape technology innovation. It leverages tape partitioning so that you can write data to tape and then read individual files directly; in the past, you had to restore an entire tape to get at a single file. LTFS works at the file level, leaving stubs on the server. A stub lets the server respond to a data request from cache immediately while the rest of the file is pulled off the tape.
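The stub-and-recall idea can be sketched in miniature. This is a toy model, not the LTFS implementation; the file names, the dictionary standing in for tape, and the cached-head size are all illustrative assumptions:

```python
# Toy sketch of stub-based recall: the server keeps a small stub per
# migrated file and serves its cached head bytes immediately, while the
# remainder is "recalled" from the tape tier on demand.

TAPE = {"report.dat": b"header" + b"x" * 100}   # stand-in for tape-resident data

class Stub:
    """Placeholder left on the server after a file migrates to tape."""

    def __init__(self, name, head_bytes=6):
        self.name = name
        self.head = TAPE[name][:head_bytes]     # small cached portion kept on disk

    def read(self):
        # The cached head is available at once; reading past it triggers
        # a (simulated) recall of the rest of the file from tape.
        tail = TAPE[self.name][len(self.head):]
        return self.head + tail

stub = Stub("report.dat")
assert stub.read() == TAPE["report.dat"]        # caller sees the whole file
print("recalled", len(stub.read()), "bytes via stub")
```

The design point is that applications keep a normal file-system view of migrated data: the first read returns instantly from cache, and only a full read pays the tape-recall latency.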

Toigo further praised LTFS for its simple data layout, its ability to list the files on a tape and to modify a file name in the directory, and its ability to work with both servers and tape. LTFS is essentially free and relatively easy to implement, and Toigo expects it to soon be developed as a standard by the Storage Networking Industry Association.
“LTFS is the underpinning of the tape renaissance,” he declares.

Tape Economics

It will be the favorable economics of tape, not LTFS, that becomes tape’s ace at the storage poker table, especially when combined with multi-tier hierarchical storage management (HSM). With annual storage capacity growth, as calculated by leading IT research firms IDC and Gartner, hitting 300 to 650 percent a year, enterprises will have to confront unsustainable cost increases without tape and HSM.

These recent triple-digit growth estimates result from the firms’ various recalculations of growth due to the demands of server virtualization on storage, related replication, and poor data and infrastructure management. Other contributors include multiple mirrors and bad buying decisions.

Toigo traced today’s poor storage management back to the movement away from HSM. Tier 0 (SSD, Flash) costs $50 to $100 per GB. Tier 1 (high performance disk) costs $7 to $20 per GB. Tier 2 (midrange disk) runs $1 to $8 per GB, and Tier 3 (tape) runs $0.20 to $2 per GB. Today, tiers 1 and 2 constitute 30 to 55 percent of an enterprise’s data while Tier 3 handles about 40 percent.

Fred Moore, founder of Horison Information Strategies, calculated the cost of 100 TB of storage using only tiers 1 and 2 at $765,000. Using tiers 1 through 3, the cost would drop to $359,250. Add in some Tier 0 and the cost bounces back up to $482,250.
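Moore’s point can be illustrated with a small blended-cost calculation. The per-GB prices below are the midpoints of the ranges quoted above, and the tier mixes are hypothetical illustrations rather than Moore’s actual assumptions, so the totals will not exactly match his figures; the direction, though, is the same: shifting the bulk of the data to the tape tier slashes the bill.

```python
# Blended cost of 100 TB across storage tiers, using the midpoints of the
# per-GB price ranges quoted above. The tier mixes are hypothetical.

PRICE_PER_GB = {
    "tier0_ssd":  75.0,   # $50-100/GB
    "tier1_disk": 13.5,   # $7-20/GB
    "tier2_disk":  4.5,   # $1-8/GB
    "tier3_tape":  1.1,   # $0.20-2/GB
}

def blended_cost(total_gb, mix):
    """mix maps tier name -> fraction of data; fractions must sum to 1."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    return sum(total_gb * frac * PRICE_PER_GB[tier] for tier, frac in mix.items())

TOTAL_GB = 100_000  # 100 TB

disk_only = blended_cost(TOTAL_GB, {"tier1_disk": 0.35, "tier2_disk": 0.65})
with_tape = blended_cost(TOTAL_GB, {"tier1_disk": 0.15, "tier2_disk": 0.30,
                                    "tier3_tape": 0.55})

print(f"Tiers 1-2 only: ${disk_only:,.0f}")   # $765,000
print(f"Tiers 1-3:      ${with_tape:,.0f}")   # $398,000
```

Even with rough midpoint prices, moving roughly half the data to tape cuts the 100 TB bill nearly in half, which is the economic argument Toigo and Moore are making.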

When Toigo applied Moore’s cost projections to IDC and Gartner’s latest volume estimates, he concluded: “Nobody can afford this.”

The solution has to come from the combination of tape’s better economics, HSM and tape innovations.

Alan Radding is a Newton, Mass.-based freelance writer specializing in business and technology. Over the years his writing has appeared in a wide range of publications including the New York Times, CFO Magazine, CIO Magazine and Information Week. He can be reached through his website.