I had the opportunity to attend in person and present on the latest in tape technology at the 16th Annual Flash Memory Summit (FMS) held in Santa Clara last week. That’s right, tape technology at a flash conference. My friends from the DNA Data Storage Alliance were there presenting too. So what gives?
By Guest Blogger Shawn O. Brume, Sc.D., IBM Tape Evangelist and Strategist
According to a study by McKinsey, the average lifespan of a company listed in the S&P 500 is less than 18 years. That means tape technology has already been in business almost four times longer than the average S&P 500 company will survive. Tape technology celebrated 70 years young on May 21st. Tape has been, and continues to be, the most transformative data storage technology in history.
In the 1950s it was the only viable technology for storing data generated by the few computers in existence. In the 1960s tape took the world to the moon and preserved the data for use nearly 40 years later, when it was retrieved to assist modern space exploration. By the 1970s tape dominated storage, transforming the financial industry by providing access to account data with minimal human intervention. The 1980s and 1990s continued the transformation of data availability with transactional data storage for ATMs; tape was also key in the investigation of the space shuttle Challenger disaster, an investigation aided by tape’s durability even after submersion in saltwater.
Today tape lives in the data center, preserving zettabytes of data that are used across nearly every industry. For example:
Healthcare – Data preserved on tape is being utilized to develop new predictive health services. Digital medical records can be retained for the life of patients and shared across organizations.
Financial – Online transaction retention ensures customers’ valuable financial data is protected in the event of a cyber attack. Mortgage loans are preserved without fear of tampering.
Cloud – Data stored in public clouds is growing 30% faster than data on traditional storage. Cloud providers rely on tape to provide data durability and low-cost storage subscriptions.
Tape’s popularity has often been driven by its low cost of storage, but modern data storage requires much more, including cyber resiliency, data durability, and the low carbon footprint that enables sustainable IT.
Cyber Resiliency – Tape is the only true air gap data storage solution available.
Data Durability – Tape has a native single-copy durability of 11 nines. This means the likelihood of a single bit failure is roughly 1 in 100 petabytes.
Sustainability – At scale, tape technology has a 96% lower carbon footprint than highly dense HDD storage (comparing OCP Bryce Canyon and IBM tape technology with 27 PB of data).
If preserving data in a cyber-resilient solution, at low cost and with relatively low carbon impact, meets your business outcomes, then why wait? Clearly tape is here to stay, and it is surging in usage across nearly every business use case.
Happy 70 years to an amazing technology!
For more information about technology since tape’s introduction, check out this post from my colleague Mike Doran.
Climate change and the effects of global warming have increasingly been in the spotlight as we emerge from the all-consuming COVID pandemic. Indeed, sustainability has become a strategic imperative for organizations across the globe.
Recognizing the magnitude of this issue in the energy-intensive IT industry and in data storage operations specifically, Fujifilm has endeavored to help raise awareness of the energy advantage of today’s modern and highly advanced tape solutions.
In recent whitepapers by Brad Johns Consulting, IDC, Horison Information Strategies, and others, you can read about the energy advantage of tape compared to alternative storage technologies like HDD. But does it actually help end-users meet their sustainability goals in real-world applications?
To answer this question, I recently hosted a virtual roundtable discussion entitled, “Is Tape Really Eco-Friendly?” The panelists included two end-users, Jason Adrian from Microsoft Azure and Vladimir Bahyl from CERN. To review his whitepaper findings, I invited Brad Johns, TCO and energy consumption expert. And to provide feedback from the broader market of end-users, I invited Shawn Brume from IBM to share his observations.
The roundtable kicked off with a brief recap of Brad Johns’ analysis, in which he finds that for long-term storage of inactive or cold data, tape consumes 87% less energy than equivalent amounts of hard disk drives, produces 87% less carbon emissions, and reduces TCO by 86%. When looking at the total product lifecycle from procurement of raw materials to production, distribution, usage, and disposal, tape produces 95% less CO2 equivalents and 80% less electronic waste than hard disk drives.
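To make those percentages concrete, here is a minimal sketch of how the reductions translate into absolute numbers. Only the percentage savings (87% energy, 87% CO2, 86% TCO) come from the analysis; the HDD baseline figures are invented placeholders for illustration.

```python
# Hypothetical illustration of the cited percentage savings.
# The HDD baseline values below are placeholders, not real data.

def apply_savings(baseline, savings_pct):
    """Return the reduced value after applying a percentage saving."""
    return baseline * (1 - savings_pct / 100)

hdd_energy_kwh = 100_000   # placeholder annual energy for an HDD archive
hdd_co2_tons = 500         # placeholder annual CO2e
hdd_tco_usd = 1_000_000    # placeholder multi-year TCO

tape_energy = apply_savings(hdd_energy_kwh, 87)  # 87% less energy
tape_co2 = apply_savings(hdd_co2_tons, 87)       # 87% less CO2
tape_tco = apply_savings(hdd_tco_usd, 86)        # 86% lower TCO

print(tape_energy, tape_co2, tape_tco)
```

Whatever the actual baseline, the multiplier is the same: tape ends up at roughly one-seventh of the HDD energy and carbon figures.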
Those are pretty compelling numbers! But are the end-users seeing that benefit?
Jason Adrian from Microsoft Azure weighed in with the following comment: “When you take the material savings and power savings, tape actually does offer quite a bit of advantage compared to other technologies that are on the market today.”
Vladimir Bahyl from CERN offered: “We have been using tape for over 50 years at CERN. We are fully aware of the possibility to have hard drives that spin down and this saves some power when not in use. However, this completely changes the workflow that we have in-house…and adds complexity. Our archive is not a super cold archive, it is actually an active archive and tape is a natural building block in this system.”
Shawn Brume from IBM observed: “You can bring the total CO2 down to .42 metric tons per year per petabyte with tape. Which for most customers is 2 to 4X better in the overall lifecycle than HDD and believe it or not, 2 to 4X better than flash/SSDs. Customers are seeing that tape represents significant sustainability value.”
As organizations and IT operations specifically seek to achieve their sustainability goals, strategically moving inactive, infrequently accessed, cool or cold data to tape can have substantial environmental benefits.
The newly released whitepaper from IT analyst firm ESG (Enterprise Strategy Group), sponsored by IBM and Fujifilm, entitled, “How Tape Technology Delivers Value in Modern Data-driven Businesses,” focuses on exciting, new advances in tape technology that are now positioning tape for a critical role in effective data protection and retention in the age of zettabyte (ZB) storage. That’s right “zettabyte storage!”
The whitepaper cites the need to store 17 ZB of persistent data by 2025. This includes “cold data” stored long-term and rarely accessed, which is estimated to account for 80% of all data stored today. Just one ZB is a tremendous amount of data, equal to one million petabytes, which would need about 55 million 18 TB hard drives or 55 million 18 TB LTO-9 tape cartridges to store. Just as the crew in the movie Jaws needed a bigger boat, the IT industry is going to need higher capacity SSDs, HDDs, and higher density tape cartridges! On the tape front, help is on the way, as demonstrated by IBM and Fujifilm in the form of a potential 580 TB capacity tape cartridge. Additional highlights from ESG’s whitepaper are below.
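The device count above is simple arithmetic, and a quick back-of-the-envelope check confirms it:

```python
# How many 18 TB devices does one zettabyte require?

TB_PER_PB = 1_000
PB_PER_ZB = 1_000_000

zb_in_tb = PB_PER_ZB * TB_PER_PB      # 1 ZB = 1,000,000,000 TB
devices = zb_in_tb / 18               # 18 TB per HDD or LTO-9 cartridge
print(f"{devices / 1e6:.1f} million devices")  # ≈ 55.6 million
```

At 17 ZB, multiply by 17: nearly a billion 18 TB devices, which is why higher-capacity media matters.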
New Tape Technology
IBM and Fujifilm set a new areal density record of 317 Gb/sq. inch on linear magnetic tape, translating to a potential native cartridge capacity of 580 TB. The demonstration features a new magnetic particle, Strontium Ferrite (SrFe), with the ability to deliver capacities that extend well beyond disk, LTO, and enterprise tape roadmaps. SrFe magnetic particles are 60% smaller than the current de facto standard, Barium Ferrite, yet exhibit even better magnetic signal strength and archival life. On the hardware front, the IBM team developed tape head enhancements and servo technologies that enable even narrower data tracks, contributing to the increase in capacity.
The Case for Tape at Hyperscalers and Others
Hyperscale data centers are major new consumers of tape technologies due to their need to manage massive data volumes while controlling costs. Tape is allowing hyperscalers, including cloud service providers, to achieve business objectives by providing data protection for critical assets, archival capabilities, easy capacity scaling, the lowest TCO, high reliability, fast throughput, low power consumption, and air gap protection. But tape also makes sense for small to large enterprise data centers facing the same data growth challenges, including the need to scale their environments while keeping costs down.
Data Protection, Archive, Resiliency, Intelligent Data Management
According to an ESG survey revealed in the whitepaper, tape users identified reliability, cybersecurity, long archival life, low cost, efficiency, flexibility, and capacity as top attributes in tape usage today, and they favor tape for its long-term value. Data is growing relentlessly, with longer retention periods as the value of data increases thanks to the ability to apply advanced analytics to derive a competitive advantage. Data is often kept for longer periods to meet compliance, regulatory, and corporate governance requirements. Tape is also playing a role in cybercrime prevention with WORM, encryption, and air gap capabilities. Intelligent data management software, typical in today’s active archive environments, automatically moves data from expensive, energy-intensive tiers of storage to more economical and energy-efficient tiers based on user-defined policies.
ESG concludes that tape is the strategic answer to the many challenges facing data storage managers including the growing amount of data as well as TCO, cybersecurity, scalability, reliability, energy efficiency, and more. IBM and Fujifilm’s technology demonstration ensures the continuing role of tape as data requirements grow in the future and higher capacity media is required for cost control with the benefit of CO2 reductions among others. Tape is a powerful solution for organizations that adopt it now!
In mid-December 2020, Fujifilm issued a press release to announce that, together with IBM Research, they had successfully achieved a record areal density of 317 Gbpsi (billion bits per square inch) on next-generation magnetic tape coated with next-generation Strontium Ferrite (SrFe) magnetic particles. This areal density achievement would yield an amazing native storage capacity of 580 TB on a standard-sized data cartridge. That’s almost 50 times more capacity than what we have now with an LTO-8 tape based on Barium Ferrite (BaFe) at 12 TB native.
Shortly after the news came out, I was on a call with a member of our sales team discussing the announcement. He asked me when the 580 TB cartridge would be available and whether any pricing information was available yet. He was also curious about transfer speed performance. I had to admit that those details are still TBD, so he asked me, “What are the three big takeaways from the release?” So let’s dive into what those takeaways are.
Tape has no fundamental technology roadblocks
To understand the significance of tape areal density reaching 317 Gbpsi, we have to understand how it compares to HDD technology. Current HDD areal density is already at or above 1,000 Gbpsi, achieving 16 TB to 20 TB per drive on as many as nine disk platters. That level of areal density approaches what is known as the “superparamagnetic limit,” where the magnetic particle is so small that it spontaneously flips its magnetic polarity. Not ideal for long-term data preservation.
To address this, HDD manufacturers have employed techniques like helium-filled drives, which allow closer spacing between disk platters and therefore room for more platters and more capacity. They are also increasing capacity with new recording techniques involving heat (HAMR), microwaves (MAMR), and others. As a result, HDD capacities are expected to reach up to 50 TB within the next five years or so. The reason tape can potentially reach dramatically higher capacities is that a tape cartridge contains over 1,000 meters of half-inch-wide tape and therefore has far greater recording surface area than a stack of even eight or nine 3.5-inch disk platters.
But let’s also look at track density in addition to areal density. Think about the diameter of a single strand of human hair, typically 100 microns wide. If a single data track on HDD is 50 nanometers wide, you are looking at 2,000 HDD data tracks across the equivalent width of a single strand of human hair! For tape, with a track width of approximately 1,200 nanometers, you are looking at just 83 data tracks. This is actually a positive for tape technology, because it shows that tape has a lot of headroom in both areal density and track density, which will lead to higher capacities and help maintain tape’s low TCO.
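The hair-width comparison is easy to recompute. Using the approximate track widths given above (50 nm for HDD, 1,200 nm for tape), integer division gives 2,000 and about 83 tracks respectively:

```python
# How many data tracks fit across a 100-micron human hair
# at the approximate HDD and tape track widths cited above?

HAIR_WIDTH_NM = 100 * 1_000   # 100 microns = 100,000 nm

hdd_track_nm = 50             # approximate HDD track width
tape_track_nm = 1_200         # approximate tape track width

hdd_tracks = HAIR_WIDTH_NM // hdd_track_nm    # 2000
tape_tracks = HAIR_WIDTH_NM // tape_track_nm  # 83

print(hdd_tracks, tape_tracks)  # 2000 83
```

The ~24x gap in track width is exactly the headroom the text describes: tape can keep shrinking tracks for many generations before hitting the densities HDD already operates at.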
But let me make it clear that this is not about HDD vs. tape. We are now in the zettabyte age, having shipped an impressive one zettabyte (1,000 exabytes) of new storage capacity across all media types into the global market in 2019. According to IDC, that number will balloon to a staggering 7.5 ZB by 2025. We will need a lot of HDDs and a lot of tape (and flash, for that matter) to store 7.5 ZB!
The past 10 years have been marked by explosive data growth and demand for storage. Meanwhile, the tape industry has experienced a renaissance thanks to significant advancements in capacity, reliability, performance, and functionality that have led to new applications and key industry adoption. Here’s a look at some of the key milestones.
In terms of capacity, the decade started for LTO with LTO-5 at 1.5 TB native capacity and culminated most recently with LTO-8 at 12.0 TB and LTO-9 soon to be delivered at 18.0 TB.
Enterprise tape formats started the decade at 1.0 TB native and are currently at 20.0 TB native.
Barium Ferrite magnetic particles became a key enabler for multi-terabyte tapes and were demonstrated by IBM and Fujifilm in 2015 to have the potential to achieve 220 TB on a single tape cartridge. This signaled that tape technology had no fundamental areal density limitations for the foreseeable future.
By the end of the decade, IBM and Fujifilm demonstrated the ability to achieve a record areal density of 317 Gbpsi using the next generation of magnetic particles, Strontium Ferrite, with a potential cartridge capacity of 580 TB.
Reliability and Performance
During the decade, tape achieved the highest reliability rating as measured by bit error rate, at 1 x 10^19, even better than enterprise HDD at 1 x 10^16.
Data transfer rates for tape also improved from 140 MB/sec. in 2010 to an impressive 400 MB/sec.
LTFS provided an open tape file system with media partitions for faster “disk-like” access and ease of interchangeability, making LTO a de facto standard in the Media & Entertainment industry.
New Applications and Key Industry Adoption
Storing objects on tape became a reality with object archive software solutions offering S3 compatibility; objects can now move to and from tape libraries in their native object format.
The concept of active archiving grew in popularity with tape as a key component complementing flash, HDD and cloud for cost-effectively maintaining online archives.
Tape was recognized for its ease of removability and portability, providing air gap protection in the escalating war against cybercrime.
Major U.S. hyperscalers began to rely on tape during the decade for both backup and deep archive applications. In one well-publicized example, Google recovered from a February 2011 Gmail outage using its tape backups. Microsoft adopted tape for Azure later in the decade. Tape became firmly established as a competitive advantage for these and other hyperscalers based on its scalability, long archival life, lowest TCO, low energy consumption, and air gap security.
With this steady technological advancement over the last decade, tape has been recognized for its complementary value to flash, HDD and cloud in tiered storage strategies for managing data in the zettabyte age.
Ransomware continues to threaten the security of enterprise IT infrastructures. In this Fujifilm Summit video, storage analyst George Crump talks to IBM’s Chris Bontempo about how artificial intelligence and machine learning are helping improve cybersecurity by identifying and stopping potential threats.
Ever wonder if you are getting the best deal on your data storage? Understanding the total cost of ownership (TCO) is critically important to any data storage purchase decision.
Today we introduced our new TCO Calculator, an updated version of our online tool that helps IT professionals assess and compare TCO for automated tape storage, disk-based storage, and cloud-based archive storage. The new TCO Calculator raises the maximum user storage baseline from 10 PB to 100 PB, integrating the IBM TS4500 enterprise library using LTO-8 drives and media for initial capacities over 10 PB. Amazon S3 Glacier Deep Archive and bulk retrieval service is now also included in cloud storage cost comparisons.
After entering data into the TCO Calculator, users can download a customizable results report which includes an executive summary, key cost assumptions, and TCO by cost category and type (e.g., energy costs, offsite costs, service fees, labor, bandwidth, etc.).
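A TCO-by-category comparison like the one in the report boils down to summing per-category costs for each storage option. This is a minimal sketch of that structure; the category names follow the text, but every dollar figure below is an invented placeholder, not calculator output.

```python
# Minimal sketch of a TCO-by-category comparison.
# All dollar values are hypothetical placeholders.

def total_tco(costs_by_category):
    """Total cost of ownership = sum over cost categories."""
    return sum(costs_by_category.values())

tape_costs = {   # placeholder figures for an automated tape library
    "hardware_and_media": 400_000,
    "energy": 20_000,
    "offsite": 30_000,
    "labor": 50_000,
}
cloud_costs = {  # placeholder figures for cloud deep archive
    "service_fees": 600_000,
    "retrieval": 100_000,
    "bandwidth": 80_000,
    "labor": 20_000,
}

print(total_tco(tape_costs), total_tco(cloud_costs))  # 500000 800000
```

The per-category breakdown matters as much as the total: cloud archive shifts costs into ongoing service, retrieval, and bandwidth fees, while tape front-loads hardware and media.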
Find out how you can start saving on your data storage costs now. Access the free TCO Calculator here.