Blog

How Tape Technology Delivers Value in Modern Data-driven Businesses…in the Age of Zettabyte Storage

Reading Time: 3 minutes

October 27, 2021

By Rich Gadomski, Head of Tape Evangelism

The newly released whitepaper from IT analyst firm Enterprise Strategy Group (ESG), sponsored by IBM and Fujifilm and entitled “How Tape Technology Delivers Value in Modern Data-driven Businesses,” focuses on exciting new advances in tape technology that now position tape for a critical role in effective data protection and retention in the age of zettabyte (ZB) storage. That’s right: zettabyte storage!

The whitepaper cites the need to store 17 ZB of persistent data by 2025. This includes “cold data” that is stored long-term, rarely accessed, and estimated to account for 80% of all data stored today. Just one ZB is a tremendous amount of data, equal to one million petabytes, which would require roughly 55 million 18 TB hard drives or 55 million 18 TB LTO-9 tape cartridges to store. Just as the crew in the movie Jaws needed a bigger boat, the IT industry is going to need higher-capacity SSDs, HDDs, and higher-density tape cartridges! On the tape front, help is on the way, as demonstrated by IBM and Fujifilm in the form of a potential 580 TB capacity tape cartridge. Additional highlights from ESG’s whitepaper are below.
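The zettabyte figures above are easy to sanity-check with back-of-envelope arithmetic (a quick sketch; the unit conversions use decimal prefixes, i.e. 1 ZB = 10^9 TB):

```python
# Back-of-envelope check of the capacity figures cited above.
ZB_IN_TB = 1_000_000_000          # 1 ZB = one billion TB (decimal prefixes)
PB_IN_TB = 1_000                  # 1 PB = 1,000 TB
CARTRIDGE_TB = 18                 # 18 TB per HDD or LTO-9 tape (native)

zb_in_pb = ZB_IN_TB // PB_IN_TB           # petabytes per zettabyte
units_per_zb = ZB_IN_TB / CARTRIDGE_TB    # drives or tapes needed for 1 ZB

print(f"1 ZB = {zb_in_pb:,} PB")                          # 1,000,000 PB
print(f"~{units_per_zb / 1e6:.1f} million units per ZB")  # ~55.6 million
```

The division comes out just over 55.5 million units, matching the whitepaper’s rounded figure of 55 million per zettabyte.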

New Tape Technology
IBM and Fujifilm set a new areal density record of 317 Gb/sq. inch on linear magnetic tape, translating to a potential native cartridge capacity of 580 TB. The demonstration features a new magnetic particle, Strontium Ferrite (SrFe), able to deliver capacities that extend well beyond disk, LTO, and enterprise tape roadmaps. SrFe magnetic particles are 60% smaller than the current de facto standard Barium Ferrite particles, yet exhibit even better magnetic signal strength and archival life. On the hardware front, the IBM team has developed tape head enhancements and servo technologies that enable even narrower data tracks, contributing to the increase in capacity.

The Case for Tape at Hyperscalers and Others
Hyperscale data centers are major new consumers of tape technologies due to their need to manage massive data volumes while controlling costs. Tape allows hyperscalers, including cloud service providers, to achieve business objectives by providing data protection for critical assets, archival capabilities, easy capacity scaling, the lowest TCO, high reliability, fast throughput, low power consumption, and air gap protection. But tape also makes sense for small to large enterprise data centers facing the same data growth challenges, including the need to scale their environments while keeping costs down.

Data Protection, Archive, Resiliency, Intelligent Data Management
According to an ESG survey featured in the whitepaper, tape users identified reliability, cybersecurity, long archival life, low cost, efficiency, flexibility, and capacity as tape’s top attributes today, and they favor tape for its long-term value. Data is growing relentlessly, with longer retention periods, as the ability to apply advanced analytics for competitive advantage increases the value of data. Data is often kept for longer periods to meet compliance, regulatory, and corporate governance requirements. Tape is also playing a role in cybercrime prevention with WORM, encryption, and air gap capabilities. Intelligent data management software, typical of today’s active archive environments, automatically moves data from expensive, energy-intensive tiers of storage to more economical and energy-efficient tiers based on user-defined policies.
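Policy-based tiering of the kind described above can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual API; the tier names, thresholds, and `FileRecord` structure are all assumptions for the example:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FileRecord:
    """Minimal stand-in for a file catalog entry (illustrative only)."""
    path: str
    last_access: datetime

def choose_tier(rec: FileRecord, now: datetime) -> str:
    """Route data to a storage tier by last-access age (user-defined policy)."""
    age = now - rec.last_access
    if age < timedelta(days=30):
        return "ssd"    # hot data: fast, expensive, energy-intensive
    if age < timedelta(days=365):
        return "hdd"    # warm data: mid-cost online storage
    return "tape"       # cold data: lowest cost and energy use

# A file untouched for over two years lands on tape.
rec = FileRecord("/archive/scan-001.tif", datetime(2019, 6, 1))
print(choose_tier(rec, datetime(2021, 10, 27)))  # tape
```

Real active archive software applies policies like this continuously and transparently, so applications still see one namespace while cold data migrates to tape.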

ESG concludes that tape is the strategic answer to the many challenges facing data storage managers, including relentless data growth as well as TCO, cybersecurity, scalability, reliability, and energy efficiency. IBM and Fujifilm’s technology demonstration ensures the continuing role of tape as data requirements grow and higher-capacity media is required for cost control, with the added benefit of CO2 reductions. Tape is a powerful solution for organizations that adopt it now!

To read the full ESG whitepaper, click here.


Managing the Archival Upheaval

Reading Time: < 1 minute

October 7, 2020

Relentless digital data growth is inevitable. Data has become critical to all aspects of human life over the past 30 years, and it promises to play a much greater role over the next 30. Much of this data will be stored forever, mandating the emergence of a more intelligent and highly secure long-term storage infrastructure. Data retention requirements vary widely based on the type of data, but archival data is rapidly piling up everywhere. Digital archiving is now a key strategy for larger enterprises and has become a required discipline for hyperscale data centers.

Many data types are being stored indefinitely in anticipation that their potential value will eventually be unlocked. Industry surveys indicate nearly 60% of businesses plan to retain data in some digital format for 50 years or more, and much of this data will never be modified or deleted. For many organizations, facing terabytes, petabytes, and potentially exabytes of archive data for the first time can force the redesign of their entire storage strategy and infrastructure. As businesses, governments, societies, and individuals worldwide increase their dependence on data, data preservation and archiving have become a critical IT practice.

For more information, check out this Horison Information Strategies White Paper “Managing the Archival Upheaval.”

THE ASCENT TO HYPERSCALE – Part 3

Reading Time: 2 minutes

July 15, 2020

By Rich Gadomski, Tape Evangelist at Fujifilm Recording Media, U.S.A., Inc.

Part 3: THE VALUE OF TAPE RISES RAPIDLY AS HYPERSCALE DATA CENTERS GROW

In Part 2 of this series, we looked at some of the key characteristics of hyperscale data centers. Now, we’ll explore how tape plays a role.

Today HSDCs are leveraging the many advantages of tape technology to manage massive data growth and long-term retention challenges. Keep in mind that most digital data doesn’t need to be immediately accessible and can optimally, and indefinitely, reside on tape subsystems. Some data requires secure, long-term storage for regulatory reasons, or because of the potential value it could yield through content analysis at a later date. Advanced tape architectures allow HSDCs to achieve business objectives by providing data protection for critical assets, backup, recovery, archive, easy capacity scaling, the lowest TCO, the highest reliability, the fastest throughput, and cybersecurity protection via the air gap. These benefits are expected to grow in the future.

Fighting the cybercrime epidemic has become a major problem for most data centers, and HSDCs are no exception. Tape can play a key role in prevention, providing WORM (Write-Once-Read-Many) and encryption capabilities that make it a secure storage medium for compliance, legal, and other valuable files. Tape as an “air gap” solution has gained momentum by providing an electronically disconnected copy of data, shielding the data stored on tape from cyberattacks. Disk systems that remain online 24×7 are the primary target, as they are always vulnerable to attack.

HSDCs are taking advantage of tiered storage by integrating high-performance SSDs, HDD arrays, and automated tape libraries. Even though HSDCs are struggling with exploding disk farms that devour IT budgets and overcrowd data centers, many continue to maintain expensive disks often half full of data that has seen little or no activity for several years. Obviously, few data centers can afford to sustain this degree of inefficiency. The greatest benefits of tiered storage are achieved when tape is included, as its scalability, lower price, and lower TCO play an increasing role as the storage environment grows. For the hyperscale world, “adding disk is tactical – adding tape is strategic.”
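The cost penalty of half-full disk farms described above can be made concrete. The dollar figures here are purely hypothetical assumptions chosen for illustration, not published pricing; the point is the utilization effect, which holds at any price level:

```python
# Illustrative only: hypothetical $/TB prices to show how poor media
# utilization inflates the effective cost of data actually stored.
DISK_PRICE_PER_TB = 25.0   # assumed HDD capacity price (hypothetical)
TAPE_PRICE_PER_TB = 5.0    # assumed LTO media price (hypothetical)

def effective_cost_per_tb(price_per_tb: float, utilization: float = 1.0) -> float:
    """Cost per TB of data actually stored, given fractional media utilization."""
    return price_per_tb / utilization

# Disks "often half full" effectively double their cost per stored TB.
print(effective_cost_per_tb(DISK_PRICE_PER_TB, 0.5))  # 50.0
print(effective_cost_per_tb(TAPE_PRICE_PER_TB))       # 5.0
```

Because a tape library consumes media capacity only as data is written, its utilization penalty stays small, which is one driver of tape’s TCO advantage at scale.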

For more information on this topic, check out our white paper: The Ascent to Hyperscale.


THE ASCENT TO HYPERSCALE – Part 2

Reading Time: 2 minutes

July 1, 2020

By Rich Gadomski, Tape Evangelist at Fujifilm Recording Media, U.S.A., Inc.

Part 2: CHARACTERISTICS OF THE HYPERSCALE DATA CENTER

In Part 1 of this series, we explored the definition of hyperscale data centers. Now, we’ll take a look at some of their key characteristics.

HSDCs don’t publicly share an abundance of information about their infrastructure. For companies that operate HSDCs, cost may be the major barrier to entry, but ultimately it isn’t the biggest issue – automation is. HSDCs must focus heavily on automated, self-healing environments, using AI and ML whenever possible to overcome inevitable and unexpected failures and delays. Unlike many enterprise data centers, which rely on a large full-time staff across a range of disciplines, HSDCs employ fewer tech experts because they have used technology to automate so much of the overall management process. HSDC characteristics include:

  • Small footprint, dense racks–HSDCs squeeze servers, SSDs (Solid State Drives), and HDDs (Hard Disk Drives) directly into the rack itself, rather than into separate SANs or DAS, to achieve the smallest possible footprint. HSDC racks are typically larger than standard 19” racks.
  • Automation–Hyperscale storage tends to be software-defined and is benefitting from AI, delivering a higher degree of automation and self-healing while minimizing direct human involvement. AI will support automated data migration between tiers to further optimize storage assets.
  • Users–The HSDC typically serves millions of users with only a few applications, whereas in a conventional enterprise there are fewer users but many more applications.
  • Virtualization–The facilities also implement very high degrees of virtualization, with as many operating system images running on each physical server as possible.
  • Tape storage adoption–Automated tape libraries are on the rise to complement SSDs and HDDs: to easily scale capacity, manage and contain out-of-control data growth, store archival and unstructured data, significantly lower infrastructure and energy costs, and provide hacker-proof cybercrime security via the tape air gap.
  • Fast scaling bulk storage–HSDCs require fast, easily scaled storage capacity. One petabyte using 15 TB disk drives requires 67 drives, and one exabyte requires 66,700. Tape easily scales capacity by adding media; disk scales by adding drives.
  • Minimal feature set–Hyperscale storage has a minimal, stripped-down feature set and may even lack redundancy as the goal is to maximize storage space and minimize cost.
  • Energy challenges–High power consumption and increasing carbon emissions have forced HSDCs to develop new energy sources to reduce and more effectively manage energy expenses.
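The drive counts quoted in the scaling bullet above follow directly from the drive capacity (a quick sketch; the figures use decimal prefixes, 1 EB = 1,000 PB = 1,000,000 TB):

```python
import math

# Verify the bulk-capacity drive counts cited for HSDC scaling.
PB_IN_TB = 1_000
EB_IN_TB = 1_000_000
DRIVE_TB = 15                 # 15 TB disk drives, as in the text

drives_per_pb = math.ceil(PB_IN_TB / DRIVE_TB)  # 67 drives per PB
drives_per_eb = EB_IN_TB / DRIVE_TB             # ~66,667 per EB (cited as 66,700)

print(drives_per_pb)
print(round(drives_per_eb))
```

At exabyte scale, the drive count, and the failure, power, and floor-space overhead that comes with it, grows a thousandfold, which is why adding removable tape media is the cheaper scaling axis.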

In Part 3 of this series, we’ll take a look at how the value of tape is rapidly rising as hyperscale data centers grow. For more information on this topic, download our white paper: The Ascent to Hyperscale.


The Ascent to Hyperscale

Reading Time: 2 minutes

June 12, 2020

By Rich Gadomski, Tape Evangelist at Fujifilm Recording Media, U.S.A., Inc.

Part 1: What Are Hyperscale Data Centers?

Hyperscale data centers have spread across the globe to meet unprecedented data storage requirements. In this three-part blog series, we take a look at how the industry is preparing for the next wave of hyperscale storage challenges.

The term “hyper” means extreme or excess. While there isn’t a single, comprehensive definition for HSDCs, they are significantly larger facilities than a typical enterprise data center. The Synergy Research Group reported 390 hyperscale data centers worldwide at the end of 2017. The largest share of those facilities, 44%, is in the US, with China a distant second at 8%. Currently the world’s largest data center facility has 1.1 million square feet. To put this into perspective, the standard size of a professional soccer field is 60,000 square feet, making the facility equivalent to about 18.3 soccer fields. Imagine needing binoculars to look out over an endless array of computer equipment in a single facility. Imagine paying the energy bill!

Hyperscale refers to a computer architecture that massively scales compute power, memory, high-speed networking infrastructure, and storage resources, typically serving millions of users with relatively few applications. While most enterprises can rely on out-of-the-box infrastructures from vendors, hyperscale companies must personalize nearly every aspect of their environment. An HSDC architecture is typically made up of tens of thousands of small, inexpensive, commodity component servers or nodes, providing massive compute, storage, and networking capabilities. HSDCs are implementing Artificial Intelligence (AI) and Machine Learning (ML) to help manage the load and are exploiting the storage hierarchy, including heavy tape usage for backup, archive, active archive, and disaster recovery applications.

In Part 2 of this series, we’ll take a look at the characteristics of the hyperscale data center. For more information on this topic, download our white paper: The Ascent to Hyperscale.
