FUJIFILM INSIGHTS BLOG

Data Storage

HDDs Losing Ground to SSD and Tape

Reading Time: 3 minutes

By: Fred Moore, President
Horison Information Strategies
www.horison.com

Introduction

The traditional storage market is shifting as applications more effectively exploit the tiered storage hierarchy to better align availability requirements, service levels, and data protection mandates with the optimal infrastructure cost. Clearly, HDDs remain, and for the foreseeable future will continue to be, the workhorse of the storage hierarchy. But they are steadily losing market share: response-time-critical, high-performance applications are moving to the growing deployment of SSD technology, while many lower-activity, archival, and resilience applications are moving to significantly improved modern tape technology. The pressure on the HDD industry is illustrated by worldwide HDD shipments (data from Statista), which peaked at 651.3 million units in 2010 and dropped roughly 38% to 403.71 million units in 2017. HDD shipments are predicted to fall further, to 341.95 million units, in 2020. Data that in prior years was often stored on HDDs without much thought to storage optimization is now taking up residence elsewhere. As storage pools get larger, so does the need to optimize storage by getting the right data in the right place.
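For reference, the 2010-to-2017 decline implied by those Statista unit figures is a simple percent-change calculation (the percentage is derived from the shipment numbers above, not quoted from the source):

\[ \frac{651.3 - 403.71}{651.3} \approx 0.38 = 38\% \]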

What’s Behind the Shift?

SSDs mean high performance. SSDs have successfully addressed much of the high-performance storage market that was once the exclusive domain of HDDs. Within the next 12-18 months, solid-state flash arrays currently using 2D NAND are projected to improve in performance by a factor of 10 and to double in density and cost-effectiveness as 3D NAND and 3D XPoint technology begins to emerge. This technological progression will significantly change the dynamics of the performance-centric storage market. Compared to HDDs, SSDs have higher data-transfer rates, faster access times, better reliability, much lower latency, and lower energy consumption. For most users, the consistent high speed at which SSDs can read and write data and meet service levels is the key attraction. Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs: files can be written anywhere with little impact on read/write times, resulting in read times far faster than any HDD's.

HDDs can handle every data type and have carried most of the load for the storage industry for years; however, future challenges for HDDs are mounting. HDDs are increasing in capacity but not in performance, as IOPS (I/Os per second) for HDDs have basically leveled off. As HDD capacity grows, so does the potential for more concurrently active data sets or files, and the increased contention for the single actuator arm causes erratic response-time delays. Excessive RAID rebuild times are a growing concern: it can now take several days to rebuild a failed HDD in a RAID array, degrading performance throughout the lengthy rebuild period. As HDD capacities continue to increase, the total time required for the RAID rebuild process will become prohibitive for many IT organizations, and higher-capacity HDDs could force a replacement for traditional RAID architecture implementations. HDD areal density is currently progressing at ~16% annually, about half the rate of tape technology, so HDD capacity is often increased by adding more platters to squeeze more recording surface into the drive. HDDs also have a much higher TCO and use considerably more energy than tape or SSD.
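A rough back-of-the-envelope sketch helps make both pressure points concrete. The seek time, rotational speed, capacity, and rebuild rate below are assumed, illustrative values, not figures from the article:

```python
# Rough estimates for two HDD pain points; all parameters are assumptions.

AVG_SEEK_MS = 8.5     # assumed average seek time for a 7,200 RPM drive
RPM = 7200

# Average rotational latency is half a revolution.
rotational_latency_ms = (60_000 / RPM) / 2   # ~4.17 ms

# A random I/O costs roughly one seek plus rotational latency, and neither
# improves as platters get denser -- which is why HDD IOPS have leveled off.
service_time_ms = AVG_SEEK_MS + rotational_latency_ms
print(f"Random IOPS per drive: {1000 / service_time_ms:.0f}")   # ~79

# RAID rebuild time scales with capacity, not with IOPS.
CAPACITY_TB = 14      # assumed high-capacity drive
REBUILD_MBPS = 100    # assumed effective rebuild rate under production load

rebuild_hours = CAPACITY_TB * 1_000_000 / REBUILD_MBPS / 3600
print(f"Single-drive rebuild time: {rebuild_hours:.0f} hours")  # ~39 hours
```

Under these assumptions a single drive tops out near 80 random IOPS regardless of capacity, while a 14 TB rebuild runs well over a day even before production load slows it further.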

For tape, significant technology improvements over the past 10 years have produced a tape renaissance. These changes have made tape the storage medium with the lowest acquisition cost and TCO, the highest capacity, the fastest data-transfer rates, the lowest energy consumption, and the highest reliability available. Tape reliability has surpassed that of HDDs by three orders of magnitude. Over the last 10 years, LTO tape has increased capacity 1,400%, performance 200%, and reliability 9,900%, while modern tape media life now exceeds 30 years. Tape data rates are now nearly 2x faster than HDDs' and are projected to be 5x faster by 2025. New features like the active archive, RAIT, and RAO add significant performance and access-time improvements beyond traditional tape. Using tape for cloud archives, rather than HDDs, greatly reduces cloud TCO and creates a “green cloud.” The steady innovation, compelling value proposition, and new architectural developments demonstrate that tape technology is not sitting still, and the renaissance is expected to continue indefinitely.

Summary

A fundamental shift in the storage landscape is well underway as high-performance data moves from HDDs onto flash SSD while lower-activity, resiliency, and archive data migrates from HDD to modern tape. For the foreseeable future, HDDs will remain the home for much primary-storage, mission-critical data along with the highest-availability applications, but HDD shipments have declined nearly 38% since their 2010 peak, and projections indicate no signs of that trend ending. As SSDs and tape continue to show rapid improvements and re-balance the traditional tiered storage hierarchy, HDDs will continue to feel more pressure. The storage squeeze play is underway, and HDDs are caught in the middle.

Read More

Don’t Be Blindsided By Invisible Storage Costs

Reading Time: < 1 minute

In this video, Brad Johns breaks down the real cost of ownership of your data storage over 10 years and explains why tape is the most affordable option for long-term data storage. Although many companies use a variety of storage platforms, tape is the most practical and most affordable for backup and archive.

For one petabyte of raw, non-compressible data, the cost savings versus high-capacity disk are about 74% over the course of 10 years; the savings increase to 84% when compared to the cloud. Brad Johns crunched the numbers, and tape is undeniably the cheapest option for long-term storage.
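As a minimal sketch of how a comparison like this yields a savings percentage, the snippet below computes tape's savings from 10-year TCO totals; the dollar figures are hypothetical placeholders chosen only to reproduce the percentages above, not numbers from Brad Johns' study:

```python
# Hypothetical 10-year TCO totals for 1 PB (placeholder values only).
tco_10yr = {
    "tape":  260_000,    # assumed: media, drives, library, power
    "disk":  1_000_000,  # assumed: arrays, refreshes, power and cooling
    "cloud": 1_625_000,  # assumed: capacity fees, egress, bandwidth
}

# Tape's savings relative to an alternative: 1 - (tape TCO / alternative TCO).
for alt in ("disk", "cloud"):
    savings = 1 - tco_10yr["tape"] / tco_10yr[alt]
    print(f"Tape vs {alt}: {savings:.0%} savings")   # 74% and 84%
```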

Find out how you can start saving on your data storage costs. Access the free TCO calculator here.

Read More

Why is Microsoft Azure Choosing Tape?

Reading Time: < 1 minute

Listen to Marvin McNett, Principal Developer Manager from Microsoft, as he explains the reasons tape is being used today in the Microsoft data center for its archival storage tier. View the video here:

Read More

What Are Redundant Arrays of Independent Tape (RAIT)?

Reading Time: < 1 minute

According to the Information Storage Industry Consortium, the total data rate for tape is improving by about 22.5% per year. One concept driving this throughput increase in the tape industry is RAIT (Redundant Arrays of Independent Tape). RAIT is ideal for large files that need massive amounts of throughput, such as a disaster recovery scenario where you need the ability to move your whole data center electronically to another location.
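As a purely illustrative sketch of the idea (real RAIT is implemented in the tape library and driver stack, not in application code), the Python below stripes fixed-size blocks across three virtual "tapes" and writes XOR parity to a fourth; throughput scales with the number of drives writing in parallel, and a lost or damaged cartridge can be reconstructed from the survivors:

```python
# Illustrative RAIT-style striping: blocks fan out across data "tapes"
# in round-robin fashion, and each stripe's XOR parity goes to a
# separate parity "tape". Toy code, not a real tape API.
from functools import reduce

def stripe_with_parity(data: bytes, n_data_tapes: int, block: int = 4):
    """Split data into n_data_tapes streams plus one XOR-parity stream."""
    blocks = [data[i:i + block] for i in range(0, len(data), block)]
    tapes = [bytearray() for _ in range(n_data_tapes)]
    parity = bytearray()
    for i in range(0, len(blocks), n_data_tapes):
        # Pad every block in the stripe to full width so the XOR lines up.
        stripe = [b.ljust(block, b"\x00") for b in blocks[i:i + n_data_tapes]]
        stripe += [b"\x00" * block] * (n_data_tapes - len(stripe))
        for tape, blk in zip(tapes, stripe):
            tape += blk  # each drive writes its block in parallel
        parity += reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), stripe)
    return tapes, parity

tapes, parity = stripe_with_parity(b"move the whole data center", n_data_tapes=3)

# Lose tape 1: XOR of the surviving tapes and parity reconstructs it.
rebuilt = bytes(x ^ y ^ z for x, y, z in zip(tapes[0], tapes[2], parity))
assert rebuilt == bytes(tapes[1])
```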

In this video, Fred Moore of Horison Information Strategies explains how RAIT works.

Read More

It’s Just a Matter of Time, as Storage Demands Rise

Reading Time: 2 minutes

Rich Gadomski
Vice President of Marketing
FUJIFILM Recording Media U.S.A., Inc

I recently returned from a speaking opportunity at the PRISM Conference, held in Miami on May 8th and 9th, where I spoke on the Role of Tape in Today’s Modern Offsite Storage Center. In addition to holding and protecting valuable data tape cartridges for archive, backup, and disaster recovery applications, offsite vaults also play a crucial role in providing an “air gap” against cyber criminals and their alarming malware and ransomware variants. Tape provides this functionality particularly well because of its powerful value proposition: it’s easily portable, has the lowest total cost of ownership, is the most reliable storage medium today, and has a long archival life and high capacity.

The audience, which included many regional data vault service providers from the U.S. and abroad, didn’t have to take my word on the value prop of tape. I backed it up with studies from leading IT research companies and articles from reliable publications such as the Wall Street Journal. I sprinkled in some news about tape usage from folks like Microsoft Azure. Finally, I detailed the bright future tape has based on its ability to continue to increase in areal density which will ensure increasing capacity and cost competitiveness without sacrificing performance, thanks in part to Fujifilm’s Barium Ferrite and Strontium Ferrite magnetic particle technology.

At the end of my presentation, during the Q&A, I got the following response and question: “Tape sounds great; how come we don’t see more tape volume flowing into our vaults?” One reason is the increasing data density of tape, which reduces unit volumes. Understandably, this is not great for the vault service providers, but it is actually a great benefit for end users: they can store more data on fewer units. Another factor to consider is the ever-increasing popularity of cloud storage over, say, the past five years. We have seen a move from on-premises, do-it-yourself storage to outsourced cloud services. This is especially true among startups, SMBs, and specific verticals where the cloud can provide unique functionality such as compute and file sharing.

But as the world turns ever so slowly, so do market conditions. Now that data storage pros have gotten comfortable with what the cloud can do, they are also starting to understand some of the downsides such as high TCO associated with egress fees and bandwidth. Security concerns might be mounting too in light of escalating cybersecurity breaches.

So at some point, tape will make sense again for many of the folks who tried cloud, considering TCO, budget constraints, and the need for an air gap. It’s just a matter of time, as long as demand for storage keeps rising on the back of relentless data growth, and as long as the hackers don’t quit the highly profitable, multi-trillion-dollar business of cybercrime.

Read More

Whitehead Cracks the Code on Cost-Effective Storage

Reading Time: 3 minutes

Whitehead Cracks the Code on Cost-Effective Storage

Whitehead Institute is a world-renowned non-profit research institution dedicated to improving human health through basic biomedical research. By cultivating a deeply collaborative culture and enabling the pursuit of bold, creative inquiry, Whitehead fosters paradigm-shifting scientific achievement. For more than 30 years, Whitehead faculty have delivered breakthroughs that have transformed our understanding of biology and accelerated development of therapies for such diseases as Alzheimer’s, Parkinson’s, diabetes, and certain cancers.

The Challenge

The Whitehead Institute, based in Cambridge, Mass., takes on some of the most complex and important medical and scientific challenges ever presented to mankind. In the 33 years since its founding, it has become one of the world’s leading molecular biology and genetics research institutes, employing multiple National Medal of Science winners. In fact, the Whitehead Institute was a key contributor to the 13-year Human Genome Project, a groundbreaking study that unlocked an entirely new understanding of how humans react to viruses, bacteria and drug therapy.

Research at the Whitehead Institute generates an enormous amount of data; genomic sequences and microscopy images alone can add up to multiple terabytes a week. A computing cluster then extracts further information from the raw data, producing processed data files. All of this translates into a unique set of challenges for the Institute’s IT team. Like the scientists they support, the IT team has had to address its challenges with innovative and experimental approaches.

“The scientists do everything from basic cellular process research to cancer and other diseases research,” said Paul McCabe, Senior Unix Systems Administrator and Data Center Specialist. “It varies widely, but the common denominator is that our research generates a huge amount of very valuable data.”

Due to the historical implications of their research, scientists at the Whitehead Institute constantly have to look back at previously collected data to forge ahead with their work.

“We tend to process data pretty heavily, and we have long-term data retention requirements,” said McCabe. “We not only store the data while it’s being actively processed by our researchers, but we also need to archive that data long after research papers are published in case the data behind the papers are ever challenged.”

As the Institute’s operations have become more dynamic and strenuous in nature, the legacy systems in place have had trouble keeping up with the increased workload and demand.

“Our organization had become a 24-hour endeavor, which was a challenge that was becoming more and more difficult to manage,” explained McCabe. “We were backing up for eight hours a day, duplicating for eight hours a day, and archiving the remaining eight hours. The equipment was being pushed to its limits, and if anything went wrong… we were simply out of hours.”

The Solution

As a result, McCabe and the IT team began researching high capacity data archiving alternatives that could meet their scalability, reliability and simplicity needs. At an IT tradeshow, the team was introduced to the Fujifilm Dternity, a data archiving system that combines the simplicity of disk and the economics of tape into a highly scalable, easy-to-manage solution.

“We also liked the way Fujifilm structures its licensing model in large bands, rather than the ‘by the terabyte’ model offered by other vendors. Overall, it matched very well with our requirements.”

Currently, the Whitehead Institute IT team is storing 171 TB of unique data on the Dternity NAS, with room to grow to more than 400 TB.

The Benefits

To date, the IT team has seen an overall decrease in administrative time associated with backing up and archiving research data, thanks to the system’s ease of use and scalability. There have been some cost savings already, and as the amount of data in the Dternity grows, the savings grow with it, since it is significantly cheaper to keep archive data on tape as opposed to disk.

“Capacity and scalability were obviously very important to us, but Dternity provided so much more,” said McCabe. “Our backup team is thrilled with how easy the system is to manage and how it frees them up to focus on other tasks, but I would say the most noticeable benefit is the overall peace of mind the Dternity provides us. We’re dealing with critical data, and I never have to worry because it is fully protected, backed up and available when needed.”

Read More

The Active Archive Is Integral to Your Data Storage Game Plan 

Reading Time: < 1 minute

Organizations are quickly learning the value of analyzing vast amounts of previously untapped archival data. Industry studies suggest that only about 20% of all digital data is ever accessed or used again after it is stored, underscoring the archival challenge. The need to effectively store, search for and retrieve enormous volumes of archival content is fueling new advancements in archive solutions.

This Active Archive Alliance report describes the state of the archive market and the role that the active archive plays. View the full report here.

Read More

Tape Air Gap Provides Defense Against Cybercrime

Reading Time: < 1 minute

According to Juniper Research, cybercrime is expected to become a $2.1 trillion problem by 2019. Using tape-based, offline storage creates an “air gap” that can prevent hackers from accessing your data. In this video, Fred Moore, president of Horison Information Strategies, explains the benefits of tape storage for data security. 

Read More

Tape Storage Council Releases Annual Report on State of Tape Industry

Reading Time: < 1 minute

Tape isn’t just raising the bar; it is the bar. According to a new Tape Storage Council report, in the last 10 years, LTO tape has increased capacity 1,400%, performance 200%, and reliability 9,900%. In addition to tape’s continual capacity improvements, tape is improving access time and data rate (throughput) with active archive, RAIT, and RAO, and offers the storage industry’s fastest data rates.

Tape is serving multiple roles in the enormous hyperscale, Internet, and cloud data centers, as tape capacity can easily scale without adding more drives. Check out the new 2018 State of the Tape Industry report featuring current trends, use cases and technology innovations for tape storage: http://tapestorage.org

Read More

Helping MLB Network Simplify Their Massive Content Archive

Reading Time: 4 minutes

Storytelling is a central facet of society that may have changed formats over the years but will never become obsolete. In today’s digital world, broadcasters and television networks focus on creating relatable stories to connect with their audiences, and they can’t do that without a wealth of readily available content.

MLB Network is the source for baseball stories of all kinds, from live games to studio shows and feature programming. Launched on January 1st, 2009, MLB Network is growing fast, reaching more than 70 million households today, delivering the best of America’s national pastime, all the time.

The Challenge: Digital Content Storage and Management

“MLB Network’s goal is to bring baseball to our audience every night with the highest levels of production quality, focus and enthusiasm throughout the year,” said Tab Butler, Director of Post Production and Media Management at MLB Network. “To accomplish this, we need constant access to our archives and current live game content. We need all information from every game securely stored and easily accessible.”

With multiple recordings of every game, along with multiple audio sources and pre-game, post-game, and isolated camera feeds, it is not uncommon for MLB Network to record more than 3,000 hours of content per week. That content is then categorized and cataloged for future use with the Emmy Award-nominated DIAMOND media asset management system. When the baseball season comes to a close, MLB Network continues to deliver baseball news 24/7, with special programming about a team, player, or other happenings in the sport. These individual projects require systematic archival that supports precise selection and instant access of specific files. The challenge is how to empower diverse departments to directly access their projects without heavy IT support.

MLB Network Deployed StrongBox for Project-Based Workflow

As an early adopter, MLB Network deployed a custom-developed StrongBox to manage archived projects in late 2011. StrongBox is a vendor-neutral, fully portable data vault for long-term file retention. Functioning as standard network-attached storage (NAS), StrongBox employs Linear Tape File System (LTFS) media as the principal storage medium to save money and enable a file-system view of all archived content. An internal disk cache enables rapid file access. With drag-and-drop functionality, StrongBox makes accessing archived projects easy for MLB Network, delivering content on demand to multiple, simultaneous users.

Since MLB Network is a 24/7 operation, the production staff uses a SAN-based Final Cut Pro (FCP) platform to develop programming that is updated throughout the season with the latest information. When these projects are ready for archival, the video, audio, and revision files, along with their metadata, are stored in StrongBox. The ability to recall an archived show and repackage it with current information, using tape as the storage medium, reduces the storage costs for the archived content. “StrongBox has been natural for streamlining this type of project-based storage. Instead of keeping projects on spinning disk, we’re able to offload to StrongBox,” explained Butler.

One of Butler’s key initiatives is finding ways to better automate the media management environment, allowing different departments to manage their own archival data instead of relying on his Media Management team to store and retrieve files. With StrongBox, editors have direct access to archived projects in real time, without having to depend on a Media Management operator to retrieve files.

High-Capacity, Low-Cost LTFS Tape for Proxy

Butler said that for the 2009 baseball season, 25 terabytes of spinning-disk storage was required for the video proxy data; for the 2010 season, 32 terabytes was required. Although this proxy information requires long-term storage, it is accessed very infrequently, so the cost of keeping this much data on spinning disk becomes extremely high. Yet even though it is long-tail content, it cannot be taken offline.

Through the custom-built DIAMOND System, MLB Network logs and categorizes HD recordings by viewing proxy video files, which are recorded in real time. The bulk of the recorded HD video content is stored within an LTO library and is searched and accessed using DIAMOND and the Grass Valley Aurora systems. Thus, Butler is investigating ways to use tape to further drive down the costs of his long-tail proxy content, which is currently on spinning disk.

“If I look down my future roadmap, my proxy environment is going to continue to grow year over year for the lifetime of the archive,” explained Butler. “Getting the proxy on LTO-5 media is much more cost-efficient for long-tail content.” With the introduction of LTFS, tape can be partitioned and indexed on a file level. This brings significant opportunity for media and broadcast companies. For MLB Network, LTFS brings the capability to efficiently maintain an accessible archive at the file level, while eliminating the heat generation, cooling requirements, spinning drives and other operational costs associated with disk storage.
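Because LTFS mounts a tape like any other file system, archiving a finished project can be an ordinary file copy rather than a call into a proprietary backup API. The sketch below illustrates the idea in Python; the mount point and project path are hypothetical, not MLB Network's actual layout:

```python
# Minimal sketch: an LTFS volume behaves like a normal directory tree.
# The paths below are hypothetical examples.
import shutil
from pathlib import Path

LTFS_MOUNT = Path("/mnt/ltfs")                    # assumed LTFS mount point
project = Path("/san/projects/2011_ws_recap")     # hypothetical FCP project

dest = LTFS_MOUNT / "archive" / project.name
dest.parent.mkdir(parents=True, exist_ok=True)

# Video, audio, revision files and metadata copy over as ordinary files;
# LTFS indexes them at the file level, so each can later be browsed and
# recalled individually instead of restoring a monolithic backup image.
shutil.copytree(project, dest)
print(f"Archived {project.name} -> {dest}")
```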

“StrongBox is a unique product, with flexibility that makes it functional in multiple use cases for MLB Network,” Butler continued. Currently, the MLB Network video archive consists of 10 to 12 petabytes of stored HD content and 275 terabytes of proxy content; a migration of that much data would be a significant undertaking. With the ability to scale up to 35 petabytes, StrongBox delivers low-cost, high-capacity storage with high-performance access that could provide a cornerstone in the foundation of MLB Network’s biggest business asset – its programming.

The Bottom Line

MLB Network delivers exciting, engaging baseball stories 24/7, 365 days a year. With a massive content archive that will only continue to grow, Tab Butler knows that LTFS tape is a cost-efficient and scalable way to manage MLB Network’s digital records. While editors constantly juggle multiple projects with demanding deadlines, StrongBox facilitates a project-based workflow, integrating with the editing environment to allow end users more direct access to their archived content. Ultimately, StrongBox helps MLB Network spend more time creating and delivering award-winning baseball stories and less time worrying about how to manage data.



Read More
