FUJIFILM INSIGHTS BLOG

Data Storage

How Do You Get Renewables to Power Data Centers?

Reading Time: < 1 minute

By diversifying your renewable energy mix, you can achieve energy efficiency gains even with data centers, which typically carry large power loads. In this Fujifilm video, Craig Lewis, Executive Director of the Clean Coalition, talks about how tape storage allows us to do more with more data while using a lot less energy. Watch it here:

Read More

How to Store a Zettabyte

Reading Time: < 1 minute

According to Aaron Ogus, partner development manager for Microsoft Azure Storage, storing a zettabyte of data will be financially feasible in 2020. Data growth will always exceed expectations, and tape has a more credible roadmap, one that is easier to achieve with less investment. Learn more in this video blog:

Read More

The Impact of GDPR on Your Data Management Strategy

Reading Time: 2 minutes

By Floyd Christofferson,
SVP of Products at Strongbox Data

It is no illusion that every time you turn around it seems there is another report of a high-profile hack of sensitive personal data, impacting hundreds of millions of people all over the world. The recent Equifax hack released personal financial data of over 143 million consumers, but that was not an isolated incident. In 2016 and 2017 so far there have been at least 26 major hacks around the world that have released personal data of more than 700 million people. These include hacks of telecommunication companies, financial institutions, government agencies, universities, shopping sites, and much more.

Hacks are not a new problem. But in a global economy with often conflicting political and economic priorities at stake, there has been no comprehensive approach to ensuring that people have the right to protect, and delete if they want, all of their personal data.

The European Union’s new GDPR (General Data Protection Regulation) went into effect in May 2018. Although GDPR is designed to protect European citizens, the rules and penalties apply to any company from any country that does business in Europe. And the penalties are significant, with companies at risk of being fined up to 4% of their global annual gross revenues or €20 million (whichever is greater) for failing to comply with strict right-to-be-forgotten and privacy protections for customer data.

As a result, there is a growing panic among businesses as they try to figure out how to solve this problem in time, and how to do so with existing data management and storage resources that are not designed for this task. And the concern is not only in Europe. Companies in the US and around the world that have customers in Europe are also scrambling to ensure they are in full compliance by the deadline. But according to Gartner, by the end of 2018 over 50% of companies affected by the GDPR worldwide will not be in full compliance with its requirements.

In this paper we offer an overview of the key provisions of GDPR that impact storage and data management for both structured and unstructured data. In subsequent technical briefs, we will go into more detail about specific technical solutions to help ensure your data environment is in compliance, even with your existing storage and data infrastructure.

Read More

What Exactly is Barium Ferrite?

Reading Time: 3 minutes

By: Ken Kajikawa

The marketplace is full of examples of unique manufacturing ingredients that make products special. McDonald’s has its special sauce. Kentucky Fried Chicken has its secret recipe. Bush’s Beans has a talking dog that won’t disclose how they make their baked beans. Well, at Fujifilm, we too have our secret sauce: it’s called Barium Ferrite, and we’re happy to share our story.

What makes Fujifilm Ultrium LTO-6 and LTO-7 different from past generations of Fujifilm LTO media? The answer is Barium Ferrite or, for you chemistry geeks out there, BaFe. Okay, so you are probably asking: what does this mean for me? The answer lies in Barium Ferrite magnetic particles. These particles enable higher data density and superior performance. Barium Ferrite allows LTO-6 and LTO-7 media (and future generations) to deliver the following extraordinary benefits:

1) Higher Capacity: A higher signal-to-noise ratio enables use of smaller particles, resulting in higher capacity

Fujifilm successfully developed a type of BaFe particulate tape with a signal-to-noise ratio that is four decibels higher than that of a commercially available LTO-5 tape at a very high linear density and with a thermal stability sufficient for long-term archiving over at least 30 years. This high recording performance and thermal stability were achieved by using a tape with a smooth surface and highly oriented fine magnetic particles made possible by our Nanocubic coating technology.
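For readers who don’t think in decibels, the standard conversion below shows that the 4 dB gain cited above corresponds to roughly a 2.5x improvement in signal-to-noise power ratio. This is a minimal sketch: only the conversion formula is added here; the 4 dB figure comes from the paragraph above.

def db_to_power_ratio(db: float) -> float:
    # Standard conversion: a decibel difference corresponds to a power ratio of 10^(dB/10).
    return 10 ** (db / 10)

# The ~4 dB SNR gain cited above is roughly a 2.5x signal-to-noise power ratio.
print(round(db_to_power_ratio(4.0), 2))  # 2.51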

Metal particle (MP) tape requires a protective passivation coating to prevent oxidation. That passivation layer also limits the reduction in particle size that can be achieved. BaFe particles are oxides, so no passivation layer is needed, and smaller particles with better stability can be achieved with BaFe.

2) Longer Archival Life: Barium Ferrite is a chemically stable material with no magnetic property loss

Data is growing at an exponential rate and will continue growing for the foreseeable future. You need to manage and store this data without worrying about whether it is secure and whether you will be able to retrieve it at some point in the future. Using media based on Barium Ferrite assures that your data is stored on the most technologically advanced high-density media available today.

Fujifilm believes that advanced BaFe particulate tape shows promise for use in future generations of magnetic particulate tape. We expect tape storage systems using BaFe particle media to continue to provide sufficient storage capacity at a low TCO for many years to come. And after that, there is a new magnetic particle already under development by Fujifilm, called Strontium Ferrite (SrFe), to ensure continuing areal density gains and to meet the demands of future tape roadmaps. But SrFe is a subject for another blog!

Read More

HDDs Losing Ground to SSD and Tape

Reading Time: 3 minutes

By: Fred Moore, President
Horison Information Strategies
www.horison.com

Introduction

The traditional storage market is shifting as applications more effectively exploit the tiered storage hierarchy to better align availability requirements, service levels, and data protection mandates with the optimal infrastructure cost. Clearly HDDs remain, and for the foreseeable future will continue to be, the workhorse of the storage hierarchy. But they are steadily losing market share for response-time-critical, high-performance applications to the growing deployment of SSD technology, while losing many lower-activity, archival, and resilience applications to significantly improved modern tape technology. The pressure on the HDD industry is illustrated by worldwide HDD shipments (data from Statista), which peaked at 651.3 million units in 2010 and dropped roughly 38% to 403.71 million in 2017; shipments are predicted to fall to 341.95 million in 2020. Data that in prior years was often stored on HDDs without much thought to storage optimization is now taking up residence elsewhere. As storage pools get larger, the need to optimize storage by getting the right data in the right place also grows.

What’s Behind the Shift?

SSDs mean high performance. SSDs have successfully addressed much of the high-performance storage market that was basically the exclusive domain of HDDs. Within the next 12-18 months, solid-state flash arrays currently using 2D NAND are projected to improve in performance by a factor of 10x and double in density and cost-effectiveness as 3D NAND and 3D XPoint technology begins to emerge. This technological progression will significantly change the dynamics of the performance-centric storage market. Compared to HDDs, SSDs have higher data-transfer rates, faster access times, better reliability, much lower latency, and lower energy consumption. For most users, the consistent and high speed at which SSDs can read and write data and meet service levels is the key attraction. Because SSDs have no moving parts, they can operate at speeds far above those of a typical HDD. Fragmentation is not an issue for SSDs. Files can be written anywhere with little impact on R/W times, resulting in read times far faster than any HDD.

HDDs can handle every data type and have carried most of the load for the storage industry for years; however, future challenges for HDDs are mounting. HDDs are increasing in capacity but not in performance, as the IOPS (I/Os per second) for HDDs have basically leveled off. The potential for more concurrently active data sets or files increases as HDD capacity grows, and the increased contention for the single actuator arm causes erratic response-time delays. Excessive RAID rebuild times are a growing concern: it can now take several days to rebuild a failed HDD in a RAID array, degrading performance during the lengthy rebuild period. As HDD capacities continue to increase, the total time required for the RAID rebuilding process will become prohibitive for many IT organizations, and higher-capacity HDDs could force a replacement of traditional RAID architecture implementations. HDD areal density is currently progressing at ~16% annually, about half the rate of tape technology. HDD capacity is often increased by adding more platters to squeeze more recording surface area into the drive as areal density gains slow. HDDs have a much higher TCO and use considerably more energy than tape or SSD.
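As a back-of-the-envelope illustration of why rebuild times balloon with capacity, here is a minimal sketch; the drive sizes and the 100 MB/s sustained rebuild rate are illustrative assumptions, not vendor figures.

def rebuild_hours(capacity_tb: float, rebuild_mb_per_sec: float) -> float:
    # Naive estimate: a rebuild re-writes the entire replacement drive,
    # so time scales linearly with capacity at a given sustained rate.
    capacity_mb = capacity_tb * 1_000_000
    return capacity_mb / rebuild_mb_per_sec / 3600

# Illustrative only: a 4 TB drive rebuilt at 100 MB/s takes ~11 hours,
# while a 16 TB drive at the same (often contended) rate takes ~44 hours.
for tb in (4, 16):
    print(tb, "TB ->", round(rebuild_hours(tb, 100), 1), "hours")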

For tape, significant technology improvements over the past 10 years have resulted in a tape renaissance. These changes enable tape to provide the lowest acquisition cost and TCO, the highest capacity, the fastest data transfer rates, and the lowest energy consumption of any storage medium available, along with the best reliability. Tape reliability has surpassed that of HDDs by three orders of magnitude. Over the last 10 years, LTO tape has increased capacity 1,400%, performance 200%, and reliability 9,900%, while modern tape media life now exceeds 30 years. Tape data rates are now nearly 2x faster than HDDs and are projected to be 5x faster by 2025. New features like active archive, RAIT, and RAO add significant performance and access-time improvements beyond traditional tape. Using tape for cloud archives, rather than HDDs, greatly reduces cloud TCO and creates a “green cloud”. The steady innovation, compelling value proposition, and new architectural developments demonstrate that tape technology is not sitting still, and the renaissance is expected to continue indefinitely.
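To make the percentage figures above concrete, the quick conversion below turns a percentage increase into a multiplier; the percentages are from the paragraph above, and the arithmetic is the only thing the snippet adds.

def pct_increase_to_multiplier(pct: float) -> float:
    # A "1,400% increase" means the new value is 1 + 1400/100 = 15x the old one.
    return 1 + pct / 100

for metric, pct in [("capacity", 1400), ("performance", 200), ("reliability", 9900)]:
    print(metric, "->", pct_increase_to_multiplier(pct), "x")  # 15x, 3x, 100x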

Summary

A fundamental shift in the storage landscape is well underway as high-performance data moves from HDDs onto flash SSD while lower-activity, resiliency, and archive data migrates from HDD to modern tape. For the foreseeable future, HDDs will remain the home for much primary-storage, mission-critical data along with the highest-availability applications, but HDD shipments have declined by more than a third since their 2010 peak, and projections show no sign of that trend ending. As SSDs and tape continue to show rapid improvements and re-balance the traditional tiered storage hierarchy, HDDs will feel more and more pressure. The storage squeeze play is underway, and HDDs are caught in the middle.

Read More

Don’t Be Blindsided By Invisible Storage Costs

Reading Time: < 1 minute

In this video, Brad Johns breaks down the real cost of ownership of your data storage over 10 years and explains why tape is the most affordable option for long-term data storage. Although many companies use a variety of different storage platforms, tape is the most practical and the most affordable for backup and archive.

For one petabyte of raw, non-compressible data, the cost savings versus high-capacity disk is about 74% over the course of 10 years; the savings increase to 84% when compared to the cloud. Brad Johns crunched the numbers, and tape is undeniably the cheapest option for long-term storage.
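As a minimal sketch of how such a comparison is computed, the snippet below uses placeholder cost figures; the $/TB/year values are assumptions for illustration only, not the inputs behind Brad Johns’ model or the TCO calculator linked below.

def ten_year_cost(tb_stored: float, cost_per_tb_per_year: float, years: int = 10) -> float:
    # Cumulative cost of keeping a fixed amount of data for a number of years.
    return tb_stored * cost_per_tb_per_year * years

# Placeholder $/TB/year figures -- illustrative assumptions only.
tape = ten_year_cost(1000, 10)   # 1 PB kept on tape
disk = ten_year_cost(1000, 40)   # 1 PB kept on high-capacity disk
cloud = ten_year_cost(1000, 65)  # 1 PB kept in a cloud archive tier

print(f"savings vs disk:  {1 - tape / disk:.0%}")   # ~75% with these assumptions
print(f"savings vs cloud: {1 - tape / cloud:.0%}")  # ~85% with these assumptions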

Find out how you can start saving on your data storage costs. Access the free TCO calculator here.

Read More

Why is Microsoft Azure Choosing Tape?

Reading Time: < 1 minute

Listen to Marvin McNett, Principal Developer Manager from Microsoft, as he explains the reasons tape is being used today in the Microsoft data center for its archival storage tier. View the video here:

Read More

What is Redundant Arrays of Independent Tape (RAIT)?

Reading Time: < 1 minute

According to the Information Storage Industry Consortium, the total data rate for tape is improving by 22.5% per year. One concept driving this throughput increase in the tape industry is RAIT (Redundant Arrays of Independent Tape). RAIT is ideal for large files that need massive amounts of throughput, such as a disaster recovery scenario where you need the ability to move your whole data center's data electronically to another location.

In this video, Fred Moore of Horison Information Strategies explains how RAIT works.
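For readers who want a feel for the mechanics before watching, here is a rough conceptual sketch of RAIT striping; the drive count, chunk size, and single XOR-parity scheme are illustrative assumptions, not the design of any particular RAIT implementation.

from functools import reduce

def rait_stripes(data: bytes, num_data_drives: int, chunk_size: int):
    # Split the payload into stripes: one chunk per data drive plus an XOR
    # parity chunk (RAID-5-style), so any single lost tape can be reconstructed.
    stripes = []
    stripe_bytes = chunk_size * num_data_drives
    for offset in range(0, len(data), stripe_bytes):
        stripe = data[offset:offset + stripe_bytes].ljust(stripe_bytes, b"\x00")
        chunks = [stripe[i * chunk_size:(i + 1) * chunk_size] for i in range(num_data_drives)]
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), chunks)
        stripes.append((chunks, parity))
    return stripes

# Writing the four data chunks of each stripe to four drives in parallel roughly
# quadruples throughput versus a single drive; a fifth drive holds the parity.
stripes = rait_stripes(b"disaster recovery payload" * 1000, num_data_drives=4, chunk_size=4096)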

Read More

It’s Just a Matter of Time, as Storage Demands Rise

Reading Time: 2 minutes

Rich Gadomski
Vice President of Marketing
FUJIFILM Recording Media U.S.A., Inc.

I recently returned from a speaking opportunity at the PRISM Conference, held in Miami on May 8th and 9th, where I spoke on the Role of Tape in Today’s Modern Offsite Storage Center. In addition to holding and protecting valuable data tape cartridges for archive, backup, and disaster recovery applications, offsite vaults also play a crucial role in providing an “air gap” against cyber criminals and their alarming malware and ransomware variants. Tape provides this functionality particularly well because of its powerful value proposition: it’s easily portable, has the lowest total cost of ownership, is the most reliable storage medium today, and has long archival life and high capacity.

The audience, which included many regional data vault service providers from the U.S. and abroad, didn’t have to take my word on the value prop of tape. I backed it up with studies from leading IT research companies and articles from reliable publications such as the Wall Street Journal. I sprinkled in some news about tape usage from folks like Microsoft Azure. Finally, I detailed the bright future tape has based on its ability to continue to increase in areal density, which will ensure increasing capacity and cost competitiveness without sacrificing performance, thanks in part to Fujifilm’s Barium Ferrite and Strontium Ferrite magnetic particle technology.

At the end of my presentation, during the Q&A, I got the following question: “Tape sounds great, so how come we don’t see more tape volume flowing into our vaults?” One reason is the increasing data density of tape, which reduces unit volumes. Understandably this is not great for the vault service providers, but it is actually a great benefit for end users: they can store more data on fewer cartridges. Another factor to consider is the ever-increasing popularity of cloud storage over, say, the past five years. We have seen a move from on-premises, do-it-yourself storage to outsourced cloud services. This is especially true among startups, SMBs, and specific verticals where the cloud can provide unique functionality such as compute and file sharing.

But as the world turns ever so slowly, so do market conditions. Now that data storage pros have gotten comfortable with what the cloud can do, they are also starting to understand some of the downsides, such as the high TCO associated with egress fees and bandwidth. Security concerns might be mounting too in light of escalating cybersecurity breaches.

So at some point, tape will make sense again for many of the folks who tried cloud, considering TCO, budget constraints, and the need for an air gap. It’s just a matter of time, as long as demand for storage keeps rising on the back of relentless data growth. And so long as the hackers don’t quit the highly profitable, multi-trillion-dollar business of cybercrime.

Read More

Whitehead Cracks the Code on Cost-Effective Storage

Reading Time: 3 minutes

Whitehead Institute is a world-renowned non-profit research institution dedicated to improving human health through basic biomedical research. By cultivating a deeply collaborative culture and enabling the pursuit of bold, creative inquiry, Whitehead fosters paradigm-shifting scientific achievement. For more than 30 years, Whitehead faculty have delivered breakthroughs that have transformed our understanding of biology and accelerated development of therapies for such diseases as Alzheimer’s, Parkinson’s, diabetes, and certain cancers.

The Challenge

The Whitehead Institute, based in Cambridge, Mass., takes on some of the most complex and important medical and scientific challenges ever presented to mankind. In the 33 years since its founding, it has become one of the world’s leading molecular biology and genetics research institutes, employing multiple National Medal of Science winners. In fact, the Whitehead Institute was a key contributor to the 13-year Human Genome Project, a groundbreaking study that unlocked an entirely new understanding of how humans react to viruses, bacteria and drug therapy.

Research at the Whitehead Institute generates an enormous amount of data. Genomic sequences and microscopy images alone can add up to multiple terabytes a week. Information is further extracted from the raw data using a computing cluster, which leads to the creation of processed data files. This all translates into a unique set of challenges for the Institute’s IT team. Like the scientists they support, the IT team has had to address its challenges with innovative and experimental approaches.

“The scientists do everything from basic cellular process research to cancer and other diseases research,” said Paul McCabe, Senior Unix Systems Administrator and Data Center Specialist. “It varies widely, but the common denominator is that our research generates a huge amount of very valuable data.”

Due to the historical implications of their research, scientists at the Whitehead Institute constantly have to look back at previously collected data to forge ahead with their work.

“We tend to process data pretty heavily, and we have long-term data retention requirements,” said McCabe. “We not only store the data while it’s being actively processed by our researchers, but we also need to archive that data long after research papers are published in case the data behind the papers are ever challenged.”

As the Institute’s operations have become more dynamic and strenuous in nature, the legacy systems in place have had trouble keeping up with the increased workload and demand.

“Our organization had become a 24-hour endeavor, which was a challenge that was becoming more and more difficult to manage,” explained McCabe. “We were backing up for eight hours a day, duplicating for eight hours a day, and archiving the remaining eight hours. The equipment was being pushed to its limits, and if anything went wrong… we were simply out of hours.”

The Solution

As a result, McCabe and the IT team began researching high capacity data archiving alternatives that could meet their scalability, reliability and simplicity needs. At an IT tradeshow, the team was introduced to the Fujifilm Dternity, a data archiving system that combines the simplicity of disk and the economics of tape into a highly scalable, easy-to-manage solution.

“We also liked the way Fujifilm structures its licensing model in large bands, rather than the ‘by the terabyte’ model offered by other vendors. Overall, it matched very well with our requirements.”

Currently, the Whitehead Institute IT team is storing 171 TB of unique data on the Dternity NAS, with room to grow to more than 400 TB.

The Benefits

To date, the IT team has seen an overall decrease in administrative time associated with backing up and archiving research data due to the system’s ease of use and scalability. There have been some cost savings already, and as the amount of data in the Dternity grows, the cost savings grow with it. It is significantly cheaper to keep archive data on tape as opposed to disk. “Capacity and scalability were obviously very important to us, but Dternity provided so much more,” said McCabe. “Our backup team is thrilled with how easy the system is to manage and how it frees them up to focus on other tasks, but I would say the most noticeable benefit is the overall peace of mind the Dternity provides us. We’re dealing with critical data, and I never have to worry because it is fully protected, backed up and available when needed.”

Read More

LET’S DISCUSS YOUR NEEDS

We can help you reduce cost, decrease vendor lock-in, and increase productivity of storage staff while ensuring accessibility and longevity of data.

Contact Us >