FUJIFILM INSIGHTS BLOG

Data Storage

Breaking Down Data Silos — Highlights From SC18

Reading Time: 3 minutes

By Kevin Benitez

I had the opportunity to attend SC18 last month in Dallas. Every year the Supercomputing Conference brings together the latest in supercomputing technology and the most brilliant minds in HPC. People from all over the world and from many different backgrounds converged this year for the 30th Supercomputing Conference.

As you can imagine, some of the demonstrations were absolutely mind-blowing and worth sharing. For starters, power consumption in data centers is becoming more of a challenge as data rates continue to surge. Fortunately, 3M was live on the trade show floor tackling this issue by demonstrating immersion cooling for data centers, which has the potential to slash energy use and cost by up to 97%. As this technology continues to evolve, we could see huge gains in performance and in reducing environmental impact.

The race to dominate quantum computing continues! IBM’s 50-qubit quantum computer made an appearance at this year’s show. What does it mean to have a computer with 50 qubits working perfectly? (Side note: in quantum computing, a qubit is the basic unit of quantum information.) According to Robert Schoelkopf, a Yale professor, if you had 50 or 100 qubits you could “do unfathomable calculations that can’t be replicated on any classical machine, now or ever.” Although the quantum computer churns out enough computational power to rank within the top ten supercomputers in the world, the device can only compute for 100 milliseconds due to a short-lived power supply.

StrongBox Data’s flagship product, StrongLink, was demonstrated on the show floor as a way to store and manage the vast amounts of data that research universities and laboratories are producing. StrongLink is a software solution that simplifies and reduces the cost of managing multi-vendor storage environments. StrongLink provides multi-protocol access across any file system, object storage, tape, and cloud in a global namespace. Users maintain a consistent view of files regardless of where they are stored, which lets them optimize their storage environment for performance and cost.

Recently the University of Southampton’s supercomputer Iridis 5 teamed up with StrongLink to get more value out of its data. Oz Parchment, Director of the University’s iSolutions IT support division, commented in March saying: “One way StrongLink interested us was its cognitive component, the ability to look at and match up metadata at scale, which gets interesting when you combine that with different data infrastructures. Our setup currently includes large-scale tape stores, large-scale disc stores, some of that being active data, some of that being nearline data, some being effectively offline data. But then, by linking these into the [Iridis] framework, which StrongLink allows us to do, we can connect these various data lakes that we have across the research side of the organization, and begin to create an open data space for our community where people in one discipline can look through data and see what kinds of data are available in other communities.”

Never has HPC been more crucial. As we say here at Fujifilm “Never Stop Transforming Ourselves and the World.”

Read More

Flash, HDDs and Tape Slay Data Challenges

Reading Time: 3 minutes

By Rich Gadomski

At Storage Visions 2018, held in Santa Clara this past October, I had the opportunity to talk about the future outlook for tape as attendees wanted to know how they were going to store all the data that’s being created. The session I participated in was entitled “Epic Battles with Classic Heroes – Flash, HDDs and Tape Slay Data Challenges.” As the title suggests, battling exponential data growth takes more than one storage media type to effectively handle the deluge of data that’s being created (now estimated to be 33 ZB in 2018 and growing to 175 ZB by 2025, according to IDC).
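A quick back-of-the-envelope check of what those IDC figures imply (a sketch in Python; the growth rate is derived here, not an IDC number):

```python
# Compound annual growth rate implied by IDC's estimates:
# 33 ZB in 2018 growing to 175 ZB by 2025.
zb_2018 = 33
zb_2025 = 175
years = 2025 - 2018

cagr = (zb_2025 / zb_2018) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 27% per year
```

In other words, the global datasphere would need to grow by more than a quarter every year to hit that 2025 estimate.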

As our session moderator, Jean Bozeman from Hurwitz & Associates, pointed out in her introduction, a variety of storage workloads create the need for a spectrum of storage technologies. Certainly the need for speed at the top of the storage pyramid is being addressed by performance HDDs and increasingly by ever-evolving solid-state drives.

The need for longer-term storage at scale is the domain of capacity HDD and, of course, tape. Managing the data deluge is all about having the right data on the right storage medium at the right time. Not everything can or should be stored on expensive high-performance flash. You need capacity-optimized media for long-term data retention, and that’s where HDD and tape come into play (often in a user-friendly active archive environment).

When it comes to the future of capacity in the domain of HDD, current perpendicular magnetic recording technology has reached “superparamagnetic” limitations, where increasing areal density to increase capacity is not a viable option. With helium-filled HDDs, more platters can fit in the same form factor as air-filled HDDs, but this has not allowed significant growth in capacity. New technology concepts such as Heat Assisted Magnetic Recording (HAMR) and Microwave Assisted Magnetic Recording (MAMR) are on the horizon, but market availability has been delayed. There is also the potential for vacuum-sealed HDDs, with better operating characteristics than helium, that could help HAMR and MAMR HDDs reach 40–50 TB at some point in the future.

But fundamentally, increasing the capacity of a storage medium, and ultimately reducing its cost, is best achieved by increasing areal density. This is where magnetic tape technology really shines: today’s modern tape, with per-cartridge capacities already as high as 20 TB, has a very low areal density compared to HDD.

Therefore, tape has a long runway before facing areal density limits. As a result, future tape roadmaps have the potential to achieve up to 220 TB on a standard form factor cartridge using Barium Ferrite (BaFe) magnetic particles, and up to 400 TB using next-generation Strontium Ferrite (SrFe). At the same time, both BaFe and SrFe can maintain magnetic signal integrity for at least 30 years, making them ideal not only for high capacity but for cost-effective long-term data retention as well.

“No wonder the cloud guys are all using tape now,” exclaimed an attendee in the audience during the Q&A. They certainly also use a lot of flash and a lot of disk too. It is an epic battle and it takes a spectrum of storage technologies to slay the data challenges.

Read More

YES Network Says “Yes” to Migrating to a Virtual Environment

Reading Time: 2 minutes

Since its launch in March 2002, the YES Network has been the number 1 rated regional sports network in the USA. From its inception, YES has striven for the highest-quality picture and sound and to be at the leading-edge of broadcast technology. To this end, YES was constructed as an all-digital facility.

To manage its growing library, the network launched a digital archive project. Initially, the plan was to find a way to convert HD content into a file format that could be stored in a system so that producers and directors could easily find and retrieve selected plays to be integrated into classic game and other shoulder programmes. Avid had provided the YES editing systems from the outset, and the original five Avid editing systems were connected to an Avid Omega JBOD array for storage.

This paper provides a deep dive into the pros and cons of local, cloud, solid-state and linear tape-open storage solutions. It opens with YES Network Director of Engineering and Technology John McKenna’s account of the network’s digital transformation, followed by Engineering Department Project Manager of Broadcast Systems Jason Marshall’s summary of the migration from modular to virtual technology. The paper details ratios on high-performance broadcast data systems, as well as power consumption and solution trade-offs, and aims to give the reader confidence in virtualising a media asset system and in converting linear systems to packet-based media technologies, including transcoding, Active Archive and future SMPTE 2110 solutions.

Read the full paper here: Migrating to a Virtual Environment

Read More

Used / Recertified / Reconditioned Tape – Is It Worth the Risk?

Reading Time: 3 minutes

By Ken Kajikawa 

“If it’s too good to be true, it probably is.” I’m not sure who first coined that old adage, but it certainly applies to used data tape regardless of whether it’s called “recertified” or “reconditioned.” Let’s review some of the facts.

Recertified? Is there such a thing as legitimately recertified tape? The answer is no, and here’s why. The equipment and procedures to fully certify and control the quality of tape performance are available only to licensed manufacturers. No one else has them, so used tape can’t be “recertified” by third parties. Fujifilm does not recertify tape; Fujifilm provides only new/unused product to the marketplace.

Reconditioned? Okay, how about reconditioned tape? A tape cartridge cannot be reconditioned either. Once a tape is scratched, creased, edge-damaged, or degraded, it can’t be restored to its original factory-new condition. Additionally, a recertifier’s equipment and practices, such as data erasure, could themselves damage the tape.

Data Erasure? Do the so-called recertifiers actually erase the previous owner’s data? Not exactly. Typically a table-of-contents overwrite is all that is performed, if anything. Actually overwriting the data on a common LTO data tape would take hours, and degaussing is not a quick fix, as it would destroy the servo tracks and render the tape useless. A few years back, Fujifilm acquired 50 “recertified” LTO data tapes from resellers. Expert analysis determined that 48 of the 50 tapes still contained original user data, including highly confidential customer data.
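The “would take hours” claim is easy to sanity-check. A minimal sketch, assuming LTO-7’s native figures of 6.0 TB at 300 MB/s (illustrative; actual pass time depends on the drive, the data, and verification):

```python
# Rough time for a single full overwrite pass of an LTO-7 tape,
# assuming the native capacity and transfer rate (illustrative figures only).
capacity_bytes = 6.0e12       # 6.0 TB native capacity
rate_bytes_per_s = 300e6      # 300 MB/s native transfer rate

seconds = capacity_bytes / rate_bytes_per_s
print(f"One overwrite pass: {seconds / 3600:.1f} hours")  # about 5.6 hours
```

Multiply that by dozens or hundreds of cartridges and it becomes clear why a recertifier would be tempted to skip a true overwrite.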

So my advice here is: don’t be a buyer or seller of used tape!

Still considering used tape? Read on for more details about some of the potential hazards! 

  • Storage Environment: Tape must be properly stored and cared for in controlled environments (preferably cool, dry conditions for archive storage 16°C to 25°C and 20% to 50% relative humidity). Used tapes may have been stored for extended periods under poor environmental controls. Tape media degradation and damage are all possibilities that will not be readily apparent to the end user.
  • Care and Handling: Tape must be properly handled. Poor transportation and handling practices, (dropped tapes) could result in internal damage, poor tape pack alignment, and/or tape edge damage.
  • Proper Tape Operating Environment During Prior Usage: Airborne contaminants or dust can get wound into the tape pack and damage the tape. In addition, excessive heat at the tape-head interface can damage tape. This can result from drives running above their maximum operating temperature specification because they were integrated inside units lacking sufficient ventilation, or from a combination of high ambient room temperature and a drive mounted inside a unit or rack with marginal allowance for thermal transfer (not enough cooling capability under higher ambient temperature conditions).
  • Drive Maintenance: A previous user’s improperly maintained or malfunctioning tape drive could have damaged the tape, or the mechanical functionality of the cartridge.
  • Risk of Damage to Existing Drives and Tapes: Many tapes share the same drives in a typical usage environment. Debris left behind by used tape that is scratched or otherwise physically damaged will certainly contaminate good tapes that follow on those same drives.

Fujifilm always recommends against used/recertified media because the customer can never be assured of the quality, performance, and reliability in several key areas as discussed above.

If you are taking the time and resources to back up your data, why risk your data to a cartridge with unknown history?

Fujifilm high-capacity data cartridges are consistently manufactured to the highest specifications and standards, and are fully supported and backed by a lifetime warranty against manufacturing defects.


Read More

The Cost Viability of Tape for Data Protection and Archive

Reading Time: < 1 minute

The most efficient data protection utilizes proper archiving, and with the data growth rate almost doubling, tape storage is growing from an archiving standpoint. In this Fujifilm Summit video, Dr. James Cates, SVP of Archive Development at Oracle, discusses the advantages of tape for archiving. Watch it here:


Read More

How Does Google Do Backups?

Reading Time: < 1 minute

In this Fujifilm Summit video, Raymond Blum, Staff Site Reliability Engineer at Google, explains how Google handles its backups and the importance of diversity when it comes to storage. Watch it here:

Read More

How Do You Get Renewables to Power Data Centers?

Reading Time: < 1 minute

By diversifying your renewable energy mix, you can achieve energy-efficiency gains even with data centers, which typically carry large power loads. In this Fujifilm video, Craig Lewis, Executive Director of the Clean Coalition, talks about how tape storage allows us to do more work with more data storage while using a lot less energy. Watch it here:


Read More

How to Store a Zettabyte

Reading Time: < 1 minute

According to Aaron Ogus, Partner Development Manager for Microsoft Azure Storage, storing a zettabyte of data will be financially feasible in 2020. Data growth will always exceed expectations, and tape has a more credible roadmap, one that is easier to achieve without as much investment. Learn more in this video blog:

Read More

Taking Advantage of LTO-7 “Type M”

Reading Time: 3 minutes

Rich Gadomski
Vice President of Marketing
FUJIFILM Recording Media U.S.A., Inc

Sometimes change can lead to confusion, or at least to a lot of questions. Take changes in the tax laws for example. I won’t get into details, but suffice it to say I feel sorry for tax preparers come 2019!

In the realm of tape storage, we too have had some changes to the traditional roll-out of next-generation LTO tape drives and media. But rather than focus on confusing change, let’s focus on the luxury of having options. That’s exactly what we have in the option offered with the introduction of LTO-8 drives that can use standard LTO-8, LTO-7, or… LTO-7 Type M tape cartridges.

For the first time in the history of LTO technology dating back to 2000, users can now write to the previous-generation tape cartridge at a higher density than previously allowed. Specifically, LTO-8 drive users can choose the option to write 9.0 TB native at 300 MB per second on a new/unused LTO-7 tape that previously maxed out at 6.0 TB native on LTO-7 drives. Assuming 2.5:1 data compression, 22.5 TB can be stored on an LTO-7 Type M cartridge with transfer speeds up to 750 MB per second. That’s a lot of capacity… and really fast!
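The compressed figures above are simply the native numbers scaled by the assumed 2.5:1 compression ratio; a minimal sketch of the arithmetic:

```python
# LTO-7 Type M figures from the text: native capacity and speed,
# scaled by an assumed 2.5:1 compression ratio.
native_tb = 9.0        # native capacity, TB
native_mb_s = 300      # native transfer rate, MB/s
compression = 2.5      # assumed compression ratio

print(f"Compressed capacity: {native_tb * compression} TB")          # 22.5 TB
print(f"Compressed transfer: {native_mb_s * compression:.0f} MB/s")  # 750 MB/s
```

Keep in mind that 2.5:1 is an assumed average; already-compressed data (video, encrypted files) will land much closer to the native 9.0 TB.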

Beyond extra capacity, LTO-7 Type M is a good option economically speaking, since there is no price difference between standard LTO-7 media already in the market and LTO-7 Type M media. This means LTO-7 Type M is 33% less on a cost per TB basis than LTO-7 and 45% less than LTO-8 media at current internet reseller prices.

Taking advantage of the LTO-7 Type M option is easy. First, make sure your tape library is equipped with LTO-8 drives and is upgraded to initialize LTO-7 Type M media for 9.0 TB capacity. If necessary, contact your library vendor to confirm this detail or to enable it. For your library to distinguish standard LTO-7 from Type M, you need to use “M8” designated barcode labels as opposed to “L7” designated barcode labels. To verify, you will see the characters “M8” printed to the right of the volser number on the barcode label where you would normally see “L7”.

Finally, like a good drug commercial, there are a few disclaimers to be aware of, but in this case the side effects don’t sound worse than the disease: exponential data growth coupled with shrinking budgets. So here we go:

  • LTO-7 Type M can’t be initialized in standalone LTO-8 drives; a library system is required. But once initialized by the library, the Type M tape can be used in a standalone LTO-8 drive (read/write)
  • Once initialized for 9.0 TB, the Type M cartridge will no longer be compatible with LTO-7 drives
  • Type M cartridges will not be read/write compatible with LTO-9 drives

It’s always nice to have the luxury of options, especially if that means being able to handle a lot more data at a super attractive price!

Read More

The Impact of GDPR on Your Data Management Strategy

Reading Time: 2 minutes

By Floyd Christofferson,
SVP of Products at Strongbox Data

It is no illusion that every time you turn around, there seems to be another report of a high-profile hack of sensitive personal data impacting hundreds of millions of people all over the world. The recent Equifax hack released the personal financial data of over 143 million consumers, but that was not an isolated incident. In 2016 and 2017 so far, there have been at least 26 major hacks around the world that have released the personal data of more than 700 million people. These include hacks of telecommunications companies, financial institutions, government agencies, universities, shopping sites, and more.

The hacks are not a new problem. But in a global economy with often conflicting political and economic priorities at stake, there has been no comprehensive approach to ensuring people have the right to protect, and delete if they want, all of their personal data.

The European Union’s new GDPR (General Data Protection Regulation) went into effect in May 2018. Although GDPR is designed to protect European citizens, its rules and penalties apply to any company from any country that does business in Europe. And the penalties are significant: companies risk being fined up to 4% of their global annual gross revenues or €20 million (whichever is greater) for failing to comply with strict right-to-be-forgotten and privacy protections for customer data.
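The “whichever is greater” clause means the effective cap scales with company size; a minimal sketch of that rule (hypothetical revenue figures for illustration, not legal advice):

```python
# Maximum GDPR fine: the greater of 4% of global annual gross revenue
# or a flat EUR 20 million. Revenue figures below are hypothetical.
def max_gdpr_fine(annual_revenue_eur: float) -> float:
    return max(0.04 * annual_revenue_eur, 20_000_000)

print(max_gdpr_fine(100_000_000))    # smaller firm: the flat EUR 20M floor applies
print(max_gdpr_fine(2_000_000_000))  # larger firm: the 4%-of-revenue cap applies
```

For any company with global revenues above €500 million, the 4% figure dominates, which is why the regulation got the attention of multinationals in particular.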

As a result, there is a growing panic among businesses as they try to figure out how to solve this problem in time, and how to do so with existing data management and storage resources that were not designed for this task. And the concern is not only in Europe. Companies in the US and around the world that have customers in Europe are also scrambling to ensure they are in full compliance by the deadline. But according to Gartner, by the end of 2018 over 50% of companies affected by the GDPR worldwide will not be in full compliance with its requirements.

In this paper we offer an overview of the key provisions of GDPR that impact storage and data management for both structured and unstructured data. In subsequent technical briefs, we will go into more detail about specific technical solutions to help ensure your data environment is in compliance, even with your existing storage and data infrastructure.

Read More

LET’S DISCUSS YOUR NEEDS

We can help you reduce cost, decrease vendor lock-in, and increase productivity of storage staff while ensuring accessibility and longevity of data.

Contact Us >