Blog

Is Online Object Storage Really Immune to Ransomware? Achieving True Object Storage Immutability with Tape

Reading Time: 3 minutes

April 13, 2021

By Chris Kehoe, Head of Infrastructure Engineering, FUJIFILM Recording Media U.S.A., Inc.

Object storage has many benefits. Near infinite capacity combined with good metadata capabilities and low cost have propelled it beyond its initial use cases of archiving and backup. More recently, it is being deployed as an aid to compute processing at the edge, in analytics, machine learning, disaster recovery, and regulatory compliance. However, one recent paper perhaps got a little over-enthusiastic in claiming that disk-based object storage provided an adequate safeguard against the threat of ransomware.

The basic idea proposed is that keeping multiple copies of object data protects against that kind of intrusion: if the object store suffers a ransomware incursion, the backup is there for recovery purposes. The flaw in this logic, however, is that no technology that is online can be considered immune to ransomware. Unless it is the work of an insider, any hacking attempt must enter via online resources. Any digital file or asset that is online – whether it is stored in a NAS filer, a SAN array, or on object storage – is open to attack.

Keeping multiple copies of object storage is certainly a wise strategy and does offer a certain level of protection. But if those objects are online on disk, a persistent connection exists that can be compromised. Even in cases where spin-down disk is deployed, there still remains an automated electronic connection. As soon as a data request is made, therefore, the data is online and potentially exposed to the nefarious actions of cybercriminals.
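The multi-copy strategy described above still needs a way to detect that a replica has been silently encrypted or tampered with. A minimal sketch of that idea (this is an illustrative example, not part of any vendor's product) is to compare content checksums across copies – a replica whose digest no longer matches the primary has been altered:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def replicas_match(primary: Path, replicas: list[Path]) -> bool:
    """True only if every replica's digest matches the primary copy."""
    expected = sha256_of(primary)
    return all(sha256_of(r) == expected for r in replicas)
```

Note that this check only detects damage after the fact; because both the primary and the replicas are online, a sufficiently privileged attacker can still encrypt all of them, which is the article's point about the tape air gap.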


Read More

THE ASCENT TO HYPERSCALE – Part 2

Reading Time: 2 minutes

July 1, 2020

By Rich Gadomski, Tape Evangelist at Fujifilm Recording Media, U.S.A., Inc.

Part 2: CHARACTERISTICS OF THE HYPERSCALE DATA CENTER

In Part 1 of this series, we explored the definition of hyperscale data centers. Now, we’ll take a look at some of their key characteristics.

HSDCs don’t publicly share an abundance of information about their infrastructure. For companies that operate HSDCs, cost may be the major barrier to entry, but ultimately it isn’t the biggest issue – automation is. HSDCs must focus heavily on automated, self-healing environments, using AI and ML whenever possible to overcome inevitable and unexpected failures and delays. Unlike many enterprise data centers, which rely on a large full-time staff across a range of disciplines, HSDCs employ fewer tech experts because they have used technology to automate so much of the overall management process. HSDC characteristics include:

  • Small footprint, dense racks–HSDCs squeeze servers, SSDs (Solid State Disks) and HDDs (Hard Disk Drives) directly into the rack itself, as opposed to using separate SANs or DAS, to achieve the smallest possible footprint. HSDC racks are typically larger than standard 19” racks.
  • Automation–Hyperscale storage tends to be software-defined and is benefiting from AI, delivering a higher degree of automation and self-healing that minimizes direct human involvement. AI will support automated data migration between tiers to further optimize storage assets.
  • Users–The HSDC typically serves millions of users with only a few applications, whereas in a conventional enterprise there are fewer users but many more applications.
  • Virtualization–The facilities also implement very high degrees of virtualization, with as many operating system images running on each physical server as possible.
  • Tape storage adoption–Automated tape libraries are on the rise to complement SSDs and HDDs: they easily scale capacity, manage and contain out-of-control data growth, store archival and unstructured data, significantly lower infrastructure and energy costs, and provide hacker-proof cybercrime security via the tape air gap.
  • Fast scaling bulk storage–HSDCs require fast, easily scaling storage capacity. One petabyte using 15 TB disk drives requires 67 drives, and one exabyte requires 66,667 of them. Tape scales capacity simply by adding media; disk scales by adding drives.
  • Minimal feature set–Hyperscale storage has a minimal, stripped-down feature set and may even lack redundancy as the goal is to maximize storage space and minimize cost.
  • Energy challenges–High power consumption and increasing carbon emissions have forced HSDCs to develop new energy sources to reduce and more effectively manage energy expenses.
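The drive-count arithmetic in the bulk-storage bullet above is easy to verify. A quick sketch (using decimal units, 1 PB = 1,000 TB and 1 EB = 1,000,000 TB):

```python
import math

TB_PER_DRIVE = 15  # 15 TB HDDs, as in the bullet above

def drives_needed(capacity_tb: float) -> int:
    """Whole drives required to hold the given capacity in TB."""
    return math.ceil(capacity_tb / TB_PER_DRIVE)

print(drives_needed(1_000))      # 1 PB -> 67 drives
print(drives_needed(1_000_000))  # 1 EB -> 66,667 drives
```

At exabyte scale, every drive is also a device that can fail and must be powered, which is why the post contrasts this with tape, where capacity grows by adding passive media rather than powered drives.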

In Part 3 of this series, we’ll take a look at how the value of tape is rapidly rising as hyperscale data centers grow. For more information on this topic, download our white paper: The Ascent to Hyperscale.

Read More

The Ascent to Hyperscale

Reading Time: 2 minutes

June 12, 2020

By Rich Gadomski, Tape Evangelist at Fujifilm Recording Media, U.S.A., Inc.

Part 1: What Are Hyperscale Data Centers?

Hyperscale data centers have spread across the globe to meet unprecedented data storage requirements. In this three-part blog series, we take a look at how the industry is preparing for the next wave of hyperscale storage challenges.

The term “hyper” means extreme or excess. While there isn’t a single, comprehensive definition for HSDCs, they are significantly larger facilities than a typical enterprise data center. The Synergy Research Group Report indicated there were 390 hyperscale data centers worldwide at the end of 2017. The largest share of those facilities, 44%, are in the US, with China a distant second at 8%. Currently the world’s largest data center facility has 1.1 million square feet. To put this into perspective, the standard size for a professional soccer field is 60,000 square feet, making the facility the equivalent of about 18.3 soccer fields. Imagine needing binoculars to look out over an endless array of computer equipment in a single facility. Imagine paying the energy bill!

Hyperscale refers to a computer architecture that massively scales compute power, memory, high-speed networking infrastructure, and storage resources, typically serving millions of users with relatively few applications. While most enterprises can rely on out-of-the-box infrastructures from vendors, hyperscale companies must personalize nearly every aspect of their environment. An HSDC architecture is typically made up of tens of thousands of small, inexpensive, commodity component servers or nodes, providing massive compute, storage and networking capabilities. HSDCs are implementing Artificial Intelligence (AI) and Machine Learning (ML) to help manage the load and are exploiting the storage hierarchy, including heavy tape usage for backup, archive, active archive and disaster recovery applications.

In Part 2 of this series, we’ll take a look at the characteristics of the hyperscale data center. For more information on this topic, download our white paper: The Ascent to Hyperscale.

Read More

Whiteboard Video: Using Artificial Intelligence in Cybersecurity

Reading Time: < 1 minute

April 29, 2020

Ransomware continues to threaten the security of enterprise IT infrastructures. In this Fujifilm Summit video, storage analyst George Crump talks to IBM’s Chris Bontempo about how artificial intelligence and machine learning are helping improve cybersecurity by identifying and stopping potential threats.

Watch the video here:

Read More

LET’S DISCUSS YOUR NEEDS

We can help you reduce cost, decrease vendor lock-in, and increase productivity of storage staff while ensuring accessibility and longevity of data.

Contact Us >