Is RAID Dead? - Part 3

Limitations of RAID & The Rise of Software Defined Storage

This Insight article is the final installment of our 3-part series, "Is RAID Dead?". If you missed Parts 1 and 2, you may refer to our earlier posts. We now discuss the limitations of RAID, and offer our view that RAID is no longer useful in today's storage systems.
 
 
RAID (Redundant Array of Independent Disks) was first described in 1987 and is still in use today. It was designed to build a large storage system out of smaller HDDs (Hard-Disk Drives), both to provide the capacity and to automatically recover and rebuild data around failed drives. It is a proven system; otherwise it would not have lasted so long. However, newer storage systems based on SDS (Software-Defined Storage) have eschewed RAID in favour of either Replica or Erasure Coding in building storage arrays.

Scale-Up vs Scale-Out

In a legacy RAID-based storage system, when more storage capacity is needed, we add more drives to a node. A node comprises a controller (made up of CPUs, memory, network interfaces, RAID controllers, etc.) connected to one or more chassis of drive bays, where the drives are held. The maximum capacity of the node is reached when all drive bays are occupied, or when the controller can no longer support additional drives because its physical performance limit is reached. When you add a new node for additional capacity, it is a new storage system, not an extension of the original one. You end up with silos of storage.

This is an example of a scale-up system, where we add resources to a single system to increase its performance and capacity. In a scale-up system, we eventually reach a physical limit where no more resources can be added.

In an SDS system, when more storage capacity is needed, we add more nodes to the cluster. Each node handles one or more drives. With each node added, capacity and performance increase linearly, and the new capacity appears as an extension of the existing system.

This is an example of a scale-out system. In a software-defined scale-out system, there is almost no limit on how much capacity can be handled. Each node added to the cluster creates more opportunities for parallelism, and hence increases performance.
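The contrast can be sketched with a toy model (all figures here are hypothetical, not from any particular product):

```python
# Toy model contrasting scale-up vs scale-out growth (hypothetical figures).

SCALE_UP_MAX_BAYS = 24          # a scale-up node stops at its last drive bay
DRIVE_TB = 18                   # per-drive capacity in TB

def scale_up_capacity(drives):
    """Capacity is capped once every bay in the single node is occupied."""
    return min(drives, SCALE_UP_MAX_BAYS) * DRIVE_TB

def scale_out_capacity(nodes, drives_per_node=12):
    """Each added node extends the same cluster; growth is linear."""
    return nodes * drives_per_node * DRIVE_TB

print(scale_up_capacity(24))   # 432 TB: the ceiling of the single node
print(scale_up_capacity(48))   # still 432 TB: extra drives cannot be attached
print(scale_out_capacity(4))   # 864 TB
print(scale_out_capacity(8))   # 1728 TB: doubles with node count
```

Past its bay limit, the scale-up node simply stops growing, while the scale-out cluster keeps extending linearly with every node added.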

When A Drive Fails

In part 1, we discussed RAID 5 and 6.  A RAID 5 set can tolerate at most one drive failure.  A second drive failure that happens before the first drive is replaced and data rebuilding completes means a catastrophic loss of data.
 
A RAID 6 set can tolerate at most two drive failures.  A third drive failure that happens before the failed drives are replaced and data rebuilding completes means a catastrophic loss of data.
 
There are other nested RAID levels, e.g. combining RAID 5 with 0 (RAID 50) and RAID 6 with 0 (RAID 60), which offer more redundancy options.  These will be discussed in a future Insight article.
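The capacity cost of these tolerance levels is easy to compute. A simplified sketch, assuming equal-sized drives and ignoring hot spares, metadata and formatting overhead:

```python
def raid_summary(level, drives, drive_tb):
    """Usable capacity and failure tolerance for parity RAID levels.

    Simplified: assumes equal-sized drives; ignores spares and overhead.
    """
    parity = {"raid5": 1, "raid6": 2}[level]      # drives consumed by parity
    return {
        "usable_tb": (drives - parity) * drive_tb,
        "tolerates_failures": parity,
    }

print(raid_summary("raid5", 8, 18))  # {'usable_tb': 126, 'tolerates_failures': 1}
print(raid_summary("raid6", 8, 18))  # {'usable_tb': 108, 'tolerates_failures': 2}
```

With eight 18TB drives, RAID 6 gives up one more drive's worth of capacity than RAID 5 in exchange for surviving a second failure.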
 
When a drive fails in a RAID system, an alert is raised so that a replacement can be scheduled, usually as scheduled downtime during a low-activity period.  Only when the drive has been replaced does data rebuilding commence.
 
When a drive fails in an SDS system, data rebuilding commences automatically, without delay and without user intervention.  This works for both Replica and Erasure Coding.  In a Replica setup, another copy of the data is read back from the surviving drives and written to available space on other drives in the pool.  In an Erasure Coding setup, the data is first recreated from the surviving shards, re-sharded to obtain the shard lost on the failed drive, and the recovered shard is then written to available space on other drives in the pool.
 
The big difference (and major advantage) between a RAID system and an SDS system is how a failed drive is handled when it is detected.  Recovery times are much faster in an SDS system, because no time is lost waiting for a replacement when a drive fails.
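The rebuild principle behind single-parity schemes can be shown with XOR. This is a minimal sketch of the idea, not how any production system actually shards data; real erasure-coded systems use Reed-Solomon codes to survive more than one loss:

```python
# Minimal XOR-parity demo: with one parity shard, any single lost shard
# can be recomputed from the survivors.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

data_shards = [b"AAAA", b"BBBB", b"CCCC"]     # data spread across three drives
parity = b"\x00" * 4
for shard in data_shards:
    parity = xor_bytes(parity, shard)         # parity = A ^ B ^ C

# The drive holding shard 1 fails; rebuild it from the survivors + parity.
survivors = [data_shards[0], data_shards[2], parity]
rebuilt = b"\x00" * 4
for shard in survivors:
    rebuilt = xor_bytes(rebuilt, shard)       # A ^ C ^ (A ^ B ^ C) = B

print(rebuilt == data_shards[1])              # True
```

Because XOR cancels repeated terms, combining the surviving shards with the parity yields exactly the lost shard; an SDS cluster performs this reconstruction the moment the failure is detected.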

Drives Are Getting Bigger, Accelerated by Advances in Technology

RAID was invented in 1987.  In 1991, Maxtor shipped a 40MB 3.5” HDD.  Today in 2022, the biggest-capacity enterprise-grade 3.5” HDD from Seagate is a whopping 18TB, or 450,000 times the size of that 40MB HDD from 1991!
 
With drives getting bigger as technology improves, RAID is finding it hard to keep up.  RAID rebuild times in the 1990s were very good.  Today, for example, the best-performing enterprise-grade SAS drive of 300GB capacity takes about 1 hour to rebuild, and 600GB takes about 1.8 hours.  With HDDs going into the multi-TB range, data rebuilding is expected to take days!  With such long rebuild times, the risk of another drive failing and causing total data loss is unacceptably high.  This risk is real, because the HDDs in an array are likely from the same manufacturing date and batch, installed at the same time.  When one HDD fails due to age, the chance of other HDDs failing around the same time is high.
 
For SDS, data rebuilding time also depends on the size of each HDD and the total amount of data stored.  But because it is a scale-out architecture spread across multiple independent nodes, parallelism achieves much shorter recovery times: hours, not days.
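Back-of-envelope arithmetic shows why. Using the roughly 300GB-per-hour rebuild rate implied by the figures above (a naive linear model assuming a constant rate and perfect parallelism; real rebuilds vary with workload):

```python
REBUILD_GBPH = 300.0   # assumed serial rebuild rate, ~300 GB/hour as above

def raid_rebuild_hours(capacity_gb):
    """One RAID controller rebuilds the replacement drive serially."""
    return capacity_gb / REBUILD_GBPH

def sds_rebuild_hours(capacity_gb, nodes):
    """The lost data is recreated in parallel across surviving nodes."""
    return capacity_gb / (REBUILD_GBPH * nodes)

print(raid_rebuild_hours(18_000))      # 60.0 hours, i.e. 2.5 days for 18 TB
print(sds_rebuild_hours(18_000, 20))   # 3.0 hours across a 20-node cluster
```

Even this idealized model puts an 18TB serial rebuild at two and a half days, which is the window during which a second failure threatens the whole array.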

Backup!

It is best practice to back up the data in your storage array before you attempt to replace a failed drive in a RAID 5 or 6 system, or in any RAID system. Any human error introduced during the drive replacement process can mean a total data wipeout, and so can another unexpected drive failure while rebuilding is in progress. Therefore, a backup is always recommended. Doing that backup, however, adds more time before data rebuilding can complete, and your risks mount. An SDS system commences data rebuilding automatically, so a pre-replacement backup is not necessary. No human involved means no human errors.

Cost and Performance

Enterprises demand storage systems of high reliability and performance.  To meet this requirement, legacy RAID-based storage systems use expensive enterprise-grade, dual-port SAS drives spinning at up to 15K RPM.  Dual ports allow the HDDs to be connected to redundant RAID controllers.  To increase performance, each drive's capacity is not maximized: more HDDs mean more IOPS (hence more performance) and lower data rebuild times.

Together with proprietary hardware systems and software, a RAID-based system is very expensive.

An SDS system is designed with failure in mind.  It leverages inexpensive, commodity server hardware and uses low-cost (but still enterprise-grade) SATA drives.  With self-healing and automatic data rebuilds around failures, expensive dual-port SAS HDDs become unnecessary.  As a scale-out architecture, with many small nodes controlling many HDDs, it is capable of delivering performance rivalling, and even exceeding, RAID-based systems.

Growth of Data

The amount of data generated and stored has grown by leaps and bounds.  To many governments and large enterprises, data is their lifeblood: better decisions can be made with more, and better-quality, data.

With SDS, storing all this data need not be expensive anymore.

By 2025, more than a quarter of the data created in the global Datasphere will be in real time.

Is RAID Dead?

We think the end is coming for RAID.  SDS has clear advantages over RAID-based systems.  CEPH is the world's most popular SDS software.  It is open source, free for anyone to use, and has been around since 2012.  Years of improvements and enhancements have made it a very mature and stable production system.
 
It is no wonder that many of the world's largest cloud providers use CEPH as the backend storage system to offer public cloud services at such affordable rates.
 
The cloud providers trust CEPH to store their data, so why not you?  When your existing storage system reaches retirement age, SDS is an option you should seriously evaluate.

RELATED ARTICLES

Is RAID Dead? Part 1
Is RAID Dead? Part 2
Is RAID Dead? Part 3


