• addie@feddit.uk · 5 hours ago

    Assuming these have a fairly impressive 100 MB/s sustained write speed, it’s going to take about 93 hours to write the whole contents of the disk - basically four days. That’s a long time to replace a failed drive in a RAID array; you’d need to consider multiple disks of redundancy just in case another one fails while you’re resilvering the first.
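    A quick sanity check of that arithmetic (a sketch, not from the thread: the exact drive capacity isn’t stated in this excerpt, so the sizes below are placeholders; the 100 MB/s figure is the one assumed above):

    ```python
    # Rough rebuild-time estimate: time to sequentially write an entire drive
    # at a sustained rate. Drive capacities are hypothetical examples.

    def rebuild_hours(capacity_tb: float, write_mb_per_s: float = 100.0) -> float:
        capacity_mb = capacity_tb * 1_000_000   # 1 TB = 1,000,000 MB (decimal units)
        return capacity_mb / write_mb_per_s / 3600

    for tb in (24, 30, 36):
        print(f"{tb} TB at 100 MB/s: {rebuild_hours(tb):.0f} h (~{rebuild_hours(tb)/24:.1f} days)")
    ```

    At around 33-34 TB the same formula lands on roughly the 93 hours quoted above.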

    • C126@sh.itjust.works · 3 hours ago

      Dual parity is standard and should still be adequate. The likelihood of two more drives failing within the four-day rebuild on the same array is small.
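      A back-of-the-envelope check of that claim (my own sketch, assuming independent failures and a hypothetical 2% annualized failure rate on an 8-drive array; correlated failures from the same batch would make this optimistic):

      ```python
      import math

      # Probability that at least k of the surviving drives fail during the
      # rebuild window, assuming independent failures at a constant AFR.
      # The 2% AFR and 8-drive array are assumptions, not from the thread.

      def p_at_least_k(drives: int, k: int, afr: float, window_days: float) -> float:
          p = afr * window_days / 365.0   # per-drive failure probability in the window
          return sum(math.comb(drives, i) * p**i * (1 - p)**(drives - i)
                     for i in range(k, drives + 1))

      # 7 surviving drives, 4-day rebuild: chance that two more fail before it finishes
      print(f"{p_at_least_k(drives=7, k=2, afr=0.02, window_days=4):.1e}")  # ~1e-06
      ```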

    • AmbiguousProps@lemmy.today (OP) · 4 hours ago

      This is one of the reasons I use unRAID with two parity disks. If one fails, I’ll still have access to my data while I rebuild the data on the replacement drive.

      Although, parity checks with these would take forever, of course…

    • catloaf@lemm.ee · 4 hours ago

      That’s a pretty common failure scenario in SANs. If you buy a bunch of drives, they’re almost guaranteed to come from the same batch, meaning they’re likely to fail around the same time. The extra load of a rebuild can kill drives that are already close to failure.

      Which is why SANs have hot spares that can be allocated instantly on failure. And you should use a RAID level with enough redundancy to meet your reliability needs. And RAID is not backup; you should have backups too.