I am archiving a vast number of media files that are rarely accessed. I’m writing large sequential files, at peaks of about 100 MB/s.

I want to maximise storage space primarily; I have 20x 18TB HDDs.

I’ve been told that large (e.g. 20 disk) vdevs are bad because resilvers will take a very long time, which creates higher risk of pool failure. How bad of an idea is this?

  • kwarner04@alien.top · 1 year ago

    Mergerfs + snapraid

If a drive fails, you only rebuild that one drive from parity (plus the surviving data disks); you don’t resilver the whole array. And if for some reason you can’t restore, you only lose the data on the failed drive.

ZFS is great, and for real NAS data I’m a fan. But for large media files and such that are write once, read many, I think this is a much better option.

Mergerfs is just there to present all 20 drives as a single mount point, so you aren’t searching through 20 drives when you want to view something.
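
    A minimal sketch of that layout (mount points, disk names, and the two-parity split are illustrative assumptions, not a prescription):

    ```sh
    # /etc/fstab — pool 18 data disks into one mergerfs mount point
    # (category.create=mfs puts new files on the disk with the most free space)
    /mnt/disk* /mnt/media fuse.mergerfs cache.files=off,category.create=mfs,moveonenospc=true 0 0

    # /etc/snapraid.conf — reserve 2 of the 20 drives for parity
    parity    /mnt/parity1/snapraid.parity
    2-parity  /mnt/parity2/snapraid.parity
    content   /mnt/disk1/snapraid.content
    content   /mnt/disk2/snapraid.content
    data d1   /mnt/disk1/
    data d2   /mnt/disk2/
    # ... data d3 through d18 ...
    ```

    After that, a scheduled snapraid sync computes parity, and snapraid fix -d d1 rebuilds a single failed data disk from parity plus the surviving disks.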

  • EchoGecko795@alien.top · 1 year ago

I have a few 24-drive RAIDz3 pools, and as long as you can live with the longer scrub and resilver times they make a good archive or backup pool, but I would not really want one as an always-on active pool. If you want to know the estimated failure rate, here is a calculator.

Not sure if it’s broken or if my mobile Firefox browser just doesn’t like it, but I seem to be getting 0% failure rates. There are other calculators if you google for them, though.

    https://www.servethehome.com/raid-calculator/raid-reliability-calculator-simple-mttdl-model/

    > I’ve been told that large (e.g. 20 disk) vdevs are bad because resilvers will take a very long time, which creates higher risk of pool failure. How bad of an idea is this?

I normally only have to replace 1 drive at a time, and with RAIDz3 you have to lose 4 drives at the same time for data loss to happen. If you are using mixed batches of drives (not all from the same run), the chance of that is very low, and it usually happens due to some other event (overheating, fire, a cow attacking the disk shelf). In the 5 years I have had these pools, the worst case was losing 1 drive and having errors pop up on another, which were still corrected because RAIDz3 has 3 drives of protection.
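
    For reference, a 20-wide RAIDz3 pool like the OP is weighing could be created along these lines (pool name, device names, and properties are placeholders for illustration):

    ```sh
    # single 20-disk raidz3 vdev: 17 data + 3 parity
    zpool create -o ashift=12 \
        -O recordsize=1M -O compression=lz4 \
        archive raidz3 sd{a..t}

    # replacing a failed member; the pool stays online while it resilvers
    zpool replace archive sdc sdu
    zpool status archive    # watch resilver progress
    ```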

  • Dagger0@alien.top · 1 year ago

    > I’ve been told that large (e.g. 20 disk) vdevs are bad because resilvers will take a very long time

    They won’t really take any longer than on narrower vdevs unless you’re hitting CPU or controller throughput limits.

  • Big_Expression7231@alien.top · 1 year ago

I always understood it as a balancing act in your vdev sizing. Too big = long rebuild times, with that many disks spinning up every time. Too small = wasted money, with TB lost to redundancy (RAIDz2 with 3× 20TB disks only gives you 33% of the TB you purchased).

I’ve always felt you should calculate how many disks you need to saturate your connection and go from there. If you have a 10gig trunk on your network, then you’ll want one vdev to be able to saturate a 10gig line; any larger than that and you don’t get any benefit.
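
    Roughly, and assuming each modern large HDD sustains ~200 MB/s on sequential reads (an assumption; real throughput drops on inner tracks):

    ```latex
    \frac{10\ \text{Gbit/s}}{8\ \text{bit/byte}} = 1.25\ \text{GB/s},
    \qquad
    \frac{1.25\ \text{GB/s}}{0.2\ \text{GB/s per disk}} \approx 7\ \text{data disks}
    ```

    So on that estimate, a vdev with roughly 7 data disks plus parity already fills a 10GbE link for sequential work.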

    • dragonmc@alien.top · 1 year ago

Hold on a sec… don’t more vdevs translate to more IOPS and speed?

A few years back I was on this sub asking about the best way to carve up my 16x2TB ZFS pool, and was advised that multiple vdevs would be better for performance.

      I wound up going with 2 vdevs of 8x2TB RAIDz2, as it seemed like the right redundancy-to-performance ratio for me.

I do have to say, though, that while the pool itself has been rock solid, the performance is pretty bad. I had to turn off sync completely because it was unusable even just browsing files on the share, and even with sync turned off the performance is still poor, on a system with 16 cores and 64GB of RAM. I’d love to get some ideas on what a better-performing config might be… I’m even open to ditching ZFS altogether and reverting to mdadm if ZFS is not there yet in terms of performance.
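
      For context, “turning off sync” here presumably means the dataset-level ZFS property; a minimal sketch, with tank as a hypothetical pool name:

      ```sh
      zfs get sync tank            # default is sync=standard
      zfs set sync=disabled tank   # ack sync writes from RAM; data-loss risk on power cut
      ```

      If sync writes really are the bottleneck, a dedicated SLOG device is the safer fix than disabling sync outright.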

  • AcidAngel_@alien.top · 1 year ago

You’ll be fine. You have three-disk redundancy. If one of the disks fails, you’ll still have two-disk redundancy during the rebuild. You could lose 2 more drives during the rebuild and still have all your data. What is the likelihood of three disks failing during a rebuild?

The likelihoods are multiplied together (assuming failures are independent). If the likelihood of one disk failing during a rebuild is 0.01, then the likelihood of two particular disks failing is 0.01 × 0.01 = 0.0001, and of three, 0.01 × 0.01 × 0.01 = 0.000001. One in a million.
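
    To be precise, that 0.000001 is the chance for three specific disks. With 19 survivors there are many possible triples, so the overall figure is closer to (keeping the illustrative 0.01 per-disk number, and assuming independence, which drives from one batch violate):

    ```latex
    P(\geq 3\ \text{of}\ 19\ \text{fail}) \approx \binom{19}{3}(0.01)^3 = 969 \times 10^{-6} \approx 10^{-3}
    ```

    Still only about one in a thousand per rebuild, which is small enough that the risks below dominate.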

It’s far more likely that you’ll lose files to accidental deletion, destructive commands, software errors, or massive hardware failure, like your power supply failing and destroying all your drives at once, or your house burning down.

    • old_knurd@alien.top · 1 year ago

      > It’s far more likely that you’ll lose files to accidental deletion, destructive commands, software errors, or massive hardware failure, like your power supply failing and destroying all your drives at once, or your house burning down.

      Yes. This is important and shouldn’t be overlooked.

    • Sertisy@alien.top · 1 year ago

dRAID was released just a month after I built my last RAID set with vdevs. Really hoping there’s an in-place migration path someday, assuming nobody finds any bugs in the next couple of years.

    • xxbiohazrdxx@alien.top · 1 year ago

dRAID gives up variable stripe widths, however. I strongly recommend a special metadata device when doing dRAID, with the same level of redundancy as the rest of the pool.
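
      A sketch of what that can look like (geometry and device names are made up for illustration; draid2:4d:20c:2s means double parity, 4 data disks per redundancy group, 20 children, and 2 distributed spares, and the 3-way NVMe mirror matches the 2-disk fault tolerance):

      ```sh
      # 20-disk dRAID vdev plus a matching-redundancy special vdev
      zpool create tank \
          draid2:4d:20c:2s sd{a..t} \
          special mirror nvme0n1 nvme1n1 nvme2n1

      # route small blocks (and all metadata) to the special mirror
      zfs set special_small_blocks=64K tank
      ```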

      • zrgardne@alien.top · 1 year ago

        If you have lots of small files, yes this is bad.

For videos the space lost will just be a rounding error.

It would be interesting to test for music: say 100k files of 20 MB each. You could lose a lot of space if you’re using the 1 MB+ record sizes that have been recommended for a while now.
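
        Back-of-the-envelope, assuming a draid2:4d layout with 4 KiB sectors and recordsize=1M (all assumptions), and ignoring compression, which normally claws much of this back:

        ```latex
        % tiny file: padded out to one full 4-disk data stripe
        4\ \text{KiB file} \rightarrow 4 \times 4\ \text{KiB} = 16\ \text{KiB allocated} \quad (4\times\ \text{overhead})
        % 20 MB (19.07 MiB) track: the tail record is padded to a full 1 MiB
        100{,}000 \times (20\ \text{MiB} - 19.07\ \text{MiB}) \approx 91\ \text{GiB lost} \quad (\approx 4.6\%)
        ```

        So videos barely notice, tiny files get hammered, and a big music library sits somewhere in between.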