My current setup is an Odroid H3+ with 2x 8TB hard drives in RAID 1 via mdadm. It runs Ubuntu Server with 16GB of RAM, plus a 1TB SSD for system storage.

I’m looking to expand my storage with an external drive enclosure connected over USB 3. My ideal enclosure has 4 bays with hardware RAID support, which I plan to configure as RAID 10.

My main question: how do I pool these together so I don’t have to balance data between the two arrays? Ideally they’d be bound together to appear as a single mount point. I looked at mergerfs, but I’m not sure it would work.
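
For reference, what I was picturing with mergerfs is an fstab entry along these lines (the mount points are placeholders and I haven't tested any of this):

    # hypothetical: /mnt/raid1 is the existing mdadm array, /mnt/raid10 the USB enclosure's RAID 10
    # category.create=mfs ("most free space") makes new files land on whichever array has room
    /mnt/raid1:/mnt/raid10  /mnt/pool  fuse.mergerfs  defaults,allow_other,minfreespace=50G,category.create=mfs,moveonenospc=true  0 0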

I ask primarily because I write scripts that move data to the current RAID 1 setup. As I expand the storage, I want them to keep working without having to check for free space across the two sets.
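
The idea being that the scripts only ever see the pool path and never care which array the data actually lands on, e.g. something like (paths made up):

    # hypothetical move script - mergerfs's create policy picks the branch with free space,
    # so the script itself never checks space
    rsync -a --remove-source-files /srv/incoming/ /mnt/pool/media/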

Be real with me - is this dumb? What would you suggest given my current stack?

  • NaibofTabr
    10 months ago

    I see this:

    Supports addition and removal of devices with no rebuild times

    But also this:

    MergerFS has zero fault tolerance - if the drive that data is stored on fails, that data is gone.

    ref

    So… what happens when the USB cable gets bumped mid-write, and the drives in the external enclosure suddenly go offline? Because the cable will get bumped at some point, probably at the worst possible time.

    Genuine question, I don’t have experience with MergerFS. OP’s planned setup seems fault-prone to me, rather than fault-resistant.

    • @constantokra@lemmy.one
      10 months ago

      I’ve not had it happen, but I imagine it’d be the same as if a SATA drive failed. There’s no fault tolerance, as you pointed out. My understanding is that each drive carries the same directory tree, and the pool shows all the files from all the drives. If a drive goes offline, its files should just disappear from the pool - roughly like the sketch below.
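
      Roughly like this (made-up paths, just to illustrate the union view):

        # each branch carries the same directory tree
        /mnt/disk1/media/movies/a.mkv
        /mnt/disk2/media/movies/b.mkv

        # the pool shows the union of both branches
        /mnt/pool/media/movies/a.mkv
        /mnt/pool/media/movies/b.mkv

        # if disk2 drops off the USB bus, b.mkv just vanishes from the pool
        # until that branch comes back - nothing gets rebuilt or recovered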

      I use snapraid to add fault tolerance. Very roughly, it takes a parity snapshot of your files, and you can recover back to that snapshot if one drive fails. You might think a failed drive would cause problems, because snapraid could mistake the missing files for a bunch of deletions, but I believe the default behavior is to throw an error and notify you if more than a certain threshold of files has been deleted since the last sync. That threshold might not be built into snapraid itself - it might be part of snapraid-runner, which I’d recommend using anyway because it makes the whole thing easier to deal with.
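
      For what it’s worth, the snapraid config itself is pretty small. A minimal sketch (all paths and disk names made up) looks something like this:

        # /etc/snapraid.conf - hypothetical layout
        # parity lives on its own disk, at least as big as the largest data disk
        parity /mnt/parity1/snapraid.parity

        # content files track the array state; keep copies in a few places
        content /var/snapraid/snapraid.content
        content /mnt/disk1/snapraid.content
        content /mnt/disk2/snapraid.content

        # data disks - the same directories mergerfs is pooling
        data d1 /mnt/disk1/
        data d2 /mnt/disk2/

        # skip junk that changes constantly
        exclude *.tmp
        exclude /lost+found/

      Then you run snapraid sync on a schedule, which as far as I know is basically what snapraid-runner wraps for you (plus logging, email, and that delete threshold).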

      So basically, you’d notice your files disappeared, or a cron job would notice, or snapraid would notice, then you’d go plug the drives back in.
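
      If a drive actually died instead of just going missing, recovery is a handful of commands, roughly like this (I haven’t had to do it for real, and the disk name d1 is hypothetical):

        snapraid diff          # see what changed or went missing since the last sync
        snapraid -d d1 fix     # rebuild the contents of data disk d1 onto its replacement
        snapraid check         # optionally verify the restored files
        snapraid sync          # then take a fresh snapshot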

      I get the concern, but if reliability matters that much to you, you should probably use a commercial product that doesn’t require much know-how or intervention.

      I’m loving the flexibility of mergerfs, snapraid, and a DIY NAS. When I run out of physical space I’ll likely just add a few drives in a USB enclosure myself, so I definitely wouldn’t try to talk you out of it.