Btrfs is a filesystem (like FAT, NTFS, and ext4), but it has some distinct advantages:

  • Increased storage - thanks to compression and file deduplication, Btrfs can save you a considerable amount of storage. I have 517G of files on my Deck’s SSD, but they only take up 410G of space. Compressing your filesystem can also shorten load times, especially on slower storage devices like the SD card (there’s a rough way to measure the savings yourself in the sketch right after this list).

  • Snapshotting - save snapshots of the filesystem and easily roll back if there’s a problem (a snapshot sketch follows the conversion notes below).

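If you’re curious what compression and deduplication are actually saving on your drive, here’s a rough way to check - a minimal sketch in plain Python that compares the apparent size of your files with what the filesystem reports as used, the same kind of comparison as the 517G-in-410G figure above. On Btrfs the compsize tool gives exact numbers; this is just a quick cross-check, and the default mount point is an assumption to adjust:

```python
#!/usr/bin/env python3
"""Compare apparent file sizes with space actually used on a mount.
Rough estimate only: hardlinks are counted once per link and sparse
files inflate the apparent size. compsize is the accurate tool on Btrfs."""
import os
import shutil
import sys

mount = sys.argv[1] if len(sys.argv) > 1 else "/home"  # assumption: adjust

apparent = 0
for root, dirs, files in os.walk(mount, onerror=lambda err: None):
    for name in files:
        try:
            # lstat: don't follow symlinks off the filesystem
            apparent += os.lstat(os.path.join(root, name)).st_size
        except OSError:
            pass  # vanished or unreadable file; skip it

used = shutil.disk_usage(mount).used
gib = 1024 ** 3
print(f"apparent size of files: {apparent / gib:.1f} GiB")
print(f"space actually used:    {used / gib:.1f} GiB")
print(f"saved (roughly):        {(apparent - used) / gib:.1f} GiB")
```
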
Converting to Btrfs is easy and doesn’t require you to re-set up or reconfigure your Deck. The linked GitLab project does the conversion, keeps all your existing files and settings, and applies all the Btrfs configuration for you. The conversion persists through SteamOS updates, and the project also sets up automatic deduplication of files on the drive. It additionally lets the Deck automatically mount Btrfs-converted SD cards, and format new cards as Btrfs.

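Since the conversion is a dramatic filesystem change, it pairs well with the snapshot feature mentioned above. Here’s a minimal sketch of taking a read-only snapshot, assuming /home is already a Btrfs subvolume on a converted Deck; the /home/.snapshots location is just an example, and it needs root:

```python
#!/usr/bin/env python3
"""Take a read-only snapshot of /home before/after big changes.
Assumes /home is a Btrfs subvolume; the snapshot directory is an
example location. Needs root."""
import datetime
import os
import subprocess

source = "/home"                 # subvolume to snapshot (assumption)
snapdir = "/home/.snapshots"     # must be on the same Btrfs filesystem
stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")

os.makedirs(snapdir, exist_ok=True)
# -r = read-only snapshot, so nothing can modify it accidentally
subprocess.run(
    ["btrfs", "subvolume", "snapshot", "-r", source, f"{snapdir}/home-{stamp}"],
    check=True,
)
print(f"created {snapdir}/home-{stamp}")
```

Snapshots are browsable like any directory, so you can copy individual files back out, or snapshot a saved copy back into place to roll everything back.
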
The only potential downside I know of is that Btrfs is case-sensitive, whereas the Deck’s default ext4 uses casefolding. Basically, this means Btrfs treats File.txt and file.txt as two different files. I’ve never run into any issues with this, but I’ve heard it can cause problems with some mods that capitalize their files inconsistently. There’s also always some risk whenever you make dramatic changes to your filesystem, though I haven’t heard of anyone actually having problems with the conversion. You do need to keep at least 10-20% of your storage free (with a minimum of 10-20GB free on smaller drives) so the conversion has room to work.
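
If you want to check a game or mod folder for that problem ahead of time, here’s a minimal sketch that flags names differing only by case - exactly the kind of thing casefolding ext4 papers over and case-sensitive Btrfs won’t:

```python
#!/usr/bin/env python3
"""Flag file/directory names that collide when compared case-insensitively.
Point it at a game or mod directory to spot File.txt vs file.txt issues."""
import os
import sys
from collections import defaultdict

root = sys.argv[1] if len(sys.argv) > 1 else "."

seen = defaultdict(list)
for dirpath, dirnames, filenames in os.walk(root):
    for name in dirnames + filenames:
        full = os.path.join(dirpath, name)
        # lowercase the whole path so collisions in any component count
        seen[full.lower()].append(full)

collisions = [paths for paths in seen.values() if len(paths) > 1]
for paths in collisions:
    print("case collision:", " <-> ".join(paths))
print(f"{len(collisions)} collision(s) found under {root}")
```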

Overall I’ve been using Btrfs on my Deck for over 6 months, and it’s been great. I highly recommend it. I’m not an expert on it, but I’ll do my best to answer any questions.

    • anlumo@feddit.de · 1 year ago

      Yeah, Linus Torvalds has been pushing for ECC RAM everywhere for just this reason.

    • Fubarberry@sopuli.xyz (OP) · 1 year ago

      I know this GitLab project sets some download/temp folders to have COW disabled, possibly for this very reason.
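
      The standard mechanism for this is the NOCOW file attribute, set with chattr +C. It only applies cleanly to empty or newly created files, which is why it goes on the directory so new files inherit it. A minimal sketch of doing the same by hand - the path is an example, not what the project actually uses:

      ```python
      #!/usr/bin/env python3
      """Mark a directory NOCOW (chattr +C) so files created in it afterwards
      skip copy-on-write. The attribute only applies cleanly to empty/new
      files, hence setting it on the directory. Path is an example only."""
      import os
      import subprocess
      import sys

      target = sys.argv[1] if len(sys.argv) > 1 else "/home/deck/tmp-downloads"

      os.makedirs(target, exist_ok=True)
      subprocess.run(["chattr", "+C", target], check=True)
      # -d lists the directory itself; a 'C' should appear in the attributes
      subprocess.run(["lsattr", "-d", target], check=True)
      ```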

    • Yote.zip@pawb.social · 1 year ago

      The filesystem metadata comes with 2 copies that can heal each other, and Copy-on-Write protects against power loss. The filesystem itself should be bulletproof.
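
      If you want to verify the two-copies part on your own filesystem, `btrfs filesystem df` prints the profile for each allocation type, and DUP metadata means two copies on the one device. A minimal sketch - the mount point is an assumption:

      ```python
      #!/usr/bin/env python3
      """Check the metadata profile. `btrfs filesystem df` prints lines like
      'Metadata, DUP: total=2.00GiB, used=512.00MiB' - DUP means two copies."""
      import subprocess
      import sys

      mount = sys.argv[1] if len(sys.argv) > 1 else "/"  # assumption: adjust

      out = subprocess.run(
          ["btrfs", "filesystem", "df", mount],
          check=True, capture_output=True, text=True,
      ).stdout

      for line in out.splitlines():
          if line.lstrip().startswith("Metadata"):
              print(line.strip())
              if "DUP" in line:
                  print("-> duplicated metadata: one bad copy can heal from the other")
      ```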

      I feel like people reporting data loss on BTRFS are unaware that BTRFS is, at the very least, actually detecting the data loss. Bitrot is not rare, especially with how big our drives are getting. If you care about your data, it should be backed up and/or RAIDed. Ext4 has no idea whether your data is still intact - not knowing about corruption is not the same as having no data loss.
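
      To make the detection concrete: checksums are verified on every read, and a scrub re-reads the whole filesystem to verify everything in one go. A minimal sketch (needs root; the mount point is an assumption):

      ```python
      #!/usr/bin/env python3
      """Scrub: re-read all data/metadata and verify against checksums.
      -B runs in the foreground so the summary prints on completion.
      Needs root; the mount point is an assumption."""
      import subprocess
      import sys

      mount = sys.argv[1] if len(sys.argv) > 1 else "/"

      subprocess.run(["btrfs", "scrub", "start", "-B", mount], check=True)
      # shows totals, including any checksum errors that were found/repaired
      subprocess.run(["btrfs", "scrub", "status", mount], check=True)
      ```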

        • Yote.zip@pawb.social · 1 year ago

          What deduplication program did you use? Deduplication is not technically an end-to-end supported feature, and depending on how the third-party program implemented it, there could be issues earlier in the pipeline. I’m also not sure how a RAM bit flip would interact in this scenario - I know ZFS checks the file checksum several times during a transaction, but I don’t know how often BTRFS does.

          The problem is that there are a lot of people online reporting vague problems with BTRFS, but the reports have little info on what actually caused them and can’t be reproduced. There’s no solution if we’re operating under those rules, other than to completely stop using BTRFS out of pure superstition. If there are bugs, we need to be able to point at them in order to fix them. As I said before, the problem you had would not even have been detected by Ext4, so I think error reporting is biased against a FS that actually checks its work. W/r/t checking work, I think ZFS gets away with a lot more because it’s normally run in RAID setups, where healing happens automatically. BTRFS, lacking stable RAID5/6 support, is usually run on a single drive, and any data-integrity error becomes a target of frustration as soon as it happens.

            • Yote.zip@pawb.social · 1 year ago

              I’m interested to see that reported somewhere - the duperemove repo might be a good starting point as that’s generally the standard BTRFS dedupe solution. I don’t currently see any issues on the GitHub repo about corruption (or at least the last one was 7 years ago). Again, I’m not sure if a RAM bit flip could cause this during a dedupe. Just because you scrubbed, deduped, and scrubbed again doesn’t mean there wasn’t a bit flip during the dedupe.
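
              For reference, a typical duperemove run looks like the sketch below (flags per its man page; the target path is an example). One reassuring detail: the dedupe request goes through the kernel’s dedupe-range ioctl, which re-compares both ranges and only shares the extents if they are byte-identical, so the dedupe step itself shouldn’t be able to merge mismatched data.

              ```python
              #!/usr/bin/env python3
              """Typical duperemove invocation (flags per its man page):
              -d submit the actual dedupe requests, -r recurse, --hashfile
              store block hashes on disk so re-runs only rescan changed
              files. The target path is an example."""
              import subprocess
              import sys

              target = sys.argv[1] if len(sys.argv) > 1 else "/home/deck"

              subprocess.run(
                  ["duperemove", "-dr", "--hashfile=/var/tmp/duperemove.hash", target],
                  check=True,
              )
              ```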

              As for btrfs check vs fsck, there are just way fewer things that need repairing in BTRFS and ZFS because they are copy-on-write (ZFS doesn’t even have an fsck at all!). Because Ext4 is not copy-on-write, it’s highly vulnerable to power-loss events, and an fsck is required to replay the journal when that happens. BTRFS and ZFS make atomic COW transactions and will never be in a corrupt state after power loss. The other part of fsck is repairing the filesystem, which BTRFS and ZFS handle through scrub and/or auto-heal on read instead. ZFS and BTRFS keep multiple copies of the filesystem metadata so they can auto-repair themselves while online. btrfs check is not something to use lightly, and I’ve seen a lot of people run btrfs check --repair expecting the same behavior as fsck, then wonder why they ended up with a broken filesystem.
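
              To make that concrete, here’s a sketch of the read-only diagnostics worth running long before reaching for --repair; the device path in the comment is an example only:

              ```python
              #!/usr/bin/env python3
              """Read-only health checks to run long before considering
              --repair. `btrfs device stats` works on a mounted filesystem."""
              import subprocess
              import sys

              mount = sys.argv[1] if len(sys.argv) > 1 else "/"

              # cumulative per-device counters: read/write/flush errors, corruption
              subprocess.run(["btrfs", "device", "stats", mount], check=True)

              # `btrfs check` WITHOUT --repair is read-only, but the filesystem
              # must be unmounted (e.g. run from a live USB). Device path is an
              # example:
              # subprocess.run(["btrfs", "check", "/dev/nvme0n1p8"], check=True)
              ```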