A song of praise for mergerFS and SnapRAID
from IratePirate@feddit.org to selfhosted@lemmy.world on 01 May 18:00
https://feddit.org/post/29266301

Like many self-hosters, I’ve looked upon the recent price hikes for storage in utter disbelief. Faced with paying double what I paid only last year for new hard drives, I dug around my hardware stash and came across about a dozen old 2.5" 320-500 GB drives which I had once saved from the dumpster but never deployed. After all, they were too slow to be used as PC system drives and too small for any meaningful use in a server. Now seemed like the perfect time to put them to good use after all. And I found the way to do it in mergerFS.

For anyone not familiar with it: in spite of its name, mergerFS is not a filesystem in the sense that deploying it would require reformatting your drives (although that wouldn’t have been a problem for my use case). Instead, you can take a bunch of drives (JBOD) and string them together with no modification to their filesystems, keeping existing data intact. It is agnostic of the filesystems present on the drives, meaning you can even combine volumes formatted with, say, ext4, btrfs, and xfs. All drives will show up in your filesystem as a single volume, and - depending on the policies you configure - mergerFS will store some data on this drive and some on that one. Since data isn’t striped, each drive remains individually readable, i.e. there’s no need to rebuild the whole array after a drive fails.
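To illustrate, a pool can be defined with a single fstab line. (The mount points, the `minfreespace` value, and the create policy below are example choices, not from the post.)

```shell
# /etc/fstab - pool all /mnt/disk* branches into a single /mnt/pool mount
# category.create=mfs places new files on the branch with the most free space
/mnt/disk* /mnt/pool fuse.mergerfs defaults,minfreespace=10G,category.create=mfs,fsname=pool 0 0
```

The same pool can be created ad hoc with `mergerfs -o category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/pool` to try it out before committing it to fstab.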

Speaking of drive failure: while mergerFS itself does not provide RAID, you can add SnapRAID to the mix for parity-based redundancy (although it’s not real-time RAID: parity is computed on a schedule, so it’s not suited for mission-critical data that is frequently updated and rewritten).
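As a sketch of how that looks in practice (drive names and paths here are assumptions): SnapRAID is driven by a small config file plus a scheduled sync run.

```shell
# /etc/snapraid.conf - one parity drive protects the data drives
parity  /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/.snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
exclude *.tmp

# then compute parity on a schedule, e.g. nightly via cron:
# 0 3 * * * /usr/bin/snapraid sync
```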

Combined, these two technologies allow me to have my cake and eat it too.

If this was news to you - maybe you want to give it a shot too. (I don’t consider myself a very advanced user and I found it dead simple to deploy.)
If you’re already running mergerFS and SnapRAID, feel free to showcase your use case and setup!
If you found any of the above incorrect or misleading, feel free to correct me.

#selfhosted

threaded - newest

eightys3v3n@lemmy.ca on 01 May 18:45 next collapse

Sounds interesting, thank you!

eightys3v3n@lemmy.ca on 01 May 19:45 collapse

It seems SnapRAID occupies an interesting middle ground between the least “proper” and the most “proper” storage solutions, for when more resources aren’t available or justified.

On one end, a single drive (or dozens of them) with data haphazardly duplicated around, and lost when individual drives die. On the other, a huge ZFS volume with its large setup cost, lack of expandability (until AnyRaid is done), and potentially unneeded additional functionality.

mergerFS is then a natural extension, offering a unified way to organize and access the data that SnapRAID is securing (instead of mounting all those drives in separate places).

If someone merged these projects into one solution and added a couple of extra functions (like compression, deduplication, or caching), it seems like it could be a comparable offering to ZFS for different use cases. Imagine a NAS product with this setup by default. Much more intuitive for users, I would argue.

IratePirate@feddit.org on 01 May 21:44 collapse

a comparable offer to zfs

Weeell, zfs does bring a lot more to the table than mergerFS + snapRAID, e.g. snapshotting and scrubs/bitrot protection. But then again, it does so at a much higher price.

Imagine a NAS offering with this setup by default. Much more intuitive to users I would argue.

Agreed. unRAID has something very similar and even (slightly) better (their RAID syncs automatically, not on command). But then again, unRAID isn’t FOSS.

Andres4NY@social.ridetrans.it on 01 May 22:30 collapse

@IratePirate @eightys3v3n Snapraid offers scrub/bitrot protection - check out 'snapraid scrub'.
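For anyone curious, a sketch of what that looks like (the percentages below are illustrative choices, not recommendations from the thread):

```shell
# verify 10% of the array per run, touching only blocks not scrubbed
# in the last 20 days; run weekly and the whole array gets re-checked
# over a couple of months
snapraid scrub -p 10 -o 20

# report scrub coverage and any errors found
snapraid status
```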

IratePirate@feddit.org on 01 May 22:46 collapse

I stand corrected - thank you!

Andres4NY@social.ridetrans.it on 01 May 19:32 next collapse

@IratePirate Combine this with restic (or borgbackup, if that's how you swing) for a bombproof selfhosting solution.

IratePirate@feddit.org on 01 May 21:32 collapse

Good call! I’m doing regular borgbackups to an off-site, self-hosted backup server. (I’d still prefer not to be bombed! :D)
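For anyone wanting to replicate this, a minimal borg workflow along these lines could work (the repository path and compression choice are assumptions, not OP’s actual setup):

```shell
# initialize the off-site repository once (encrypted with a passphrase)
borg init --encryption=repokey ssh://backup@backuphost/./borg-repo

# nightly: archive the pool, named after hostname and timestamp
borg create --stats --compression zstd \
    ssh://backup@backuphost/./borg-repo::'{hostname}-{now}' /mnt/pool

# keep 7 daily, 4 weekly, and 6 monthly archives
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 \
    ssh://backup@backuphost/./borg-repo
```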

possiblylinux127@lemmy.zip on 01 May 22:57 next collapse

Honestly I wouldn’t recommend much outside of ZFS for data storage

ZFS is hard to beat

scrubbles@poptalk.scrubbles.tech on 02 May 00:54 collapse

ZFS works best for drives of the same size. It is possible to do multiple drive sizes, but it’s pretty tedious. Mergerfs is a clear winner when you have many varying sizes of drives and are okay with the speed tradeoff

possiblylinux127@lemmy.zip on 02 May 01:40 collapse

It seems worse in many regards

I’d rather do btrfs honestly

scrubbles@poptalk.scrubbles.tech on 02 May 03:48 collapse

Ok great, thanks for sharing.

Decronym@lemmy.decronym.xyz on 01 May 23:00 next collapse

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Fewer Letters More Letters
NAS Network-Attached Storage
RAID Redundant Array of Independent Disks for mass storage
SATA Serial AT Attachment interface for mass storage
ZFS Solaris/Linux filesystem focusing on data integrity

4 acronyms in this thread; the most compressed thread commented on today has 10 acronyms.

[Thread #269 for this comm, first seen 1st May 2026, 23:00] [FAQ] [Full list] [Contact] [Source code]

adarza@lemmy.ca on 01 May 23:00 next collapse

i have three snapraids here. one with (what was at the time) new disks, and two made up of old salvaged disks like you’ve got–pulled from systems and laptops headed for the recycle bin.

irmadlad@lemmy.world on 02 May 02:34 next collapse

Was it hard to set up? Any field expedient modifications, adjustments, or fiddling? I’ve got a ton of old HDD from desktops, laptops, old servers sitting in one of my closets. Hmmmmmm

adarza@lemmy.ca on 02 May 04:07 collapse

not difficult at all, snapraid’s online documentation is very good.

yo_scottie_oh@lemmy.ml on 02 May 10:33 collapse

How do you connect them all to your host machine? Are they in an external cage w/ USB cables or mounted internally w/ SATA cables?

Overspark@piefed.social on 01 May 22:46 next collapse

SnapRAID offers an additional benefit over real RAID-like systems: it functions as a short-term backup. If you sync it daily like I do, that means that if you accidentally delete a bunch of files (old enough to have been synced, i.e. older than one day in my case), you can restore them from the SnapRAID parity.

The reverse is also true of course: if you lose a disk you also lose today’s changes to that data. So it’s most suited to large collections of rarely changing stuff like photos and videos and music IMHO.
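The restore side of this is handled by `snapraid fix`; a quick sketch (the file path is hypothetical):

```shell
# restore only files that are missing or deleted since the last sync
snapraid fix -m

# or restore one specific accidentally deleted file
snapraid fix -f path/to/file
```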

irmadlad@lemmy.world on 02 May 02:17 next collapse

@IratePirate@feddit.org That’s pretty resourceful and pretty cool. I’m intrigued. I’m going to have to read up on that. Thanks for posting

plz1@sh.itjust.works on 02 May 03:59 collapse

This is why I went with Unraid. Being able to slap whatever drives in that I have on hand was the primary driver for getting away from btrfs (Synology). And that build was about 3 months before RAM prices started to explode last year, which I read as “all parts gonna skyrocket”, which they have.