LF Suggestions on how to architect new setup, 5x22TB + 3x4TB NVME
from chazwhiz@lemmy.world to selfhosted@lemmy.world on 19 Nov 14:36
https://lemmy.world/post/39019749

I’m way over-analyzing at this point, so I’d love any suggestions or advice on approaching this new setup.

Today I run OpenMediaVault 7 on an i5 NUC with 4x8TB in RAID5 in a cheap USB enclosure (hardware RAID, which I greatly regret).

Upgrading to a Minisforum N5 NAS Pro with 5x22TB and 3x4TB NVMe drives.

My primary use is media which is the vast bulk of storage. I also point some Time Machine backups at it and use it to archive “what if I need this someday” stuff from old external drives we’ve used over the years. But all the critical stuff is also sent to Backblaze, so this is not primary backup per se, more for the local convenience.

I have decided against Proxmox, so this will be OMV (or maybe Unraid) bare metal. I’ve also ruled out TrueNAS. Proxmox and TrueNAS both just add too many new “pro” layers I don’t really want to deal with.

I’m considering:

Setup 1:

Setup 2:

Setup 3:

Setup 4:

The caching stuff I clearly don’t understand, but I’m very interested in it. I’m thinking about it mostly for “download and consume immediately” situations. Today I have a huge bottleneck in unpacking and moving. I’ve got 1Gb fiber and can saturate it, getting a complete ISO in just a few minutes, but then it’s another 30+ minutes waiting for that to actually be usable.

Again, I’ve completely paralyzed myself with all the options, so slap me out of it with whatever you’ve got.

#selfhosted


avidamoeba@lemmy.ca on 19 Nov 14:57

ZFS. It runs on whatever RAM you give it.

CondorWonder@lemmy.ca on 19 Nov 15:58

For your second scenario - yes, you can use md under bcache with no issues. It’s more to configure, but once set up it has been solid. I actually do md/raid1 - luks - bcache - btrfs layers for the SSD cache disks, where the data drives just use luks - bcache - btrfs. Keep in mind that with bcache, if you lose a cache disk you can’t mount - and of course if you’re doing write-back caching, the array is also lost. With write-through caching you can force-disconnect the cache disk and still mount the disks.
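If it helps, here’s roughly what that layering looks like command-wise. Treat it as a sketch: the device names (/dev/nvme0n1, /dev/nvme1n1, /dev/sda) and mapper names are just placeholders for your actual disks.

```
# Cache side: md RAID1 over the two SSDs, LUKS on top, then a bcache cache set
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
cryptsetup luksFormat /dev/md0
cryptsetup open /dev/md0 cache_crypt
make-bcache -C /dev/mapper/cache_crypt

# Data side: LUKS on each data drive, then a bcache backing device
cryptsetup luksFormat /dev/sda
cryptsetup open /dev/sda data1_crypt
make-bcache -B /dev/mapper/data1_crypt

# Attach the backing device to the cache set, then put btrfs on the bcache device
bcache-super-show /dev/mapper/cache_crypt | grep cset.uuid
echo <cset-uuid> > /sys/block/bcache0/bcache/attach
mkfs.btrfs /dev/bcache0
```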

chazwhiz@lemmy.world on 19 Nov 16:37

With write-back you’d only lose what was in cache right? Not the entire array?

CondorWonder@lemmy.ca on 19 Nov 16:50

Bcache can’t differentiate between data and metadata on the cache drive (it’s block-level caching), so if something happens to a write-back cache device you lose data, and possibly the entire array. Personally I wouldn’t use bcache (or ZFS caching) without mirrored cache devices, to ensure resiliency of the array. I don’t know if ZFS is smarter - presumably it can be, because it’s in control of the raw disks - I just didn’t want to deal with modules.
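For reference, the cache mode and detach controls live in bcache’s sysfs interface. The bcache0 / backing-device paths below are just example names:

```
# Check the current cache mode (writethrough is the safer default)
cat /sys/block/bcache0/bcache/cache_mode

# Switch to write-through so a dead cache device can't take dirty data with it
echo writethrough > /sys/block/bcache0/bcache/cache_mode

# Cleanly detach the cache set (flushes any dirty data first)
echo 1 > /sys/block/bcache0/bcache/detach

# If the cache device is already gone, force the backing device to run without it
# (<backing-dev> is the raw backing device, not bcache0)
echo 1 > /sys/block/<backing-dev>/bcache/running
```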

CmdrShepard49@sh.itjust.works on 19 Nov 17:17

Today I have a huge bottleneck in unpacking and moving. I’ve got 1Gb fiber and can saturate it, getting a complete ISO in just a few minutes, but then it’s another 30+ minutes waiting for that to actually be usable.

Are you doing this all manually or using the *arr suite? For me, with Proxmox and ZFS, this process takes a minute or two depending on the size of the files, but even previously on Windows 10 with SnapRAID it was quick.

chazwhiz@lemmy.world on 19 Nov 20:23

Arr and Sab

WhyJiffie@sh.itjust.works on 19 Nov 20:36

Take the ZFS plunge - My only real concern is the overhead

You shouldn’t worry about ZFS overhead if you are planning to use mergerfs.

You can tune the memory usage of ZFS significantly. By default it targets using half of your RAM, but on a home setup that’s wasting resources; you should be able to limit the ARC to 1-2 GB, maybe somewhat more depending on how you want to use it. It’s done with kernel module parameters (or sysctl on FreeBSD).
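On Linux the knob is the zfs_arc_max module parameter (in bytes). The 2 GiB value below is just the 1-2 GB ballpark mentioned above, so adjust to taste:

```
# Cap the ARC at 2 GiB right now (run as root; the ARC may take a moment to shrink)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# Make it persistent across reboots (Debian/OMV style)
echo "options zfs zfs_arc_max=2147483648" >> /etc/modprobe.d/zfs.conf
update-initramfs -u
```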