k8s storage (CSI)
from eutampieri@feddit.it to selfhosted@lemmy.world on 15 Feb 08:34
https://feddit.it/post/26790529
I’m looking for storage classes for a multi-node cluster. I’m currently using Longhorn and NFS, but I’m not happy with the performance. My cluster doesn’t have beefy nodes, so Ceph/Rook is out of the question (for now).
Nodes:
- 8 GB RAM, 4 cores, VM, control plane, 256 GB SSD
- 4 GB RAM, 2 cores, control plane (currently cordoned), 128 GB SSD
- 8 GB RAM, 4 cores, ARM, control plane, 512 GB SSD
- 8 GB RAM, 4 cores, 256 GB SSD
- 16 GB RAM, 6 cores, 256 GB SSD + 1 TB HDD
- RPi 4, 4 GB RAM, 128 GB SSD
#selfhosted
I’m currently using Piraeus / LINSTOR and am quite happy with it: github.com/piraeusdatastore/piraeus
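For anyone wanting to try it: a replicated StorageClass for the LINSTOR CSI driver is roughly this small (the pool name and replica count are placeholders; double-check the parameter names against the Piraeus docs for your version):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-replicated
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  linstor.csi.linbit.com/storagePool: pool1   # placeholder pool name
  linstor.csi.linbit.com/placementCount: "2"  # keep two replicas of each volume
```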
Found a Reddit thread saying that LINSTOR has lower CPU usage (which is my main gripe with Longhorn). Might as well try it and report back. Is there a good way to migrate PVs and PVCs?
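(Worst case, the driver-agnostic fallback would be a one-off copy Job between the old PVC and a new one on the target StorageClass, roughly like the sketch below; the claim names are placeholders, and the workload has to be scaled down first so nothing writes during the copy.)

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pv-copy
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: rsync
          image: alpine:3.19
          # Copy everything, preserving permissions and ownership
          command: ["sh", "-c", "apk add --no-cache rsync && rsync -a /old/ /new/"]
          volumeMounts:
            - name: old
              mountPath: /old
            - name: new
              mountPath: /new
      volumes:
        - name: old
          persistentVolumeClaim:
            claimName: data-longhorn   # existing PVC (placeholder name)
        - name: new
          persistentVolumeClaim:
            claimName: data-linstor    # new PVC on the target class (placeholder name)
```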
I can confirm that the resource usage is quite low indeed. I only used it with Nomad instead of Kubernetes, so I can’t comment on how to best migrate PVs and PVCs.
Thanks
VolSync
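To expand on that: VolSync replicates or backs up an existing PVC, which also makes it usable for moving data onto a different StorageClass. A minimal ReplicationSource sketch (the Secret and PVC names are placeholders; copyMethod: Snapshot assumes the source class supports CSI snapshots):

```yaml
apiVersion: volsync.backube/v1alpha1
kind: ReplicationSource
metadata:
  name: data-source
spec:
  sourcePVC: data-longhorn        # placeholder: the PVC to back up/replicate
  trigger:
    schedule: "0 3 * * *"         # nightly
  restic:
    repository: restic-config     # Secret holding the restic repo URL and password
    copyMethod: Snapshot          # or Clone, if the driver supports volume clones
    pruneIntervalDays: 14
    retain:
      daily: 7
```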
Thanks!
Has anyone tried this: github.com/awslabs/mountpoint-s3-csi-driver?
Perhaps with a Garage DaemonSet as a backend?
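As far as I can tell, the driver only does static provisioning, so each bucket becomes a pre-created PV, roughly like this. The bucket name and the Garage Service URL are made up, and whether an endpoint-url mount option gets passed through to Mountpoint is something to verify in the driver’s docs:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: garage-bucket-pv
spec:
  capacity:
    storage: 100Gi               # required by the API, but ignored by the driver
  accessModes: [ReadWriteMany]
  mountOptions:
    - allow-delete
    - endpoint-url http://garage.storage.svc:3900  # hypothetical Garage endpoint
  csi:
    driver: s3.csi.aws.com
    volumeHandle: garage-bucket-volume   # any unique ID
    volumeAttributes:
      bucketName: legacy-app-data        # placeholder bucket
```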
I’d expect the performance to be awful, but it still has some relatively niche use cases, especially where performance isn’t a concern. I’m imagining legacy apps that don’t speak S3.
Thanks for the feedback! So not what I’m looking for
OpenEBS Mayastor
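For what it’s worth, the StorageClass side of Mayastor is tiny (replica count below is a placeholder). The bigger cost is the io-engine itself, which, as far as I know, wants 2 MiB hugepages and a dedicated core, so it may be tight on the smaller nodes here:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: mayastor-2-replica
provisioner: io.openebs.csi-mayastor
parameters:
  repl: "2"        # number of replicas per volume
  protocol: nvmf   # Mayastor exposes volumes over NVMe-oF
```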
But you could fit Ceph on that, I think, as long as the network between your nodes is fast enough.
I used to use Ceph at work and I’m a bit reluctant to use it at home. Don’t get me wrong, it’s really cool, but those were beefy nodes, and I only have 1 Gbps between nodes.
Mayastor or LINSTOR; Ceph requires too much CPU for these nodes.
I’m not sure you’ll get nice performance on a local network with small appliances (consumer network hardware, mini PCs, and an RPi 4). I’ve never gotten sub-ms network disk access over a 1 Gbps switch and router. In the end I did the opposite: I added one k8s host with a lot of storage, and all storage services are deployed there. Every other k8s service relies on local SSDs.
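Concretely, pinning storage to one host can be done with a plain local PV plus node affinity, something like this (the hostname and path are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-ssd
provisioner: kubernetes.io/no-provisioner   # no dynamic provisioning for local volumes
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: storage-host-data
spec:
  capacity:
    storage: 500Gi
  accessModes: [ReadWriteOnce]
  storageClassName: local-ssd
  local:
    path: /mnt/ssd/data          # placeholder path on the storage host
  nodeAffinity:                  # pins the volume (and its pods) to one node
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values: [storage-node]   # hypothetical hostname
```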
Which CSI?
I have two storage nodes and one is much faster than the other.
I’m currently evaluating a JuiceFS deployment backed by two MinIO instances (one per node, replicated with async bucket replication) behind a load balancer (sidekick) in failover mode. Because JuiceFS also needs a database for its metadata, I went with Valkey + Sentinel.
JuiceFS provides a CSI driver that supports ReadWriteMany volumes and CSI snapshots, and it manages both read and write caches. Performance is much, much better than Ceph. In theory it should be riskier (because of the async replication), but in practice I haven’t lost a bit yet.
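For reference, the wiring is mostly one Secret plus one StorageClass for the JuiceFS CSI driver. The metaurl, bucket, and credentials below are placeholders for my Valkey and sidekick endpoints (Sentinel has its own URL format, see the JuiceFS docs):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: juicefs-secret
  namespace: kube-system
stringData:
  name: jfs-volume                               # JuiceFS filesystem name
  metaurl: redis://valkey.storage.svc:6379/1     # metadata DB (placeholder URL)
  storage: s3
  bucket: http://sidekick.storage.svc:9000/jfs   # S3 endpoint via the sidekick LB
  access-key: minioadmin                         # placeholder credentials
  secret-key: minioadmin
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: juicefs
provisioner: csi.juicefs.com
reclaimPolicy: Retain
parameters:
  csi.storage.k8s.io/provisioner-secret-name: juicefs-secret
  csi.storage.k8s.io/provisioner-secret-namespace: kube-system
  csi.storage.k8s.io/node-publish-secret-name: juicefs-secret
  csi.storage.k8s.io/node-publish-secret-namespace: kube-system
```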
Thanks! A bit more involved than I’d have thought, but still worth considering! Could you update us after your evaluation?