Helm chart installable solutions?
from lukkon@piefed.social to selfhosted@lemmy.world on 04 Feb 21:14
https://piefed.social/c/selfhosted/p/1740479/helm-chart-installable-solutions

Hi, I’ve got Home Assistant, Immich, Jellyfin, and recently tried to set up Nextcloud with Helm. Not everything has been as smooth as I expected.

Do you guys have any other ideas for a similar setup on a local Kubernetes cluster?

#selfhosted


scrubbles@poptalk.scrubbles.tech on 04 Feb 23:31

Helm has worked well for me. What problems did you run into?

lukkon@piefed.social on 05 Feb 07:30

Mostly PV provisioning and DB setups. I use my external hard drive as k3s storage (/mnt/k3s) and, e.g. for Immich, I failed to provide a custom path (/mnt/k3s/immich-media); it always complains about some access rights.

scrubbles@poptalk.scrubbles.tech on 05 Feb 16:29

So you’ve hit the classic issue of data storage on Kubernetes. By design, Kubernetes is node-agnostic: you simply have a pile of compute resources available. By using your external hard drive you’ve introduced something that is physically attached to one node, which effectively declares that your pod must run there and only there, because that’s the only place the external drive is attached.

So you have some decisions to make.

First, if you just want to get it started, you can use a hostPath volume. In your volumes block you’d have:

volumes:
  - name: immich-volume
    hostPath:
      path: /mnt/k3s/immich-media # or whatever your path is
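
For completeness, a rough sketch of the matching mount in the container spec (the container name and mount path here are just illustrative, use whatever your chart actually expects):

containers:
  - name: immich-server                # illustrative container name
    volumeMounts:
      - name: immich-volume            # must match the volume name above
        mountPath: /usr/src/app/upload # example path; check your chart's values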

The gotcha is that you can only ever run that pod on the node with that drive attached, so you need a node selector on the pod spec.
You’ll need to label that node with something like kubectl label node $yourNodeName localDisk=true. Then you can apply a selector to your pod like:

    spec:
      nodeSelector:
        localDisk: "true"

This gets you going, but remember you’re limited to one node whenever you want data storage.

For multi-node and true clusters, you need to think about your storage needs. You will have some storage that should be local, like databases and configs. Typically you want those on the local disk attached to the node. Then you may have other media, like large files that are rarely accessed. For this you may want them on a NAS or on a file server. Think about how your data will be laid out, then think about how you may want to grow with it.

For local data like databases and configs, once you’re at 3 nodes, your best bet with k3s is Longhorn. Fair warning: it has a HUGE learning curve and you will screw up multiple times, but it’s the best option for managing small (<10GB) volumes spread across your nodes. It handles provisioning and makes sure your pods can reach the volumes underneath, without you managing nodes specifically. It’s the best way to abstract away not only compute, but also storage.
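
As a rough sketch of what that looks like once Longhorn is installed (the claim name and size are made up), a PersistentVolumeClaim against the longhorn StorageClass is all a pod needs to get a replicated volume:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: immich-db-data            # example claim name
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn      # StorageClass created by the Longhorn install
  resources:
    requests:
      storage: 5Gi

Longhorn replicates that volume across nodes, so the pod can be scheduled anywhere and still find its data.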

For larger files like media and Linux ISOs, the best options are really NFS or object storage like MinIO. You’ll want a completely separate storage layer that hosts the large files, and then, following a guide like this, you can mount NFS shares directly into your pods. This also abstracts away storage: you don’t care which node your pod runs on, just that it connects to this store and has these files available.
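
As a sketch of the NFS route (the server address and export path are made up), a static PersistentVolume plus a matching claim looks something like this:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-nfs
spec:
  capacity:
    storage: 1Ti
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.50          # hypothetical NFS server / NAS
    path: /export/media           # hypothetical export
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-nfs
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # bind to the static PV above, skip dynamic provisioning
  resources:
    requests:
      storage: 1Ti

Any pod on any node can then mount that claim, which is exactly the abstraction you want for media.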

I won’t lie, it’s a huge project. It took about 3 months of tinkering for me to get to a semi-stable state, simply because it’s such a huge jump in infrastructure, but it’s 100% worth it.

r0ertel@lemmy.world on 05 Feb 17:17

Excellent write-up. I had Nextcloud running on K3s with its files on a NAS shared via MinIO, and it worked well. I’m looking into Longhorn, but I only have 2 nodes and it wants at least 3. I’m reevaluating my resiliency needs in favour of simplification.

scrubbles@poptalk.scrubbles.tech on 05 Feb 17:40

If you’re only at 2 nodes, then I think host paths with node selectors are the way to go. That gets you up and running in the short term, but know that converting later to something like Longhorn will be a process (creating the volumes, copying all the data over, ensuring correct user access, etc.).
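
If it helps, that copy step can be done with a throwaway pod that mounts both the old hostPath and the new PVC side by side (names, paths, image, and ownership here are just examples):

apiVersion: v1
kind: Pod
metadata:
  name: migrate-immich-media
spec:
  nodeSelector:
    localDisk: "true"             # has to run on the node that holds the old data
  restartPolicy: Never
  containers:
    - name: copy
      image: alpine:3.20
      # copy everything, then fix ownership for the app's UID (1000 is just an example)
      command: ["sh", "-c", "cp -a /old/. /new/ && chown -R 1000:1000 /new"]
      volumeMounts:
        - { name: old-data, mountPath: /old }
        - { name: new-data, mountPath: /new }
  volumes:
    - name: old-data
      hostPath:
        path: /mnt/k3s/immich-media
    - name: new-data
      persistentVolumeClaim:
        claimName: immich-media-longhorn   # hypothetical Longhorn-backed PVC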

r0ertel@lemmy.world on 05 Feb 18:59

I have something similar to host paths with node selectors: an NFS provisioner for PVs. The provisioner is tied to the node with the large disk. It’s not resilient to node outages, but it lets me spread pods across the nodes. For my deployments, I prefer S3 storage wherever possible.

Lodra@programming.dev on 05 Feb 00:08

Honestly, I avoid Helm charts as much as possible. My preference is raw manifests + kustomize, deployed by Argo.
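
For anyone curious, that approach boils down to a kustomization.yaml pointing at plain manifests (the file and namespace names here are made up), which Argo, or just kubectl apply -k, can then deploy:

apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: immich                 # example namespace
resources:
  - deployment.yaml               # plain, hand-written manifests
  - service.yaml
  - pvc.yaml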

cymor@midwest.social on 05 Feb 01:31

Prometheus and Grafana
Elasticsearch, Fluent Bit, and Kibana

just_another_person@lemmy.world on 05 Feb 02:55

Helm sucks. You don’t even need it for what you’re trying to do.

fruitycoder@sh.itjust.works on 05 Feb 05:24

What k8s distro? (Vanilla, k3s, RKE2, minikube, Harvester, whatever Red Hat’s open-source OpenShift is called, etc.?) And what issues?

lukkon@piefed.social on 05 Feb 07:28

K3s

fruitycoder@sh.itjust.works on 05 Feb 14:59

That’s a good one. ARM or x86?

oranki@sopuli.xyz on 05 Feb 06:25

I’ve been favoring manual YAML definitions, using kluctl as a manual nicety in between.

Not quite GitOps, though the definitions are in a git repo to keep track of changes. Whenever I change something, I deploy it manually, and kluctl gives a nice diff of the changes.

Decronym@lemmy.decronym.xyz on 05 Feb 17:20

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:

Git: popular version control system, primarily for code
NAS: Network-Attached Storage
NFS: Network File System, a Unix-based file-sharing protocol known for performance and efficiency
k8s: Kubernetes container management package

4 acronyms in this thread; the most compressed thread commented on today has 6 acronyms.
