How do you keep your services updated?
from jeena@piefed.jeena.net to selfhosted@lemmy.world on 12 Apr 04:27
https://piefed.jeena.net/c/selfhosted/p/360683/how-do-you-keep-your-services-updated
Back in the day it was nice: apt-get update && apt-get upgrade and you were done.
But today every tool/service has its own way of being installed and updated:
- docker:latest
- docker:v1.2.3
- custom script
- git checkout v1.2.3
- same but with custom migration commands afterwards
- custom commands change from release to release
- expects you to run the update as a specific user
- update nginx config
- update your own default config, and the service depends on those config changes
- expects new versions of tools
- etc.
I selfhost around 20 services like PieFed, Mastodon, PeerTube, Paperless-ngx, Immich, open-webui, Grafana, etc. And all of them have some dependencies which need to be updated too.
And nowadays you can’t really keep running an older version, especially when it’s internet-facing.
So anyway, what are your strategies for staying sane while keeping all your self-hosted services up to date?
#selfhosted
Damn, I’m lucky I just run small game servers, ’cause the old way still works for me, aside from Pi-hole, which needs to be updated, but it squeals at me when it needs it so I don’t have to remember.
It’s just a hobby so I know I have room for improvement, but the bigger my environment gets the more difficult it is to keep everything completely up to date, like you said. Given that, my main priorities are:
Now that being said, I’ve started to use ansible playbooks for deploying OS updates. I have a playbook that uses default options when doing an apt upgrade and it also works for the docker engine user prompt.
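A minimal sketch of that kind of ansible-driven apt upgrade as an ad-hoc command (the host group is a placeholder; the poster’s actual playbook isn’t shown here):

    # upgrade every host in the inventory; the apt module's default dpkg options
    # (force-confdef,force-confold) keep config-file prompts from blocking the run
    ansible all -b -m ansible.builtin.apt -a "update_cache=yes upgrade=dist"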
About 75% of my services are native installs in LXCs and I try to always install by including the app repo so that apt can update it and the other 25% are in docker. I used to use watchtower but that’s no longer maintained, so I do container updates manually as needed.
It’s not perfect, but it’s just for fun so 🤷
Hm, I didn’t think of ansible, that’s something I should look into using.
I wonder if anyone ever wrote an update aggregator that would find all package managers, containers and git repos and whatnot and just do all of them.
Some are a right pain to update, such as Nextcloud. Installing a monthly update should not feel like an enterprise prod deployment.
It’s kinda ironic that package managers have caused the exact problem that they are supposed to solve.
I am developing a script which will do that specifically for my services.
Right now, at the first stage, it only checks GitHub, Codeberg, etc. to see whether there is a new version compared to what each service is currently running.
https://git.jeena.net/jeena/service-update-alerts
I am extending it now with an auto-update part, but it’s difficult because sometimes I can’t just call a static script, since other migration steps need to run. So I have a classifier which takes the release notes and lets a local LLM judge whether it’s OK to run the automation or whether I need to do it manually. But for that I am collecting old release notes as examples from each service. This takes forever, so I only have it done for PieFed, PeerTube, Immich and open-webui, and I haven’t pushed those changes to the public repo yet.
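The release-check part can be approximated against the forge APIs; a rough shell sketch for GitHub (the repo path and jq usage are illustrative, not what service-update-alerts actually does):

    # latest published release tag for a hypothetical repo
    curl -s https://api.github.com/repos/someowner/someproject/releases/latest \
      | jq -r '.tag_name'
    # compare that against the version the running service reports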
cd appname && dockup && cd .., with dockup being an alias for docker compose pull && docker compose up -d. Repeat for the few services I have.
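Written out, that alias is just (same commands as above, defined once in the shell config):

    alias dockup='docker compose pull && docker compose up -d'
    cd appname && dockup && cd ..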
So everything is dockerized and points to :latest?
What about the necessary changes to the docker compose files? What about changes necessary in nginx configs?
I guess you also read each release notes manually?
I’m not running anything where I’ve had to alter compose files. Also never had to change nginx configs. Maybe I’m just running particularly stable stuff.
I usually read update notes yes, but I’d be lying if I said I was always thorough.
I don’t understand. docker compose up starts the container. When does the docker compose pull happen? Or is there an update directive in the compose file?
Whoops, I forgot that the alias includes a pull for the latest versions.
Ah. I thought there was an option in docker compose I could use.
And a docker image prune -a while the containers are running? :)
Yes, I usually do that after I check all the services are running okay.
Renovate + GitOps. Check out github.com/onedr0p/cluster-template
If you don’t like Kubernetes, you can get a similar setup with doco-CD. The only limitation is that doco-CD can’t update itself, but you can use SOPS and Renovate all the same for the other services.
That or Komodo when using docker. Renovate is really good, you always know which version you’re at, you can set it up to auto merge on minor and/or patch level, it shows you the release notes etc.
This tutorial is good: nickcunningh.am/…/how-to-automate-version-updates…
I guess auto merge isn’t enabled, since there’s no way to check if an update doesn’t break your deployment beforehand, am I right?
You can configure automerge per stack and also if it’s allowed on patch, minor or major upgrades.
Yes, but usually when you use automerge you should have set up a CI to make sure new versions don’t break your software or deployment. How are you supposed to do that in a self-hosting environment?
Ideally, you have at least two systems: test updates in the dev system and only then allow them in prod. So no auto merge in prod in this case, or somehow have it check whether dev worked.
Seeing which services are usually fine to update without intervening and tuning your renovate config to it should be sufficient for homelab imho.
Given that most people are running :latest and just yolo the updates with watchtower or not automated at all, some granular control with renovate is already a big improvement.
One of the reasons I switched to YunoHost (the other being backups).
Personally I just wrote a bash script that does all of my regular updates and I run it manually whenever
And it’s stable enough for you? Do you go service by service or is it good enough for everything?
For docker compose I have a part of the script that gets all subdirs of the “projects” dir and does an update for each one (that way any new service will be updated without having to manually specify it in the script). For everything else I just hard-coded the update process.
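A minimal sketch of that loop, assuming each subdir of a ~/projects directory holds a compose file:

    # pull and restart every compose project under ~/projects
    for dir in "$HOME"/projects/*/; do
        (cd "$dir" && docker compose pull && docker compose up -d)
    done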
Generally 90% of my updates are just running the script, on the other 10% I do some manual work (like updating configs, etc)
But for the most part this is me refusing to use already existing tools that could probably do most of this better
Everything I run, I deploy and manage with ansible.
When I’m building out the role/playbook for a new service, I make sure to build in any special upgrade tasks it might have and tag them. When it’s time to run infrastructure-wide updates, I can run my single upgrade playbook and pull in the upgrade tasks for everything everywhere - new packages, container images, git releases, and all the service restart steps to load them.
It’s more work at the beginning to set the role/playbook up properly, but it makes maintaining everything so much nicer (which I think is vital to keep it all fun and manageable).
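Running just the tagged upgrade tasks across every host could then look something like this (playbook, inventory, and tag names are made up for illustration):

    # run only the upgrade-tagged tasks from every role, on all hosts
    ansible-playbook -i inventory.yml upgrade.yml --tags upgrade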
Yeah, for some reason I didn’t think of ansible even though I use it at work regularly. Thanks for pointing it out!
Just a word of caution…
I try to upgrade 1 (of a similar group) manually first to check it’s not foobarred after the update, then crack on with the rest. Testing a restore is one thing, but restoring the whole system…?
+1 for ansible. There’s a module for almost everything out there.
Podman automatically updates my containers for me.
Because you point to :latest and everything is dockerized and on one machine? How does it know when it’s time to upgrade?
Yeah only for :latest containers, that’s true. It automatically runs a daily service to check whether there are newer images available. You can turn it off per container if you don’t want it.
One of the nice things about it is that I have containers running under several different users (for security reasons) so that saves me a lot of effort switching to all these users all the time.
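For reference, the moving parts Podman uses for this are an auto-update label plus a systemd timer; roughly (container name and image are just examples, and the container has to be managed by a systemd unit or quadlet for the restart to actually happen):

    # opt a container in to auto-updates from its registry tag
    podman run -d --name myservice \
      --label io.containers.autoupdate=registry \
      docker.io/library/nginx:latest
    # the periodic check is driven by this timer (per user or system-wide)
    systemctl --user enable --now podman-auto-update.timer
    # preview what would be updated
    podman auto-update --dry-run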
It’s bad practice to use the :latest tag.
Depends on what you want to do. For production with sensitive data, yes it is. For my ytdl and jellyfin? Perfectly fine.
Depends. There are a few things I update by hand, but as long as you have proper backups it’s generally safer to run the latest versions of things automatically if you don’t mind the possibility of breakage (which is very rare in my experience). This is in the context of self-hosting of course, not a business environment.
All of my self-hosted systems are on a TrueNAS system and using the built-in app system (basically docker). It notifies me when they’re needing updates, and has a single-click update process for everything. I just log in weekly to see if the button is yellow, then check on it like 15 minutes later to see if anything failed to update. Yeah, they’re all on the same hardware, which is probably bad, but nothing there is strictly necessary, it’s all just media stuff and for fun.
The one service that is separate is Pangolin on a DigitalOcean droplet. I just handle that manually when it says there’s an update. Still effectively just docker, but no easy button.
I could automate these more, but I would spend more time setting it up than I would save since it only takes me a couple minutes maybe once a week.
Just make sure that you can access it through a VPN.
https://en.wikipedia.org/wiki/Puppet_(software)
Information about similar tools is available around https://en.wikipedia.org/wiki/Infrastructure_as_code#Tools
FluxCD and renovate working together.
Portainer for container images
Bash script for everything else.
I don’t use docker, etc, so for me, if it’s in the normal Arch repos or AUR then I don’t need to think about it until there’s a .pacnew file to look at.
Then, it’s just the odd git pull on literally 2 devices.
All organised by ansible…
(well except the .pacnew, but I think it’s nice to keep in touch with the packages)
I have a shell script that handles all the quirks. I run it every few weeks. It does a btrfs snapshot so I can go back in case something is wrong, and afterwards it updates my Docker and Podman containers to the latest tag.
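The snapshot-before-update step can be as simple as something like this, assuming / is a btrfs subvolume and a /.snapshots directory exists:

    # read-only snapshot of the root subvolume before touching anything
    sudo btrfs subvolume snapshot -r / /.snapshots/pre-update-"$(date +%F)"
    # ...run the updates, and roll back to the snapshot if something breaks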
For services that aren’t containerized I have some automation to fetch the latest version from the internet (for example some Home Assistant addons that are just JS files).
For the updates that are more difficult to script (or just not worth it because they are very infrequent) I have a script that compares the running version with what’s published on their website and warns me that a manual update is needed.
Since most of the projects I host have a GitHub page, it is relatively simple to write reusable code to do this stuff.
In general I don’t trust automatic updates; issues are rare, but they can be annoying to fix. So I just prefer to update by hand whenever I have a few minutes free and I know I have direct access to the server in case the connection drops.
All my services run in podman containers managed by systemd (using quadlets). They usually point to the :latest tag and I’ve configured the units to pull on start when there is a new version in my repository. Since I’m using openSUSE MicroOS, my server (and thus all services) restarts regularly.
For the units that are configured differently, I update the versions in their respective ansible playbooks and redeploy (though I guess I could optimize this a bit, I’ve only scratched the surface of ansible).
I keep it simple, although reading down through the thread, there are some really nice and ingenious ways people accomplish about the same thing, which is totally awesome. I use a WatchTower fork and run it with --run-once --cleanup. I do this when I feel comfortable that all the early adopters have done all the beta testing for me. Thanks, early adopters. So, about once a month, I update 70 Docker containers. As far as OS updates go, I usually hit those when they deploy. I’m running Ubuntu Jammy, so not a lot of breaking changes in updates. I don’t have public-facing services, and I am the only user on my network, so I don’t really have to worry too much about that aspect.
Kubernetes + helm charts
I do it manually: update the container version, then docker pull and run.
I have reduced the number of containers to the ones I actually use, so it is manageable.
I use v2 instead of v2.1.0 docker container tags if the provider doesn’t make too many bleeding-edge changes between updates.
I run NixOS. Go to the flake file and update the channel version.
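In practice that usually amounts to something like the following (hostname is a placeholder):

    # refresh the flake inputs (nixpkgs etc.) and switch to the new generation
    nix flake update
    sudo nixos-rebuild switch --flake .#myhost
    # roll back to the previous generation if anything misbehaves
    sudo nixos-rebuild switch --rollback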
I just run watchtower in docker. It will watch all your other docker images and update them to latest version automatically if you want.
It works fine, but over time I stopped thinking I need to be on the latest version all the time. It really isn’t very important.
Just a few of my services are open on the internet, mainly caddy and wireguard.
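A typical always-on Watchtower run looks roughly like this (the daily interval is just an example):

    # watch running containers and replace them when a newer image is published
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --interval 86400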
Heads up that watchtower is no longer maintained. I haven’t yet looked into forks or alternatives.
Wow, that sounds like a nightmare. Here’s my workflow:
That gives me an atomic, rollbackable update of every service running on the machine.
unattended-upgrades
I run most of my services in containers with Podman Quadlets. One of them is Forgejo, on which I have repos for all my quadlet (systemd) files, and I use renovate to update the image tags. Renovate creates PRs and can also show you the release notes for the image it wants you to update to.
I currently check the PRs manually as well as pulling the latest git commits on my server. But this could also be further automated to one’s liking.
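As for the bare unattended-upgrades mention a couple of comments up, the usual Debian/Ubuntu way to switch it on, assuming the stock configuration is acceptable:

    sudo apt install unattended-upgrades
    # writes /etc/apt/apt.conf.d/20auto-upgrades enabling the periodic update/upgrade jobs
    sudo dpkg-reconfigure -plow unattended-upgrades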
Fine, I’ll be the low bar.
Proxmox, I just use the GUI to update
I use community-scripts almost exclusively. Community-scripts cron lxc updater does the heavy lifting.
pct enter [lxc] followed by update does a bunch of work too.
For Docker, I use a couple lxcs with Dockge on it, the “update” button takes me most of the rest of the way.
Finally, I have a couple remote machines [diet-pi]. I haven’t figured out updating over tailscale yet, so I just go round semi-frequently for the apt update && apt upgrade -y.
VMs get the apt update && apt upgrade -y too. I keep a bare-bones Mint VM as a virtual laptop, as I don’t have one. I’ll do what I need to do and if I had to install software I’ll just nuke the VM and go again from the bare-bones template.
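One rough option for the “updating over tailscale” part above: with MagicDNS enabled the bare machine name resolves over the tailnet, so a plain ssh one-liner does it (hostname is a placeholder):

    # push the same upgrade to a remote node over the tailnet
    ssh root@dietpi-node 'apt update && apt upgrade -y'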