Selfhosting Sunday! What's up?
from tofu@lemmy.nocturnal.garden to selfhosted@lemmy.world on 19 Oct 15:25
https://lemmy.nocturnal.garden/post/309249
What’s happening on your servers? Any interesting new things you tried?
I didn’t do anything other than update Mastodon (native deployment) lately due to a lack of time. Reading so much about Immich has me considering trying it in parallel to Nextcloud, but I’m not sure if I want to have everything twice.
Not quite homelab, but I’m about to install Linux Mint on my mom’s laptop, and that had me thinking about creating an off-site backup at her place again since she has a fiber connection. I’m still not sure about the potential design though, but currently my only backup is in the same rack as the live stuff.
#selfhosted
I installed gitlab on mine. Time to organize my projects!
Cool, have fun! Any particular reason for Gitlab over other forges?
…. I didn’t consider that there might be alternatives lol. :p but it’s what we use at work, and I’m just a dumb cat with some servers! :3
I just heard it’s a bit more of a hassle to set up than, like, Gitea/Forgejo, but if you already got that, enjoy!
Similar, I just installed Forgejo and I’m digging having my dotfiles local rather than on GitHub.
Yeah! I’m excited to be able to easily sync stuff over my several computers!
I’d had Immich but went to HomeGallery instead. Mostly because I want to keep MY directory structure in case I abandon the chosen platform. Have not regretted my choice (so far … 8 months)
I’ve been using Immich, but with my photos as external media. That lets me keep my directory structure too, but with the Immich features 🙂
You can adjust the directory structure in immich using templates
I’ve not been able to make it work reliably with photos backed up using Immich on my Android phone, is it working for you? I read somewhere that storage templates are not very robust/reliable.
Seems to be working fine for me but i don’t do anything complex, just folders by year and month
Same, using the default storage template.
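For anyone curious, the storage template being discussed lives under Administration → Settings in Immich. A simple year/month layout looks roughly like this — variable names as I understand them from the Immich docs, so double-check against your version:

```
{{y}}/{{MM}}/{{filename}}
```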
I’ve set up Uptime Kuma this weekend, monitoring everything from Docker containers, network devices (like IP cams, switches, printers, …), WireGuard tunnels, etc. etc. (I have 65 monitors set up so far), plus a Signal REST API for notifications.
Furthermore, I integrated multiple new ESPHome switches into my Home Assistant setup for cable modem reset, alarm system controller reset, etc.
Once I have Uptime Kuma fine-tuned I will automate some resets.
Uptime Kuma is amazing so far.
Pretty cool! I’m using Prometheus but I alert over Matrix. Do you have a specific Signal bot account or are you using your normal one and send to yourself?
I actually registered my unused landline for the signal rest api account a long time ago, been using that one for all kinds of automated notifications for over a year.
Oh that’s an amazing idea! I need to check if I still have a landline number and if I can answer calls to it somehow…
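For reference, the Signal REST API being discussed (the bbernhard/signal-cli-rest-api container) takes a plain HTTP POST. A sketch — the host, sender number, and recipient here are all placeholders:

```shell
# Sketch of an Uptime Kuma-style alert via signal-cli-rest-api.
# SIGNAL_API, FROM and TO are placeholders for your own setup.
SIGNAL_API="http://127.0.0.1:8080"
FROM="+15550001111"
TO="+15550002222"
# Build the JSON payload the /v2/send endpoint expects.
PAYLOAD=$(printf '{"message":"%s","number":"%s","recipients":["%s"]}' \
  "Uptime Kuma: monitor down" "$FROM" "$TO")
echo "$PAYLOAD"
# Uncomment once the container is reachable:
# curl -s -X POST "$SIGNAL_API/v2/send" -H 'Content-Type: application/json' -d "$PAYLOAD"
```

Uptime Kuma also ships a built-in notification provider for this API, so the manual call is mostly useful for testing that the container works.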
this might be my next project. I need uptime management for my services, my VPN likes to randomly kill itself.
Gitlab and Nextcloud broke (cuz I ctrl+c’d the pacman hooks, oops), but some manual DB upgrades and rebooting fixed that. However, I can’t log in to my Synapse from anywhere, and can only use it with existing sessions for some reason.
Also, there’s searxng.30p87.de now :3
Building out ansible.
Now it’s creating roles and groups, adding a few items to the hardening playbook, and I’ve been playing with tuning the output as playbooks run.
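As a rough illustration of the kind of layout described above — the role and group names here are made up:

```yaml
# site.yml — hypothetical top-level playbook wiring a group to roles
- hosts: hardened_hosts
  become: true
  roles:
    - common
    - hardening
```

For tuning playbook output, setting `stdout_callback = yaml` in the `[defaults]` section of ansible.cfg is one common tweak.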
I have been looking for something new.
Last week was moving Immich up to the new release. I was on an old version, which meant migrating to an intermediate version to allow a database rebuild. It worked well.
I was bored this week so just ran some wattage testing.
What kind of hardware is it running on?
It’s an Intel i5-7700 cpu in a Gigabyte Z270N mobo. Those were chosen as a form factor fit for the Monsterlabo fanless case. (Only a select set of boards, and in this case 1151 brackets, fit the case)
Self hosting wise, not much, just ran through updates (I prefer to do this manually) and set up a new box which will host another Proxmox host and NAS.
The mobo/CPU that became the new server has been replaced with an Asus Prime X370-Pro and a spare 1700X, to be used as a new EndeavourOS desktop (their defaults are close enough to what I want that I don’t bother with a full manual install). Mostly need it as a KDE 6 box for dev/testing to go alongside the instances of Trixie/Sid, since I’m considering Arch for some work stuff that Debian won’t fit the bill for.
Not much with the server, as I’m finally finishing my switch to Mint on my main PC, now that I’ve finished the things I was stuck with Windows for.
I’m debating whether to put Calibre Web on my PC or media server, as the PC is easier to access, but the server is always on.
I’m also trying to figure out the best way to host a family Minecraft server. I’ve currently got two running at home and one remotely, but have managed to get a decent free tier Oracle server running too.
One of the Minecraft servers is staying local, as it’s just for the immediate family for our gaming sessions, but the other is for the kid’s cousins to join in too. Typically though, they haven’t wanted to play since I got the servers running, so I can’t tell which is best for them 🙈
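If anyone wants a starting point, the widely used itzg/minecraft-server container image makes a family server fairly painless — a compose sketch with example values:

```yaml
# docker-compose.yml sketch; memory, port and paths are examples
services:
  family-mc:
    image: itzg/minecraft-server
    environment:
      EULA: "TRUE"        # the image requires accepting the EULA
      MEMORY: "2G"
    ports:
      - "25565:25565"
    volumes:
      - ./data:/data      # world data survives container rebuilds
    restart: unless-stopped
```

The same compose file works on a free-tier VPS, so it’s easy to test both the local and remote options with identical setups.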
If you want to use Calibre Web, accessing the server should be as easy as having it running on the desktop. It also has the benefit of downloading books directly to the e-reader without needing your desktop.
Also check Calibre Web Automated! It’s a fork with lots of additional features.
+1 for CWA
I’ll have a look at Calibre Web Automated, thanks :)
I’ve set up wireless access to my Windows Calibre library but I thought the device needed to be physically connected first? I’m realising that I might be thinking of a different program though.
Not every device supports it, but you can download books via OPDS, which Calibre supports natively. I run KOReader on my Tolino and it works great with it
That’s how I’ve currently got my phone and Fire tablet working, KOReader to Calibre over wifi, it’s the initial setup that I couldn’t remember.
I meant to check last night, but didn’t get a chance. More playing and tweaking needed tonight 😁
I also switched to that, used Syncthing before. Super glad I can still use my old Tolino despite the broken USB port (charging works)
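If the library lives on an always-on box, Calibre’s built-in content server exposes that OPDS feed directly — a sketch, with the library path and port as examples:

```shell
# Serve an existing Calibre library over HTTP/OPDS (path and port are examples).
LIBRARY="$HOME/Calibre Library"
PORT=8083
URL="http://<server-ip>:$PORT/opds"
echo "OPDS feed will be at: $URL"
# Run this on the always-on machine:
# calibre-server --port "$PORT" "$LIBRARY"
```

KOReader (and most OPDS-capable readers) can then be pointed at that URL for the initial setup, no physical connection needed.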
Another +1 for CWA. I tried several solutions and it was the right one for me.
I use pelican (wings and panel) to manage my game servers. I use separate proxmox lxcs and so far it’s been nice and simple to partition the resources, ports, create backups, wtv
I’ll have a look at that, thanks :)
I’ve been trying to convince a VPS to run two instances of mariadb - one for local databases, one to replicate the homelab. Got mariadb@server and mariadb@replica sorted out through systemd, but now stuck on replication from mysql to mariadb. Looks like I’ll be ripping out mariadb and putting everything on mysql.
Have you checked if statement-based replication works from mysql to mariadb?
I’m hung up on unrecognized charset #255. Tried rolling everything back to utf8mb3; suppose I could go all the way to latin1. I imagine there’s a lot of depth I could learn, but dropping mariadb for mysql seems like the path of least resistance right now.
eta: got the character set sorted. Had to make a new dump, confirm that everything in the dump was utf8mb3, then re-prime the replica with that data. Wasn’t enough just to change the character sets internally.
So it works now! Good job
Interesting using systemd for that, I’d probably have chosen containers for that.
What’s the reason for replication vs. dumps? Does the client failover to the replica?
I’m not a systemd guru, but it turned out pretty easy. dev.mysql.com/doc/refman/…/using-systemd.html#sys… Basically just make `[mysqld@copy]` sections in my.cnf, then `systemctl start mysqld@copy`, and systemd is smart enough to pass `copy` into mysql.
I did it slightly differently, using `systemctl edit mysql@.service` to define different defaults files for each instance, then `[mysqld@copy]` sections in each of those files. Seems like the `port` option for each has to go in a `[mysqld]` section, but otherwise ok.
Replication because I want to put some live data, read-only, on the VPS, exposed to the world, while the ‘real’ database stays safely hidden in my intranet. SSH tunnel so the replica can talk to the real database.
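To make the “separate defaults file per instance” variant concrete, each instance’s file ends up looking something like this — paths, port, and server-id are examples:

```ini
# /etc/my-replica.cnf — defaults file for the second instance
[mysqld]
port      = 3307
datadir   = /var/lib/mysql-replica
socket    = /var/lib/mysql-replica/mysqld.sock
server-id = 2        # must differ from the source for replication
read_only = ON       # the replica is read-only, per the setup above
```

The key constraint is that the two instances can’t share a port, socket, or datadir; everything else can stay at defaults.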
Finally finished setting up and testing a Peertube instance. The video stuff and object storage related things certainly make it more involved than other fediverse software, but overall it is working quite nicely. Just need to find some workable solution to using GPU acceleration in containers, but I think I mostly figured it out (might work after a server restart, but my sweet, sweet uptime makes me procrastinate on that 😅 ).
How much storage do you think you’ll need with caching external content? Does Peertube even do that?
Not automatically, but you can configure it to mirror certain video channels or individual videos. But I have not looked into that too much yet.
As for storage: a typical video you would find on such a platform with the different stored video resolutions and so on will take between 0.5 and 3 GB… depending on the length and how well it compresses.
I’ve learned a hard lesson this week. The Jellyfin server’s OS partition ran out of free space and corrupted the database. Nothing to do but reinstall. I guess this week I’ll be reviewing backups! 🤣🤣🤣
I don’t like the sound of that. Sounds like bad programming? Who’s at fault? Jellyfin or the database implementation? Why would a nospace error corrupt everything. Sounds absolutely volatile. 😱
They just made a blog post about the next version fixing a long standing issue with their database management. Should probably improve in the near future.
Yikes. Well that’s good, at least. Progress is good.
Watched status of the entire library was lost though right? Or no?
FYI from the newest release notes for 10.11.0
Jellyfin now actively checks the available free space for its configuration and data directories. If you have less than 2GB of free space in each data directory, Jellyfin now refuses to start to prevent data corruption. Additionally, checks are implemented to prevent certain path misconfigurations that are known to cause issues.
jellyfin.org/posts/jellyfin-release-10.11.0/
I finally got my ISP to enable bridge mode on my modem.
I also learned that I didn’t lose port forwarding and related services because I had been moved behind CGNAT or transitioned to IPv6 – they simply no longer offer port forwarding to residential customers. Ruminate on the implications of that statement so I’m not the only one with blood pressure in the high hundreds.
Port forwarding is done at the router/firewall, so if ports can’t be forwarded it’s a CGNAT thing they are doing. A non-CGNAT IP on the internet can be sent a packet on any port.
No, I got it from the horse’s mouth: my WAN address was publicly routable all along, the ISP just disabled those NAT-related features remotely.
Oh shit, that’s terrible.
the implication of that is weird to me. I’m not saying that the horse is wrong, but that’s such a non-standard solution. That’s implementing a CGNAT restriction without the benefits of CGNAT. They would need to only allow internal-to-external connections unless the connection was already established. How does standard communication still function if it was that way? I know that would break protocols like basic UDP, since that uses fire-and-forget without internal prompting.
It’s perfectly reasonable from the perspective of corporate scum: take away a standard feature, then sell it back as an extra. As far as I know, the modem still had UPnP for applications that rely on it.
My ISP did the same thing recently and what was most annoying is they didn’t admit to changing anything, while trying to sell me a business account.
This weekend I set up Pangolin on a budget VPS and forwarded it back home. I don’t have my VPN back up yet, but it fixed Plex and I can access my security cameras again.
I threw a thinkcenter in my laundry room and did the bare minimum to securely SSH into it (fail2ban, nonstandard port, root login disabled, can't login with password, etc), to be used as a testing platform for building my workplace a new website.
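The hardening steps above map to a handful of sshd_config directives — the port number is just an example:

```
# /etc/ssh/sshd_config
Port 2222
PermitRootLogin no
PasswordAuthentication no
```

Worth restarting sshd from a session you keep open, so a typo doesn’t lock you out of the box.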
Just gotta relearn HTML/CSS and figure out what platform to use.
Also set up traefik/Authelia/maybe Anubis for the new domain and block any access outside of my home or workplace.
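For the home/workplace restriction, Traefik’s IP allow-list middleware covers it — a dynamic-config sketch, with example ranges (the middleware is `ipAllowList` in Traefik v3, `ipWhiteList` in v2):

```yaml
# Traefik dynamic configuration sketch; IP ranges are examples
http:
  middlewares:
    lan-only:
      ipAllowList:
        sourceRange:
          - "192.168.1.0/24"     # home LAN
          - "203.0.113.10/32"    # workplace egress IP (example)
```

Attach `lan-only` to the routers for the new domain and everything else gets a 403 before it ever reaches Authelia.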
I actually just wrote about today’s fun experience!
https://gotosocial.michaeldileo.org/@mdileo/statuses/01K7YKQ9584YBY1QTYQ8RMW7SS
At this point my whole setup is mostly in maintenance mode - I’ve got everything I need up and running, making some minor changes here and there (like swapping out StirlingPDF for Bento), and keeping things up to date. I only started this hobby about 6 months ago or so, and I’m really satisfied with where things are at. We’ll see when the next Big New Thing arrives.
I finally got my home services covered with my website’s wildcard ssl. Which is great, because now I can setup ELK Stack and setup an auth portal on my vps, and get Plex and gitlab out of the house securely.
Love the post haha! Nothing much here, things run rather stable and with low maintenance right now.
I’m super glad I arrived at this state and mostly don’t have to do anything. Just when I want to change stuff :)
I mean, I still do from time to time. Breaking changes require some attention and migrations. But overall it’s good and not a load of daily maintenance.
I’ve set up Kavita for my e-books. Nice UI, looks promising, and I’ve added some books. I haven’t really used it yet, because half of this was just an excuse to try podman (instead of docker). I wanted to set it up to run as unprivileged user, without the docker daemon running as root. That wasn’t too hard, but it was definitely a few extra steps.
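If you ever want that unprivileged container to start with your user session, a rootless quadlet is one option — a sketch, with the image tag and paths as assumptions:

```ini
# ~/.config/containers/systemd/kavita.container — rootless quadlet sketch
# (image name and paths are assumptions; adjust to your setup)
[Container]
Image=docker.io/jvmilazz0/kavita:latest
PublishPort=5000:5000
Volume=%h/kavita/config:/kavita/config:Z

[Install]
WantedBy=default.target
```

After `systemctl --user daemon-reload` it shows up as kavita.service, and `loginctl enable-linger` keeps it running while you’re logged out.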
But something about Kavita didn’t sit well with me. Maybe I don’t self-host enough stuff to know what’s normal, but there is a donate button, which I don’t mind, but its tooltip says: “You can remove this button by subscribing to Kavita+.”
I’m donating to a few software projects already, and I have developed a substantial amount of free software myself. There is nothing wrong with asking for money. But what I cannot stand is when software running on my own device is intentionally acting against my interests. And this tooltip was very clear about not letting me do something that I might want to do.
So I checked the source code for more. I found another anti-pattern: telemetry is opt-out instead of opt-in. But that seems to be it, I didn’t find anything worse than that. So… fair I guess, if the author wants it that way. It’s still free software. It looks like I could delete all the Kavita+ stuff myself and re-build. Which I’m going to do if I keep using it. But this is now an extra step that prevents me from just using it, because I need to feel in control of what I run. Kind of self-inflicted, I guess…
This looks cool, but I really wouldn’t like having the donate button right in front of me
I’ve been running Kavita for a year and a half +, and honestly cannot tell where the donate button is, other than going into the settings and clicking the “kavita+” selection. Maybe I’m oblivious. Can you share what you’re seeing? As well with the telemetry option?
Telemetry is in Server -> General -> Allow Anonymous Usage Collection. When you opt out, it also sends a final message to the server that you’ve opted out. The telemetry itself looks reasonable, I don’t mind sending it. It’s really just the dark pattern of opt-out vs. opt-in that bothers me.
The donate button is the heart in the bottom left menu (not visible in the settings). It’s unobtrusive. I wouldn’t bother to remove it, except the tooltip says that I have to pay to remove it - now it has to go. Asking for donations is fine, but asking for money to remove a button is disgusting.
Thanks!
Telemetry: I was able to find it, but it was already disabled. Maybe I noticed and unchecked it when I initially set it up.
Donate button: Ah, I see where you mean. Interestingly I do not see it when accessing from my mobile device, either as a mobile site or requesting a desktop site. But when accessing it from a desktop browser I do see it in the bottom left.
A quick test shows ublock origin can block the element from showing. I believe that even if the user donates, it is not sufficient to hide this button, and the user must opt to pay for Kavita+ which is a subscription, not a one time license/etc, and forgoing it may lock other features a user is interested in.
wiki.kavitareader.com/donating/ wiki.kavitareader.com/kavita+/
If you reach the point of looking for a different solution, check out Calibre Web Automated. I tried several different things and this was the best one for me.
I’ve been making another attempt to replace Docker with Podman. The issue is I can’t connect to my server through a web browser. I think it’s a firewall issue.
Networking and networking troubleshooting are a bit confusing for me, and that’s my least favourite part of self-hosting. Turns out I actually enjoy writing scripts more, especially the challenge of writing POSIX scripts.
If I can figure it out, I’ll probably write a guide for setting up Podman and Caddy on Alpine Linux since there isn’t a lot of recent information out there from what I found in my searches so far.
Did the switch from Docker to Podman a couple of months ago. Now I host all my services (arr-stack, Forgejo, Nextcloud, Authelia, Traefik, Immich… to name a few) on my VPS and mini pc/home server with Podman.
I recently set up Headscale to connect my VPS running the Traefik proxy to my home lab, to make some of my services running on there accessible from the internet. It was quite the journey, to say the least, as networking is not my forte either.
But feel free to drop me a pm if you need some inspiration or support, maybe I can help.
Thank you for the offer. I still need a bit more time to experiment and zero in on the issue again. Fortunately my setup is quite simple and the only bottleneck will be Caddy.
I basically run Caddy which redirects to a static generated blog, simple file server page and a Kiwix instance. I’m mostly making a self hosted reference site of materials for Linux and Scripting resources.
One day I may add a Forgejo instance, but currently my entire workflow exists around rsync. I’m happy just having my single-file scripts hosted as text files and don’t really need the power of git. At least not at the moment.
Good luck 🫡 I made the switch about half a year ago and went all in on rootless quadlets while I was at it. It was a pretty nightmarish couple weeks figuring out things like user id mappings and rootless permissions, but I got there eventually. Landed on a super neat Traefik config that should work for anyone and makes spinning up new quadlets with their own reverse proxied subdomains really simple. I should really post it somewhere…
In the end I wouldn’t exactly say it was worth it… but it sure feels cool to be fully moved into a more open/native container implementation.
Yeah, I mainly just want to move away to more open projects. When I first started, everyone kept suggesting using Cloudflare. After half a year using their service, I just felt icky the entire time.
In the past couple months I was able to move away and chose to protect myself by learning how to harden my server as well as hiding my server behind multiple layers of obscurity.
With my current setup, the only site traffic I get has only been myself and my custom ssh port only gets hit by bots about 3-10 times a week according to my logs. Only time will tell how effective my layers of obscurity will hold up but so far it seems to satisfy my needs better than I was expecting.
Once I get podman in a state I like, I’ll pretty much be all open sourced and all I’ll have to do for myself is be in maintenance mode unless I care to add a new service. I like to keep things simple so I don’t normally go crazy adding new services anyways.
Rootless podman cannot bind ports <1024; only root can by default (on pretty much any distro I guess). Have you done something like `sysctl net.ipv4.ip_unprivileged_port_start=80` to allow non-root processes to bind to port numbers >= 80?
I’ve read about that and I already have that in my notes as well.
It doesn’t really affect my needs because my ISP blocks incoming traffic on those ports anyway. Also, I’m choosing not to use a tunnel at the moment, so I’ll be using a higher port regardless.
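For completeness, if someone does want the unprivileged-port sysctl mentioned above to survive reboots, it goes in a sysctl.d drop-in (the file name is arbitrary):

```ini
# /etc/sysctl.d/99-unprivileged-ports.conf
net.ipv4.ip_unprivileged_port_start = 80
```

It can be applied without rebooting via `sysctl --system`.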
The last time I asked about it, a few people seemed to agree it was something to do with the firewall settings. That seems most likely since I was able to connect when I disabled my firewall. I’m not a fan of working with iptables. The language for that type of networking is gibberish to me.
I had also tried going from docker compose to rootful podman compose and ran into the same issue. Although I’m trying to work away from podman compose in the future, just taking it in steps.
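Since disabling the firewall made it work, the missing piece is likely just an accept rule for the proxy’s port. A sketch — the port is an example, and the actual command is commented out because it needs root:

```shell
# Sketch: allow the reverse proxy's port through iptables.
# PORT is an example; on Alpine you may need `apk add iptables` first.
PORT=8443
RULE="iptables -I INPUT -p tcp --dport $PORT -j ACCEPT"
echo "would run: $RULE"
# $RULE    # uncomment and run as root to apply (and persist it with your distro's tooling)
```

The same idea applies under nftables or awall, just with different syntax; the point is to match whatever port Caddy listens on.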
I actually got quite a lot done. Finished long-overdue wiring for an outdoor access point and one more camera, replaced a main switch since the old one started to behave unreliably, installed Frigate (which still needs some work), cleaned up some wiring while messing around, updated a bunch of firmwares, replaced the switch in the garage with a managed one, made some changes on my workstation, and some other minor stuff.
Next would be to move the cameras into their own VLAN and harden that setup a bit. And I really should get around to better backups for my VPS. But it’s a new week coming up; if work isn’t too busy I might get something more done.
Currently working on a networking problem. I have multiple Proton VPN connections on my Mikrotik router. Main reason being for fail over in case one endpoint reaches capacity, goes unresponsive, etc.
It’s a bit tricky since Proton issues the same peer and gateway IP for each connection. Haven’t quite got it working the way I want it to yet.
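Not sure if it fits your setup, but on RouterOS v7 one way around identical gateway addresses is to scope each gateway to its tunnel interface and give each tunnel its own routing table — a sketch with assumed interface and table names:

```
# RouterOS v7 sketch; interface and table names are assumptions
/routing table add name=via-proton1 fib
/routing table add name=via-proton2 fib
/ip route add dst-address=0.0.0.0/0 gateway=10.2.0.1%proton-wg1 \
    routing-table=via-proton1 check-gateway=ping
/ip route add dst-address=0.0.0.0/0 gateway=10.2.0.1%proton-wg2 \
    routing-table=via-proton2 check-gateway=ping
```

The `gateway=IP%interface` form disambiguates the duplicate 10.2.0.1, and `check-gateway=ping` handles the failover when one endpoint stops responding.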
CLOUDFLARE IS NO MORE FOR MY NETWORK
Soon I’ll drop Cloudflare for my public services too
Hooray!
What are you moving to?
Anubis, though I always had it before I removed Cloudflare.
I did have troubles passing the Anubis check from time to time. It does not offer an alternative way to prove you’re not a bot and locks you out of the website completely.
Updated several Syncthing servers, and one of them, which comes as a Yunohost package, got a new ID.
So that little thingy flooded all my other Syncthing servers with sharing requests… It’s pretty annoying, and of course it’s the one that serves the off-site backup…
Installed qbittorrent and downloaded a few seasons of Linux isos onto a vps. Discovered accessing those files over SSH to be too slow to play them without buffering so installed filebrowser to get them via http which worked well.
It’s been a long long time since I used bittorrent and wow it works so much better these days.
I got tailscale cert to work but I feel kind of bad about learning tailscale instead of headscale
I was going to read into these. What benefits do you see in headscale?
Mainly that they can’t enshittify because they’re already open. Tailscale is great right now, and free, but who knows in 5 years
I run headscale on my VPS. The tailscale clients are already open source, though by default they connect to the company’s servers for coordinating the net. Headscale is open source and replaces the company’s servers with your own. Best to not rely on some corporate service, which could cease to exist or be enshittified.
Have you looked into netbird? I have been thinking of setting that up over tailscale
Working on getting bazarr to work with Plex, turns out it still requires radarr/sonarr even if I don’t sail the seven seas. Guess I’ll be learning the entire stack tonight :)
I’ve been deploying Gitea (or Forgejo, still can’t decide), but I’ve fallen into the Ansible rabbit hole and can’t get out. Also learned Terraform in the last week and I’m still on the fence about using it in my homelab. It’s nice for the cloud but I don’t think it’s as useful on-prem.
Forgejo has everything Gitea has, plus more, while being more open
My concern when it forked was that forgejo would last a few months and then fizzle out.
That doesn’t seem to be the case.
Yeah, I evaluated my position since and now I’m trying to deploy Forgejo, but I’m still stuck in the IaC rabbit hole and can’t crawl out
Set up Zipline to share bigger files with my friends.
I have noticed that Microsoft and Google are trying to scan my domain for /php-myadmin and similar links that I thankfully do not have.
I already had fail2ban running, but it failed to ban a single IP. I did set up custom filters that would ban admin-panel scanning attempts, but somehow it also bans my home IP and sometimes my phone’s 5G IP. No idea how to fix it so far. Also, this filter/jail doesn’t necessarily jail everyone attempting to reach these links, just sometimes it does.
I’ll have to look at my fail2ban logs and see if I’m having similar issues.
It should be possible to mod your jail to whitelist an IP range on your local Network.
I’m doing that on one of my jails.
Good catch. My IP is dynamic. I’ll look into it, thanks!
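For the whitelisting mentioned above, fail2ban’s `ignoreip` takes CIDR ranges (and, as far as I know, DNS names too, which can help with a dynamic home IP) — the jail name here is hypothetical:

```ini
# jail.local
[admin-scan]            # hypothetical custom jail
enabled  = true
ignoreip = 127.0.0.1/8 ::1 192.168.1.0/24
```

Putting `ignoreip` under `[DEFAULT]` instead applies it to every jail at once.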
Finally managed to carve out some time since the birth of my daughter two months ago to tinker around a bit. Decided to tackle my gripe to semi-automate updating my services when there is a new release.
Now I have Renovate running on my self-hosted Forgejo instance using Forgejo’s actions and a “Podman in Podman” image for its runners. Don’t ask me why I wanted to do a PINP instead of DIND - I guess I like to punish myself. But at least this means everything I deploy is running with Podman 😄
A self hosting thing that I did after having a kid that’s helped us tremendously is hook up an internal camera to frigate to use as a baby monitor, and then have automations in home assistant to automatically change which parent gets notified about crying in the middle of the night based on an agreed-upon “shift”. Just a thought to consider :)
I love the idea! I was actually thinking about building something like a baby monitor with cameras instead of just buying one, so your comment further inspires me to follow up on that. May I ask what camera you were using?
I think it was an older model of this one, but I’m not sure. Just a random amcrest I had lying around.
It’s also worth pointing out that there are a few self-hosted solutions actually meant to act as baby monitors doing stuff like sleep/wake differentiation. I just had trouble getting one of them going and just thought screw it I’ll just use frigate and noise levels to detect crying sounds since he was older and hardier.
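The shift-based routing described above might look something like this in Home Assistant — all entity and notify-service names here are assumptions:

```yaml
# Hypothetical Home Assistant automation sketch; entity/service names assumed
automation:
  - alias: "Notify on-shift parent of crying"
    trigger:
      - platform: state
        entity_id: binary_sensor.nursery_sound   # Frigate noise detection (assumed)
        to: "on"
    action:
      - choose:
          - conditions:
              - condition: state
                entity_id: input_select.night_shift
                state: "Parent A"
            sequence:
              - service: notify.mobile_app_parent_a
                data:
                  message: "Baby is crying"
        default:
          - service: notify.mobile_app_parent_b
            data:
              message: "Baby is crying"
```

An `input_select` helper for the shift makes the handoff a one-tap toggle on the dashboard.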
I installed immich and began migrating our phones away from Google.
I am playing around with Podman Quadlet and that’s one hell of a rabbit hole. I have everything up and running, and now I need to configure the containers, and probably will deal with other pain points, etc.
The good thing is that I have documented the whole process so it is reproducible but it took me quite some time to figure out everything.
Would you mind sharing your process in a write up?
I will definitely do that, I just want to finish the whole setup.
almost done re-setting everything up after a catastrophic failure (ended up replacing multiple drives, the CPU, the motherboard, the PSU, and the RAM).
now I’m just running long command after long command, waiting for drives to zero, ensuring extended smart checks pass on new drives, cloning to my backup drives…
this things been down for a few weeks and I’m so excited to have it back up soon!
anyways, moral of the story is, the 3-2-1 strategy is a good strategy for a lot of reasons. just do it, it may save your ass down the line.
Working on setting up a reverse proxy properly. With all this research and testing, I’m going to be an expert in the area, just to never speak about it to another human being… except in one post or another
Updated to openSUSE Leap 16.0 with the autotool and it broke some things, but nothing terrible. Had to fix the network config and add Packman back for ffmpeg so Jellyfin would work, but that was about it
So, serious question, should I self-host my servers in AWS?
Why would you?
I migrated uptime-kuma to the new v2.0 release. The DB migration took a long time. I learned I probably should have run the vacuum command before the migration, but I never noticed that button in the settings before.
Also preparing Jellyfin for its new 10.11.0 release, which comes with another long-running DB migration.