Jellyfin / Paperless-ngx on Raspberry Pi 4?
from frosch@sh.itjust.works to selfhosted@lemmy.world on 14 May 18:45
https://sh.itjust.works/post/60171250
Does anyone run one of the above on a Pi 4 and can share how well or poorly they run?
Of course, transcoding won’t be any good and OCR probably can’t run in parallel, but aside from that - is it okay?
Currently I’m running everything on a mini ITX box with an i5-6600, which handles my small use cases easily but draws 20-30 W idling most of the day… I’m eyeing a Pi 4B with 8 GB RAM, but I don’t want to spend the money only to realize it doesn’t run smoothly enough.
#selfhosted
I run Jellyfin on a Banana Pi M4 Zero. It’s a little less capable than the Pi4 but runs JF just fine. Specs on this one are quad core 1.5 GHz, 4 GB RAM, 32 GB eMMC on Armbian.
The media files are all on the 1 TB SD card, while the Jellyfin data directory (especially the SQLite DB) is on the eMMC. This works much better, as the DB file kept getting corrupted on the SD card. It should also keep the SD card from wearing out, since it’s pretty much only being read from most of the time.
As you guessed, transcoding is not going to work (JF is removing the v4l2 hardware support anyway), so I pre-transcode them to H264 + yuv420p in an mp4 container before moving them to the SD card. I also scale them down to 720p to fit more on there, but that’s because this is a travel server and isn’t my main media source.
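For reference, pre-transcoding along those lines can be done with ffmpeg; a minimal sketch, assuming ffmpeg is installed (the file names and exact flags are my assumptions, not from the comment above; a real media file would also want `-c:a aac` for audio):

```shell
# Generate a short synthetic test clip so the sketch is self-contained.
ffmpeg -y -loglevel error -f lavfi \
  -i testsrc=duration=1:size=1920x1080:rate=24 input.mkv

# Pre-transcode: H.264 + yuv420p in an mp4 container, scaled down to 720p.
# scale=-2:720 keeps the aspect ratio with an even width; -an drops audio
# because the synthetic test source has none; +faststart helps streaming.
ffmpeg -y -loglevel error -i input.mkv \
  -c:v libx264 -pix_fmt yuv420p -vf "scale=-2:720" \
  -an -movflags +faststart output.mp4
```

Running this once per file before copying it to the SD card means the Pi only ever has to direct-play.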
Can’t speak for Paperless though.
Travel server? Now that’s an idea
A Pi as a travel server sounds like an awesome idea!
What are the use cases for taking it with you instead of just connecting to your homelab?
Edit: Your username made me thirsty, too bad I have to wait some hundred years to get one on earth…
Thanks!
I built it just to see how much I could cram onto a Pi Zero clone/how many self-hosted services I could fit on something that hangs on my keychain, and the answer was “a lot”. It’s something of a travel server, travel router, emergency server, etc.
I mainly just wanted a subset of my homelab services available in something I could take with me anywhere. Home lab could go down while away, power could go out, something to use while glamping, can take it with me if there’s ever an emergency where I have to evacuate, etc.
Services
Networking
File Services
I’ve got a second unit that connects as a client to the main one with some additional backup services:
The second one is basically a backup to my main stack in case of disaster/power outage/etc. Those all tunnel to a cloud VPS + load balancer and only need an internet connection to set up the tunnels to receive traffic from the VPS (and route back out to it). Those services are stopped, and a cron task keeps them in sync with the main ones in my homelab. If I need to fail over, I just SSH into the VPS and re-route traffic to them instead of my homelab endpoints.
I self-host my own email and chat and phone services, so those have become critical services I want to always have online. Essentially these little Pi clones are a backup stack for my most used services and one that is both extremely low power and portable should I ever need to host them on the go (house burns down, have to evacuate due to emergency, etc).
Photo
I still don’t have a “full” case for it, but here is the core unit attached to a UPS circuit which gives it up to about 14 hours of runtime. I’m also planning to add a small USB hub with ethernet into that, but I’m still learning FreeCAD so I’m not quite ready to put it all together yet. Main Unit: <img alt="" src="https://startrek.website/pictrs/image/08c4110a-c839-4c09-b3d6-d754527a7e6b.webp"> Secondary Unit: This is an older photo and is also connected to my Bose radio acting as a Snapcast client to the server on the main unit. <img alt="" src="https://startrek.website/pictrs/image/108153e8-82c4-41aa-9f64-9161894e2386.webp">
I tried paperless years ago on my 4B and it did not work well enough to be usable.
Jellyfin was fine.
I’d say getting an x86 ThinkCentre or equivalent will cut your idle power in half and give you enough overhead to run Paperless and Jellyfin.
I run a 2019 Dell OptiPlex SFF desktop as my ESXi box - it idles under 20 W with multiple Linux and Windows VMs (4 are standard, besides the ad-hoc ones for testing stuff).
Hard to beat the idle combined with performance when needed. Pi really doesn’t compare.
Was it the overall performance or the OCR specifically?
I ran Paperless for some time on a Pi 3B without OCR (doing it manually on a PC, or when scanning, with apps like MakeACopy) and it was okay-ish. Not a lot of documents, though.
And I thought it was mainly the 1 GB of RAM that was the limit (starting Paperless triggered swapping right away…).
Consider papra as a more lightweight alternative for paperless-ngx. I have not used it yet unfortunately.
Gotta take a look at it and spin one container up. Looks promising, too!
As I’m not completely invested in paperless, I’ll definitely try it out, thanks!
At first glance I haven’t seen support for exports/backups, which is imo a must-have.
I tinkered for years with numerous Raspberry Pis but got tired of it. Bought a second-hand Dell Wyse 5070 thin client for like 70€, installed DietPi, and it’s awesome.
On Paperless-ngx on the Pi 4: it can work for light private document volumes, but OCR is the part that gets sluggish fast. I’d keep the workers small and rather offload OCR/Tika/Gotenberg if larger PDFs come in regularly. Paperless itself is usually less of a problem than OCR + I/O; running Jellyfin alongside it is much more relaxed without transcoding.
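Keeping the workers small mostly comes down to environment variables; a hedged sketch of a paperless-ngx env fragment (the values and the host name `bigbox` are illustrative assumptions, not from this thread):

```shell
# paperless-ngx environment sketch for a Pi 4 - tune values to taste
PAPERLESS_TASK_WORKERS=1           # single background worker: OCR jobs queue instead of competing
PAPERLESS_THREADS_PER_WORKER=2     # keep total threads well under the Pi 4's 4 cores
PAPERLESS_OCR_MODE=skip            # don't re-OCR documents that already have a text layer

# Optional: offload office-document parsing to a bigger machine ("bigbox" is hypothetical)
PAPERLESS_TIKA_ENABLED=1
PAPERLESS_TIKA_ENDPOINT=http://bigbox:9998
PAPERLESS_TIKA_GOTENBERG_ENDPOINT=http://bigbox:3000
```

With one worker, a big PDF just means the consume queue drains slowly rather than the whole Pi grinding to a halt.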
As a private user you rarely get gigantic amounts of paper in the mail. It’s a few letters per week at most, and then it really doesn’t matter whether OCR takes 10 minutes per letter. And if you’re a company that really has a larger volume of mail to manage, then a Raspberry Pi is definitely the wrong choice.
Yes, with fewer workers the OCR just takes longer. Since I rarely need to look at a document in Paperless right away, it can take its few minutes as far as I’m concerned.
I also haven’t fully settled on a workflow yet:
I have a Pi 4B with 8 GB RAM and run Jellyfin plus some other stuff on it. Works great.
I first installed it via the repo, but transcoding didn’t work at all. After switching to the Docker setup, though, transcoding worked okay out of the box. It definitely takes a few seconds for the stream to start when it has to transcode, but no hiccups afterwards. Unless you jump around, of course; also, I never had more than one stream transcoding.
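For anyone trying the Docker route, a sketch of what such a setup might look like, assuming the official `jellyfin/jellyfin` image, hypothetical host paths, and the Pi 4’s V4L2 decode/encode device nodes (keep in mind the comment above that Jellyfin is dropping v4l2 hardware support):

```shell
# Jellyfin in Docker on a Pi 4; host paths are hypothetical - adjust to your layout.
# /dev/video10-12 are the Pi 4's V4L2 decode/encode device nodes.
docker run -d --name jellyfin \
  -p 8096:8096 \
  -v /srv/jellyfin/config:/config \
  -v /srv/jellyfin/cache:/cache \
  -v /srv/media:/media:ro \
  --device /dev/video10 --device /dev/video11 --device /dev/video12 \
  --restart unless-stopped \
  jellyfin/jellyfin
```

Hardware acceleration then still has to be enabled in the Jellyfin dashboard; without it, transcoding falls back to software and a single stream is about the practical limit on a Pi 4.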