Proxmox Plex Hardware Acceleration
from modeh@piefed.social to selfhosted@lemmy.world on 16 Sep 14:20
https://piefed.social/post/1269275

I recently upgraded my setup from an RPi running DietPi to a Beelink 14 (N150) running Proxmox. So far it’s been fun screwing around with it, creating VMs and LXCs, and getting to learn the ways of Proxmox.

My latest obstacle, however, was migrating my Plex setup from the RPi to the Beelink. I have created an unprivileged LXC and set up Plex manually. I know there is a Community Helper Script for it, but where is the fun in that?

Anyway, I am trying to enable HW acceleration and can’t seem to pass the GPU through to the LXC without breaking things (thankfully I have a backup that I always restore once things break).

I looked up tutorials online that might help but can’t seem to find anything applicable; mostly people suggest just using the Community Helper Script and getting it over with. There isn’t much I can learn doing it the easy way.

Can anyone suggest how to go about this, or at least point me in the right direction?

Thank you.

#selfhosted


frongt@lemmy.zip on 16 Sep 14:49 next collapse

You could read the script and see what it does.

modeh@piefed.social on 16 Sep 16:08 collapse

That was the first thing I tried. It has the relevant parts for setting up the GPU when the LXC is privileged, but nowhere do I see how it handles the unprivileged case.

UnpledgedCatnapTipper@piefed.blahaj.zone on 16 Sep 15:06 next collapse

It should be the same process as for Jellyfin, aside from the steps to install or change settings in Jellyfin itself.

https://www.reddit.com/r/Proxmox/comments/1c9ilp7/proxmox_gpu_passthrough_for_jellyfin_lxc_with/

modeh@piefed.social on 16 Sep 16:14 collapse

Will give this a look. Thank you.

UnpledgedCatnapTipper@piefed.blahaj.zone on 16 Sep 16:54 collapse

Good luck! I struggled immensely with getting it to work in an unprivileged container, especially the bind mounts and their permissions for my media file shares. I ended up giving up and running Jellyfin in a privileged container after a few days of fighting with it.

Shadow@lemmy.ca on 16 Sep 15:17 next collapse

FWIW I did this with Jellyfin and ended up just using a VM instead of an LXC. That way I could pass the entire device through, not have to mess with drivers on my Proxmox host, and not have to reboot all my VMs/LXCs just to apply updates.

modeh@piefed.social on 16 Sep 16:14 collapse

I can do that, no issue; I simply thought it could be a good learning experience to use LXCs, as I have never used them before.

rumba@lemmy.zip on 17 Sep 10:28 collapse

A worthy cause, but there’s no end of other things to host in LXCs. It’s possible, but unpleasant, and can be brittle across updates.

SanguineBrah@lemmy.sdf.org on 17 Sep 10:02 next collapse

I have an N150 Proxmox setup as well. I had to enable IOMMU on the kernel command line to get PCIe passthrough working (intel_iommu=on).
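For reference, a minimal sketch of what enabling IOMMU typically looks like on an Intel host booting via GRUB (hosts using systemd-boot edit /etc/kernel/cmdline and run proxmox-boot-tool refresh instead; the iommu=pt addition is an optional assumption, not from the comment above):

```shell
# /etc/default/grub -- add intel_iommu=on to the kernel command line
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# Then regenerate the bootloader config and reboot:
#   update-grub && reboot
# After rebooting, verify IOMMU is active:
#   dmesg | grep -e DMAR -e IOMMU
```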

non_burglar@lemmy.world on 17 Sep 14:15 collapse
  1. Nesting=1. This isn’t about virtualizing inside the container; it allows internal resources to access parent resources.

  2. You should only need the cgroup2 entries, but they should be pointing to the correct devices:

  • cgroup2 entries to allow rwm access to the correct device
  • /dev/dri dir and file entries that specify bind,optional,create

NVIDIA example, but QuickSync is similar:

lxc.cgroup2.devices.allow: c 226:0 rwm
lxc.cgroup2.devices.allow: c 226:128 rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
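Tying this back to the unprivileged question: the entries above expose the device, but in an unprivileged container the mapped users still need read/write permission on the render node. A hedged sketch of two common approaches (the gid value 104 is an assumption for illustration; check your host’s render group with `getent group render`):

```
# Option 1 (Proxmox VE 8+): a device passthrough entry in
# /etc/pve/lxc/<CTID>.conf, which shifts ownership for
# unprivileged containers automatically:
dev0: /dev/dri/renderD128,gid=104

# Option 2 (classic): keep the lxc.* entries above and relax the
# render node's permissions on the host (coarse, but common in
# tutorials), or map the container's plex user into the host's
# render group via lxc.idmap entries:
#   chmod 0666 /dev/dri/renderD128
```

Inside the container, `ls -l /dev/dri` should then show the render node with usable permissions, and Plex should list the device under hardware transcoding settings.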