What type of computer setup would one need to run ai locally?
from Grumpy404@piefed.zip to selfhosted@lemmy.world on 13 Feb 17:21
https://piefed.zip/c/selfhosted/p/1079806/what-type-of-computer-setup-would-one-need-to-run-ai-locally
Not sure if this goes here or if this post will be hated on, but I want to host AI locally, like LLMs and ComfyUI's newer models, and I'm not sure what type of setup or parts would work best on a possibly slim budget. I'm also not sure whether now is the time, with inflation and such.
I don't have a price in mind yet, but I'm wondering how much it would cost and what parts I might need.
If you have any questions or concerns, please leave a comment.
#selfhosted
AI said:
Hope that helps /s
I wonder if, when generating that price estimate, it took into account all the hikes in RAM pricing that it itself is causing… 🤔
Stupid fucking AI data centers…
It really comes down to what kind of speed you want. You can run some LLMs on older hardware “just fine” and many models without a dedicated GPU. The problem is that the time taken to generate responses gets to be crazy.
I ran DeepSeek on an old R410 for shits and giggles a while back, and it worked. It just took multiple minutes to actually give me a complete response.
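If you just want to get a feel for it before buying anything, here's a rough sketch of hitting a local Ollama server from Python (assumes Ollama is installed and you've pulled a small model; the port and model name below are just the defaults, swap in whatever you're actually running):

```python
import requests

# Ollama listens on port 11434 by default; /api/generate is its completion endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3.2",  # assumes you've done `ollama pull llama3.2` first
    "prompt": "Explain the difference between VRAM and system RAM in one sentence.",
    "stream": False,      # get one JSON response instead of a token stream
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=600)
resp.raise_for_status()
print(resp.json()["response"])
```

That will run even on CPU-only hardware; it'll just crawl, like the R410 example above.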
Depends on how fast you want it to run. A Raspberry Pi with an AI hat runs well enough.
What’s an ai hat? Like a red hat? Or a fedora?
Hats are little modules you can stick on your pi for extra functionality!
And they probably do have a Fedora hat…
Crazy! I thought that’s a joke. Thanks!
A lot of expansions for the Pi are called HATs for some reason.
www.raspberrypi.com/products/ai-hat/
How’s the performance on something like this?
I'm running gpt-oss-20b fine on my M3 Mac Mini.
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I’ve seen in this thread:
11 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.
[Thread #91 for this comm, first seen 13th Feb 2026, 17:50] [FAQ] [Full list] [Contact] [Source code]
You forgot the acronym “EVIL.”
I was using an Nvidia 3060 for a while, then had 2 in one box, then switched to a 3090.
The amount of VRAM is a big factor for decent performance. Getting it to not sound like a predictably repetitive bot, though, is a whole separate thing that is still kind of elusive.
High RAM for MoE models, high VRAM for dense models, and the highest GPU memory bandwidth you can get.
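As a rough rule of thumb for sizing, the weights alone take roughly params × bits-per-weight ÷ 8 bytes; here's a quick back-of-envelope sketch (the quant levels are just typical llama.cpp-style options, and real usage is higher once you add context/KV cache):

```python
def approx_weights_gb(params_billion: float, bits_per_weight: float) -> float:
    """Rough size of the model weights alone, ignoring KV cache and runtime overhead."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9  # decimal GB

# Typical quantization levels (approximate bits per weight):
for name, params, bits in [
    ("7B at ~4-bit (Q4)", 7, 4.5),
    ("13B at ~4-bit (Q4)", 13, 4.5),
    ("70B at ~4-bit (Q4)", 70, 4.5),
    ("20B at 8-bit", 20, 8.0),
]:
    print(f"{name}: ~{approx_weights_gb(params, bits):.0f} GB")
```

So a 7B quant fits comfortably in 8 GB of VRAM, while 70B-class dense models push you into high-RAM or offloading territory.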
For stable diffusion models (ComfyUI), you want high VRAM and bandwidth. Diffusion is a GPU-heavy and memory-intensive operation.
Software/driver support is very important for diffusion models and ComfyUI, so your best experience will be on Nvidia cards.
I think realistically you need 80 GB+ of RAM for things like Qwen-Image quants (40 for the model, 20-40 for LoRA adapters in ComfyUI to get output).
I run a 128 GB AMD Ryzen AI Max+ 395 rig; Qwen-Image takes 5-20 minutes per 720p result in ComfyUI. Batching offers an improvement, and reducing iterations during prototyping makes a huge difference. I haven't tested since the fall though, and the newer models are more efficient.
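If VRAM is the bottleneck, similar low-memory tricks to what ComfyUI offers are easy to try with the diffusers library; a rough sketch below (the checkpoint ID is just the classic SD 1.5 one and may have moved on Hugging Face, so swap in whatever model you actually use):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load in fp16 to roughly halve VRAM compared to fp32.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)

# Shuttle weights between CPU and GPU so peak VRAM stays low
# (slower per image, but fits on ~6-8 GB cards).
pipe.enable_model_cpu_offload()

image = pipe(
    "a cozy cabin in the snow, golden hour",
    num_inference_steps=25,  # fewer steps = faster prototyping, as mentioned above
).images[0]
image.save("cabin.png")
```

Heavier models like Qwen-Image won't fit on a small card this way no matter what, which is where the big unified-memory boxes come in.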
FYI diffusion models are not really LLMs
Alex Ziskind on YT tests a number of on-site AI devices: youtu.be/QbtScohcdwI
I’m running a couple of smaller chat models on my mid-range new-ish laptop and they’re fairly quick. Try out Jan with something like their jan-nano model on whatever you’ve already got and get a feel for what you can do.
As others said, it all depends on what you expect. I run Stable Diffusion on my gaming PC with 32 GB RAM and an AMD 9070 XT and it works fine. It also ran on a 6800 XT before that one died. A GPU with 16 GB of VRAM helps a lot; I'd say 12 GB is the minimum. Less will limit you in models and speed.
For LLMs, just try it out; smaller models work fine without special hardware, as long as you are the only user. There are tools like Jan or LM Studio that make it easy to run them.
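Both of those can also expose an OpenAI-compatible server on localhost, so once a model is loaded you can script against it; rough sketch below (LM Studio defaults to port 1234, and the model field is basically a placeholder for whatever you have loaded):

```python
import requests

# LM Studio's local server speaks the OpenAI chat-completions format.
URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "local-model",  # placeholder; the server answers with whatever model is loaded
    "messages": [
        {"role": "user", "content": "Give me three uses for a self-hosted LLM."}
    ],
    "temperature": 0.7,
}

resp = requests.post(URL, json=payload, timeout=300)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```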