1U mini PC for AI?
from nagaram@startrek.website to selfhosted@lemmy.world on 30 Aug 16:05
https://startrek.website/post/28335886

My rack is finished for now (because I’m out of money).

Last time I posted I had some jank cables going through the rack and now we’re using patch panels with color coordinated cables!

But as is tradition, I’m thinking about upgrades, and I’m looking at that 1U filler panel. A mini PC with a 5060 Ti 16GB or maybe a 5070 12GB would be pretty sick to move my AI slop generation into my tiny rack.

I’m also thinking about the Pi cluster at the top. Currently that’s running a Kubernetes cluster that I’m trying to learn on. They’re all Pi 4 4GB, so I was going to start replacing them with Pi 5 8/16GB. Would those be better price/performance for mostly coding tasks? Or maybe a Discord bot for shitposting.

Thoughts? MiniPC recs? Wanna bully me for using AI? Please do!

#selfhosted

threaded - newest

Melvin_Ferd@lemmy.world on 30 Aug 16:11 next collapse

The AI hate is overwhelming at times. This is great. What kind of things are you doing with it?

nagaram@startrek.website on 30 Aug 16:47 collapse

Not much. As much as I like LLMs, I don’t trust them for more than rubber duck duty.

Eventually I want to have a Copilot-at-Home setup where I can feed it a notes database and whatever manuals and books I’ve read, so it can draw from that when I ask it questions.

The problem is my best GPU is my gaming GPU, a 5060 Ti, and it’s in a Bazzite gaming PC, so it’s hard to get the AI out of it because of Bazzite’s “No, I won’t let you break your computer” philosophy, which is why I chose it. And my second best GPU is a 3060 12GB, which is really good, but if I made a dedicated AI server, I’d want it to be better than my current server.

mierdabird@lemmy.dbzer0.com on 30 Aug 20:15 collapse

I’m actually right there with you. I have a 3060 12GB, and tbh I think it’s the absolute most cost-effective GPU option for home use right now. You can run 14B models at a very reasonable pace.
Doubling or tripling the cost and power draw just to get 16-24GB doesn’t seem worth it to me. If you really want an AI-optimized box, I think something with the new Ryzen AI Max chips would be the way to go - like an ASUS ROG Flow Z13, a Framework Desktop, or the GMKtec option, whatever it’s called. Apple’s new Mac Minis are also great options. Both Ryzen AI Max and Apple use shared CPU/GPU memory, so you can go up to 96GB+ at much, much lower power draw.

nagaram@startrek.website on 30 Aug 20:43 collapse

A mac is a very funny and objectively correct option

TropicalDingdong@lemmy.world on 30 Aug 16:32 next collapse

This is so pretty 😍🤩💦!!

I’ve been considering a micro rack to support the journey, primarily to house old laptop chassis as I convert them into Proxmox resources.

Any thoughts or comments on your choice of this rack?

nagaram@startrek.website on 30 Aug 16:56 collapse

Not really; not a lot of thought went into rack choice. I wanted something smaller and more powerful than the several OptiPlexes I had.

I also decided I didn’t want storage to happen here anymore, because I am stupid and only knew how to pass through disks for TrueNAS. So I had 4 TrueNAS servers on my network and I hated it.

This was just what I wanted at a price I was good with, at like $120. There’s a 3D-printable version, but I wasn’t interested in that. I do want to 3D print racks, and I want to make my own custom ones for the Pis to save space.

But this set up is way cheaper if you have a printer and some patience.

6nk06@sh.itjust.works on 30 Aug 16:41 next collapse

en.wikipedia.org/wiki/ThinkCentre because I didn’t know it existed.

nagaram@startrek.website on 30 Aug 16:57 next collapse

These are M715q ThinkCentres with a Ryzen 5 Pro 2400GE

nagaram@startrek.website on 30 Aug 16:58 collapse

Oh, and my home office setup uses Tiny-in-One monitors, so I configured these by plugging them into my monitor, which was sick.

I’m a huge fan of this all in one idea that is upgradable.

ZeDoTelhado@lemmy.world on 30 Aug 18:13 next collapse

I have a question about AI usage on this: how do you do it? Every time I see AI usage, some sort of 4090 or 5090 is mentioned, so I am curious what kind of AI usage you can do here

[deleted] on 30 Aug 19:01 next collapse
.
teslasdisciple@lemmy.ca on 30 Aug 19:03 next collapse

I’m running AI on an old 1080 Ti. You can run AI on almost anything, but the less memory you have, the smaller (i.e. dumber) your models will have to be.

As for the “how”, I use Ollama and Open WebUI. It’s pretty easy to set up.
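
For anyone wanting the same stack, here’s a minimal sketch as a compose file. It assumes the official `ollama/ollama` and `ghcr.io/open-webui/open-webui` images; the ports and volume names are just common defaults, so adjust to taste:

```yaml
# Hypothetical docker-compose.yml for an Ollama + Open WebUI stack.
services:
  ollama:
    image: ollama/ollama
    volumes:
      - ollama:/root/.ollama
    # For NVIDIA GPU passthrough (needs nvidia-container-toolkit):
    # deploy:
    #   resources:
    #     reservations:
    #       devices:
    #         - driver: nvidia
    #           count: all
    #           capabilities: [gpu]
  open-webui:
    image: ghcr.io/open-webui/open-webui:main
    ports:
      - "3000:8080"   # web UI on http://localhost:3000
    environment:
      - OLLAMA_BASE_URL=http://ollama:11434
    depends_on:
      - ollama
volumes:
  ollama:
```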

kata1yst@sh.itjust.works on 30 Aug 19:33 next collapse

Similar setup here with a 7900 XTX; works great, and the 20-30B models are honestly pretty good these days. Magistral, Qwen3 Coder, and GPT-OSS are most of what I use

ZeDoTelhado@lemmy.world on 30 Aug 20:46 collapse

I tried a couple of times with Jan AI and local Llama models, but somehow it does not work that well for me.

But at the same time I have a 9070 XT, so, not exactly optimal

chaospatterns@lemmy.world on 30 Aug 19:43 next collapse

Your options are to run smaller models or wait. llama3.2:3b fits in my 1080 Ti’s VRAM and is sufficiently fast. Bigger models will get split between VRAM and RAM and run slower, but it’ll work.
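
The “fits in VRAM” arithmetic can be sketched in a few lines. The 1.2x overhead factor for KV cache and runtime is an assumption, so treat this as a rough estimator only:

```python
# Rough VRAM estimate for a quantized LLM: weight size plus a
# fudge factor for KV cache and runtime overhead (assumed ~20%).
def vram_gb(params_billions: float, bits_per_weight: float, overhead: float = 1.2) -> float:
    weights_gb = params_billions * bits_per_weight / 8  # billions of params -> GB
    return weights_gb * overhead

print(vram_gb(3, 4))   # llama3.2:3b at 4-bit: ~1.8 GB, an easy fit on an 11 GB 1080 Ti
print(vram_gb(14, 4))  # a 14B model at 4-bit: ~8.4 GB, why 12 GB cards are a sweet spot
```

At 4 bits per weight a 70B model lands well past 24 GB, which is why bigger models spill into system RAM and slow down.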

Not all models are gen-AI-style LLMs. I also run speech-to-text models on my GPU for my smart home.

nagaram@startrek.website on 30 Aug 20:24 collapse

With an RTX 3060 12GB, I have been perfectly happy with the quality and speed of the responses. It’s much slower than my 5060 Ti, which I think is the sweet spot for text-based LLM tasks. A larger context window provided by more VRAM or a web-based AI is cool and useful, but I haven’t found the need for that yet in my use case.

As you may have guessed, I can’t fit a 3060 in this rack. That’s in a different server that houses my NAS. I have done AI on my 2018 Epyc server CPU, and it’s just not usable. Even with 109GB of RAM, not usable. Even clustered, I wouldn’t try running anything on these machines; they are for Docker containers and Minecraft servers. Jeff Geerling probably has a video on trying to run AI on a bunch of Raspberry Pis. I just saw his video using Ryzen AI Strix boards, and that was ass compared to my 3060.

But for my use case, I am just asking AI to generate simple scripts based on manuals I feed it, or some sort of writing task. I either have it take my notes on a topic and make an outline that makes sense, which I fill in, or I feed it finished writings and ask for grammatical or tone fixes. That’s fucking it, and it boggles my mind that anyone is doing anything more intensive than that. I am not training anything, and 12GB VRAM is plenty if I wanna feed it like 10-100 pages of context. Would it be better with a 4090? Probably, but for my uses I haven’t noticed a difference in quality between my local LLM and the web-based stuff.

ZeDoTelhado@lemmy.world on 30 Aug 20:44 collapse

So it’s not on this rack. OK, because for a second I was thinking somehow you were able to run AI tasks with some sort of small cluster.

I have a 9070 XT in my system nowadays. I’ve just dabbled in this, but so far I haven’t been that successful. Maybe I’ll read more into it to understand it better.

nagaram@startrek.website on 30 Aug 20:49 collapse

Ollama + Gemma/DeepSeek is a great start. I have only run AI on my AMD 6600 XT, and that wasn’t great. Everything I know says AMD is fine for gaming these days but not really for LLM or gen-AI tasks.

An RTX 3060 12GB is the easiest and best self-hosted option in my opinion. New for <$300, and used even less. However, I was running a GeForce 1660 Ti for a while, and that’s <$100

Diplomjodler3@lemmy.world on 30 Aug 18:45 next collapse

I’m afraid I’m going to have to deduct one style point for the misalignment of the labels on the mini PCs.

nagaram@startrek.website on 30 Aug 20:13 collapse

That’s fair and justified. I have the label maker right now in my hands. I can fix this at any moment and yet I choose not to.

I’m the man feeding orphans to the orphan-crushing machine. I can stop this at any moment.

Diplomjodler3@lemmy.world on 30 Aug 20:34 collapse

The machine must keep running!

tofu@lemmy.nocturnal.garden on 30 Aug 18:46 next collapse

Since you seem to be looking for problems to solve with new hardware, do you have a NAS already? Could be tight in 1U but maybe you can figure something out.

nagaram@startrek.website on 30 Aug 20:27 collapse

I do already have a NAS. It’s in another box in my office.

I was considering replacing the Pis with a JBOD and passing that through to one of my boxes via USB and virtualizing something. I compromised by putting 2TB SATA SSDs in each box to use for database stuff and then backing that up to the spinning rust in the other room.

How do I do that? Good question. I take suggestions.

possiblylinux127@lemmy.zip on 30 Aug 18:52 next collapse

You could also pick up a powerful CPU with lots of memory bandwidth, like a Threadripper

nagaram@startrek.website on 30 Aug 20:28 collapse

I think I’m going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.

Cocodapuf@lemmy.world on 31 Aug 14:22 collapse

I think I’m going to have a harder time fitting a threadripper in my 10 inch rack than I am getting any GPU in there.

Well, you could always use a closed-loop CPU cooler. (Not necessarily that one)

With the radiator hanging out the back, this shouldn’t need much height.

InternetCitizen2@lemmy.world on 30 Aug 20:53 next collapse

NSFW

Colloidal@programming.dev on 30 Aug 21:14 next collapse

You could combine both 1U fillers and install a 2U PC, which would be easier to find.

nagaram@startrek.website on 30 Aug 21:18 collapse

I was thinking about that now that I have Mac Minis on the mind. I might even just set a mac mini on top next to the modem.

brucethemoose@lemmy.world on 30 Aug 21:33 next collapse

If you can swing $2K, get one of the new mini PCs with an AMD Ryzen AI Max 395 and 64GB+ RAM (ideally 128GB).

They’re tiny, low power, and the absolute best way to run the new MoEs like Qwen3 or GLM Air for coding. TBH they would blow a 5060 Ti out of the water, as having a ~100GB VRAM pool is a total game changer.

I would kill for one on an ITX mobo with an x8 slot.

princessnorah@lemmy.blahaj.zone on 30 Aug 22:15 collapse

I think the mainboard from the Framework Desktop meets your requirements: frame.work/…/framework-desktop-mainboard-amd-ryze…

brucethemoose@lemmy.world on 30 Aug 22:40 next collapse

Nah, unfortunately it is only PCIe 4.0 x4. That’s a bit slim for a dGPU, especially in the future :(

MalReynolds@piefed.social on 30 Aug 23:03 collapse

Pretty sure that's an x4 PCIe slot (admittedly PCIe 5 x4, but not many video cards speak PCIe 5). I would totally trade a USB4 port for an x8, but these laptop chips are pretty constrained lanes-wise.

brucethemoose@lemmy.world on 31 Aug 01:15 collapse

It’s PCIe 4.0 :(

but these laptop chips are pretty constrained lanes wise

Indeed. I read Strix Halo only has 16 PCIe 4.0 lanes in addition to its USB4, which is reasonable given this isn’t supposed to be paired with discrete graphics. But I’d happily trade an NVMe slot (still leaving one) for an x8.

One of the links to a CCD could theoretically be wired to a GPU, right? Kinda like how EPYC can switch its IO between infinity fabric for 2P servers, and extra PCIe in 1P configurations. But I doubt we’ll ever see such a product.

MalReynolds@piefed.social on 31 Aug 02:26 collapse

It's PCIe 4.0 :(

Boo! Silly me, thinking DDR5 implied PCIe 5. What a shame.

Feels like they're testing the waters with Halo; hopefully a loud 'water's great, dive in' signal gets through and we get something a bit fitter for desktop use, maybe with more memory (and bandwidth) next gen. Still, gotta love the power usage; it makes for one hell of a NAS / AI inference server (and inference isn't that fussy about PCIe bandwidth; hell, eGPU works fine as long as the model / expert fits in VRAM).

brucethemoose@lemmy.world on 31 Aug 02:43 collapse

Rumor is its successor is 384-bit, and after that their designs are even more modular:

techpowerup.com/…/amds-next-gen-udna-four-die-siz…

Hybrid inference prompt processing actually is pretty sensitive to PCIe bandwidth, unfortunately, but again I don’t think many people intend on hanging an AMD GPU off these Strix Halo boards, lol.

princessnorah@lemmy.blahaj.zone on 01 Sep 06:56 collapse

I don’t know that that is necessarily true. Having a gaming machine that can play any game and dynamically switches between a high-power draw dGPU and a genuinely capable low-power draw iGPU actually sounds amazing. That’s always been possible with every laptop that has a dGPU but their associated iGPU has often been bottom of the barrel bc “why would you use it” for intensive tasks. But a “desktop” build as a lounge room gaming PC, where you can throw whatever at it and it’ll run as quietly as it can, while being able to play AAAs at 4K60, sounds amazing.

brucethemoose@lemmy.world on 01 Sep 12:30 collapse

Eh, actually that’s not what I had in mind:

  • Discrete desktop graphics idle hot. I think my 3090 uses at least 40W doing literally nothing.

  • It’s always better to run big dies slower than small dies at high clockspeeds. In other words, if you underclocked a big desktop GPU to 1/2 its peak clockspeed, it would use less than a fourth of the energy and run basically inaudible… and still be faster than the iGPU. So why keep a big iGPU around?

My use case was multitasking and compute stuff. EG game/use the discrete GPU while your IGP churns away running something. Or combine them in some workloads.

Even the 395 by itself doesn’t make a ton of sense for an HTPC because AMD slaps so much CPU on it. It’s way too expensive and makes it power thirsty. A single CCD (8 cores instead of 16) + the full integrated GPU would be perfect and lower power, but AMD inexplicably does not offer that.

Also, I’ll add that my 3090 is basically inaudible next to a TV… key is to cap its clocks, and the fans barely even spin up.

princessnorah@lemmy.blahaj.zone on 01 Sep 12:43 collapse

That’s all valid for your usecase, but you were saying that you didn’t think many people would use it that way at all and that’s what I was saying I didn’t agree with. As well, a HTPC is kind of a different use case altogether to a lounge room gaming computer. There’s some overlap for sure, but if you want zero compromise gaming then you’re going to want all that CPU.

brucethemoose@lemmy.world on 01 Sep 12:52 collapse

Eh, but you’d be way better off with an X3D CPU in that scenario, which is both significantly faster in games, about as fast outside them (unless you’re dram bandwidth limited) and more power efficient (because they clock relatively low).

You’re right about the 395 being a fine HTPC machine by itself.

But I’m also saying even an older 7900, 4090 or whatever would be way lower power at the same performance as the 395’s IGP, and whisper quiet in comparison. Even if cost is no object. And if that’s the case, why keep a big IGP at all? It just doesn’t make sense to pair them without some weirdly specific use case that can use both at once, or that a discrete GPU literally can’t do because it doesn’t have enough VRAM like the 395 does.

princessnorah@lemmy.blahaj.zone on 01 Sep 13:25 collapse

Correct me if I’m wrong here, but is the 395 not leagues ahead of something like a 4090 when it comes to performance per watt? Here’s a comparison graph of a 4090 against the Radeon 8060S, which is the 395’s iGPU:

<img alt="" src="https://lemmy.blahaj.zone/pictrs/image/c7abb54b-ceff-410d-bd74-b51a8726d889.webp">
Source.

Now that’s apparently running at the 395’s default TDP of 55W so that includes the CPU power. It’s also clear that a 4090 can trounce it on sheer performance when needed. But if we take a look at this next graph:

<img alt="" src="https://lemmy.blahaj.zone/pictrs/image/6f6a4b2f-bebc-4bd7-bbda-2a793cbab2ff.webp">
Source.

This shows that a 4090 has a third of the performance while still running at 130W, more than twice the TDP of the entire 395 APU.

Edit: This was buried in the comments under that second graph, but here are the points scored per watt on that benchmark: 130W = 66 / 180W = 85 / 220W = 92 / 270W = 84 / 330W = 74 / 420W = 59 / 460W = 55. This clearly shows the sweet spot for a 4090 is 220W.
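
The quoted figures make the peak easy to verify; a couple of lines over the same numbers:

```python
# Points-per-watt figures quoted above for the 4090 benchmark run.
ppw = {130: 66, 180: 85, 220: 92, 270: 84, 330: 74, 420: 59, 460: 55}
sweet_spot = max(ppw, key=ppw.get)  # TDP with the best efficiency
print(sweet_spot)  # 220
```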

brucethemoose@lemmy.world on 01 Sep 13:45 collapse

Oh wow, that’s awesome! I didn’t know folks ran TDP tests like this, just that my old 3090 seems to have a minimum sweet spot around that same ~200W based on my own testing; I figured the 4000 or 5000 series might go lower. Apparently not, at least for the big die.

I also figured the 395 would draw more than 55W! That’s also awesome! I suspect newer, smaller GPUs like the 9000 or 5000 series still make the value proposition questionable, but still you make an excellent point.

And for reference, I just checked, and my dGPU hovers around 30W idle with no display connected.

princessnorah@lemmy.blahaj.zone on 01 Sep 14:19 collapse

You can boost the 395 up to 120W, which might be where Framework is pushing it too, but those benchmarks are labelled 55W and that’s what AMD says is the default clock without adjustment. I’d love to see how the benchmarks compare at that higher boost but I’d imagine it’s diminishing returns similar to most GPUs. I think the benefit to using it in a lounge gaming PC would be the super low power draw, but you would need to figure out a display MUX switch and I don’t think that’s simple with desktop cards. Maybe something with a 5090 mobile would be the go at that point, but I have no idea how that compares to the 395 and whether it’s worth it.

brucethemoose@lemmy.world on 01 Sep 14:42 collapse

Mobile 5090 would be an underclocked, binned desktop 5080, AFAIK:

en.wikipedia.org/…/List_of_Nvidia_graphics_proces…

In KCD2 (a fantastic CryEngine game, a great benchmark IMO) at QHD, the APU is a hair less than half as fast: for instance, 39 FPS vs 84 FPS for the mobile 5090:

notebookcheck.net/Nvidia-GeForce-RTX-5090-Laptop-…

notebookcheck.net/AMD-Radeon-8060S-Benchmarks-and…

Synthetic benchmarks between the two

But these are both presumably running at high TDP (150W for the 5090). Also, the mobile 5090 is catastrophically overpriced and inevitably tied to a weaker CPU, whereas the APU is a monster of a CPU. So make of that what you will.

TexasDrunk@lemmy.world on 30 Aug 22:08 next collapse

I didn’t even know these sorts of mini racks existed. Now I’m going to have to get one for all my half-sized preamps, if they’ll fit. That would solve like half the problems with my studio room and might help bring back some of my spark for making music.

I have no recs. Just want to say I’m so excited to see this. I can probably build an audio patch panel.

GirthBrooks@lemmy.world on 30 Aug 23:32 next collapse

<img alt="" src="https://lemmy.world/pictrs/image/4042cc49-a810-4289-a34e-6346e5a1b0ee.jpeg">

Looking good! Funny I happen across this post when I’m working on mine as well. As I type this I’m playing with a little 1.5” transparent OLED that will poke out of the rack beside each pi, scrolling various info (cpu load/temp, IP, LAN traffic, node role, etc)

ripcord@lemmy.world on 31 Aug 04:52 collapse

What OLED specifically and what will you be using to drive it?

GirthBrooks@lemmy.world on 31 Aug 18:08 collapse

Waveshare 1.51” transparent OLED. It comes with a driver board, ribbon & jumpers. If you type it into Amazon it’s the only one that pops up; just make sure it says transparent. It plugs into the GPIO of my Pi 5s. The Amazon listing has a user guide you can download, so make sure to grab that; I was having trouble figuring it out until I saw it. It runs off a Python script, but once I get it behaving like I want, I’ll add it to systemd so it starts on boot.

Imma dummy so I used ChatGPT for most of it, full …ahem… transparency. 🤷🏻‍♂️

I’m modeling a little bracket in spaceclaim today & will probably print it in transparent PETG. I’ll post a pic when I’m done!

thejml@sh.itjust.works on 30 Aug 23:34 next collapse

Honestly, if you are delving into Kubernetes, just add some more of those 1L PCs in there. I tend to find them on eBay cheaper than Pis. Last year I snagged 4x 1L Dells with 16GB RAM for $250 shipped. I swapped some RAM around, added some new SSDs, and now have 3x Kube masters, 3x Kube worker nodes, and a few VMs running on a Proxmox cluster across 3 of the 1Ls with 32GB and a 512GB SSD each, and it’s been great. The other one became my wife’s new desktop.

Big plus, there are so many more x86_64 containers out there compared to Pi compatible ARM ones.

umbrella@lemmy.ml on 31 Aug 03:26 next collapse

for your use case, i’d get an external GPU to plug into one of those juicy thinkcentres right there. bonus for the modularity of having an actual GPU instead of relying on whatever crappy laptop GPU mini PC manufacturers put in there.

you could probably virtualize a sick gaming setup with this rig too. stream it to your phone/laptop.

nice setup btw.

hendrik@palaver.p3x.de on 31 Aug 07:09 next collapse

Well, I always advocate for using the stuff you have. I don't think a Discord bot needs four new RasPi 5s; that's likely to run on a single RasPi 3. And as long as they're sitting idle, it doesn't really matter which model number they have... So go ahead and put something on your hardware, and buy new gear once you've maxed out your current setup.

I'm not educated on Bazzite. Maybe tools like Distrobox or other container solutions can help run AI workloads on the gaming rig. It's likely easier to run a dedicated AI server, but I started learning about quantization and tested some models on my main computer with the help of Ollama, KoboldCpp and some random Docker/Podman containers. I'm not saying this is the preferable solution, but it's definitely enough to get started with AI. And you can always connect the computers within your local network, write some server applications and have them hook into Ollama's API; it doesn't really matter whether that runs on your gaming PC or a server (as long as the computer in question is turned on...)

nagaram@startrek.website on 31 Aug 10:56 next collapse

Ollama and all that runs on it; it’s just the firewall rules and opening it up to my network that are the issue.

I cannot get ufw, iptables, or anything like that running on it, so I usually just SSH into the PC and do a CLI-only interaction. Which is mostly fine.

I want to use Open WebUI so I can feed it notes and books as context, but I need the API, which isn’t open on my network.
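
A sketch of what might work here, assuming Bazzite behaves like its Fedora Atomic base (firewalld instead of ufw) and Ollama runs as the usual systemd service; untested on Bazzite itself:

```shell
# Make Ollama listen on the LAN instead of just localhost:
sudo systemctl edit ollama.service
#   [Service]
#   Environment="OLLAMA_HOST=0.0.0.0:11434"
sudo systemctl restart ollama

# Open the port in firewalld:
sudo firewall-cmd --permanent --add-port=11434/tcp
sudo firewall-cmd --reload
```

After that, Open WebUI on another box should be able to point its Ollama base URL at `http://<gaming-pc-ip>:11434`.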

hendrik@palaver.p3x.de on 01 Sep 13:36 collapse

Thanks for the info. Some day I'll try the shiny modern distros and learn the little peculiarities. I use a weird mix of Debian, NixOS and LMDE and it's relatively straightforward to add firewall rules to those, both dynamically to nftables and to the persistent config... And I believe Debian didn't even come with firewalling out of the box... But I understand Debian might not be the best choice for gaming and there is for example some extra work involved to get the latest Nvidia drivers. Neither is it an atomic distro.

nagaram@startrek.website on 01 Sep 22:02 collapse

Honestly if you’re not gaming or playing with new hardware, there is absolutely no point.

I’ve considered swapping this computer over to Fedora for a hot minute, but it really is a gaming PC and I should stop trying to break it.

Flax_vert@feddit.uk on 01 Sep 13:00 collapse

You could probably run several discord bots on a Raspberry Pi 3, provided they aren’t public and popular

Korhaka@sopuli.xyz on 31 Aug 15:06 next collapse

Ohh nice, I want it. I don’t really know what I would use all of it for, but I want it (but don’t want to pay for it).

Currently I’ve been thinking of getting an N150 mini PC and setting up Proxmox and a few VMs: at the very least Pi-hole, a place to dump some backups, and a web server for a few projects.

Flax_vert@feddit.uk on 01 Sep 12:59 next collapse

How much did this cost?

nagaram@startrek.website on 01 Sep 21:35 collapse

The Lenovo ThinkCentre M715qs were $400 total after upgrades. I fortunately had 3 32GB kits of RAM from my work’s e-waste bin, but if I had to add those it would probably be $550 ish.

  • The rack was $120 from 52Pi

  • I bought 2 extra 10in shelves for $25 each

  • The Pi cluster rack was also $50 (shit, I thought it was $20. Not worth)

  • Patch panel was $20

  • There’s a UPS that was $80

  • And the switch was $80

So in total I spent $800 on this setup.

To fully replicate it from scratch, you would need to spend $160 on Raspberry Pis and probably $20 on cables.

So $1000 theoretically

Flax_vert@feddit.uk on 01 Sep 13:01 next collapse

A single raspberry pi 5 can host an entire website

nagaram@startrek.website on 01 Sep 22:00 collapse

True, but I have an addiction and that’s buying stuff to cope with all the drawbacks of late stage capitalism.

I am but a consumer who must be given reasons to consume.

Flax_vert@feddit.uk on 01 Sep 23:19 collapse

Scatter them about the place. Make appliances smart. Would be more interesting than a cluster.

lepinkainen@lemmy.world on 01 Sep 15:22 next collapse

This will get downvoted to oblivion because this is Lemmy:

Get a Mac Mini. Any M-series model with 32GB of memory will run local models at decent speeds and will be cheaper than just a 5xxx series GPU

And it’ll fit your cool rack 😀

muppeth@scribe.disroot.org on 01 Sep 19:16 collapse

Wow! Very cool rack you got there. I too started using mini PCs for local test servers or general home servers, but unlike yours, mine are just dumped behind the screen on my desk (3 in total). For LLM stuff, atm I use a 16GB Radeon, but that’s connected to my desktop. In the future I would love to build a proper rack like yours and perhaps move the GPU to a dedicated mini PC.

As for the upgrades, like others stated already, I would just go for more PCs rather than RPis.

nagaram@startrek.website on 01 Sep 20:49 collapse

The Pis were honestly because I had them.

I think I’d rather use them for something else, like robotics or a BirdNET-Pi.

But the Pi rack was like $20 and hilarious.

The objectively correct answer for more compute is more mini PCs, though. And I’m really thinking about the Mac Mini option for AI.

muppeth@scribe.disroot.org on 02 Sep 15:36 collapse

Is the Mac mini really that good? Running 12-14B models on my Radeon RX 7600 XT is ok’ish, but I do “feel it”, while running 7-8B models sometimes just doesn’t feel like enough. I wonder where the Mac mini lands here.

nagaram@startrek.website on 02 Sep 16:01 next collapse

From what I understand it’s not as fast as a consumer Nvidia card, but close.

And you can have much more “VRAM” because they use unified memory. I think the max is 75% of total system memory goes to the GPU. So a top-spec Mac mini M4 Pro with 48GB of RAM would have 32GB dedicated to GPU/NPU tasks for $2000.

Compare that to JUST a 5090 32GB for $2000 MSRP, and it’s pretty compelling.

$200 more and it’s the 64GB model, with 2x 4090s’ worth of VRAM.

It’s certainly better than the AMD AI experience, and it’s the best price for getting into AI stuff, so say nerds with more money and experience than me.
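
Quick arithmetic on that unified-memory split (the exact GPU cap varies by macOS version and is tunable, so the fractions here are assumptions, not Apple's spec):

```python
# GPU-usable share of 48 GB unified memory at two commonly quoted caps.
for frac in (0.67, 0.75):
    print(f"{frac:.0%} of 48 GB -> {48 * frac:.0f} GB")
# 67% of 48 GB -> 32 GB   (matches the 32 GB figure above)
# 75% of 48 GB -> 36 GB
```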
