Guide to Self Hosting LLMs Faster/Better than Ollama
from brucethemoose@lemmy.world to selfhosted@lemmy.world on 14 Oct 04:29
https://lemmy.world/post/20828515

I see a lot of talk of Ollama here, which I personally don’t like because:

So, here’s a quick guide to get away from Ollama.

Now for actually installing the runtime. There are a great number of inference engines supporting different quantizations, forgive the Reddit link but see: old.reddit.com/…/a_large_table_of_inference_engin…

As far as I am concerned, 3 matter to “home” hosters on consumer GPUs:

Now, there are also reasons I don’t like llama.cpp, but one of the big ones is that sometimes its model implementations have… quality-degrading issues, or odd bugs. Hence I would generally recommend TabbyAPI if you have enough VRAM to avoid offloading to CPU, and can figure out how to set it up. So:

This can go wrong; if anyone gets stuck, I can help with that.
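For reference, a minimal setup sketch (the repo URL is right, but the script and sample-config names are from memory and may drift between versions, so check TabbyAPI’s README):

# Minimal TabbyAPI setup sketch -- assumes git, a recent Python, and working GPU drivers.
git clone https://github.com/theroyallab/tabbyAPI
cd tabbyAPI
cp config_sample.yml config.yml   # set model_name, max_seq_len, and cache_mode in here
./start.sh                        # first run creates a venv and pulls exllamav2 + dependencies

Once it’s up, it serves an OpenAI-compatible API (port 5000 by default, if I remember right) that SillyTavern, Open WebUI, and friends can point at.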

Again, browse Hugging Face and pick an exl2 quantization that will cleanly fill your VRAM pool plus the amount of context you want to specify in TabbyAPI. Many quantizers such as bartowski will list how much space they take up, but you can also just look at the available file size.
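As a concrete sketch of that (the quantizer/repo name below is a placeholder, not a real upload; substitute whatever exl2 repo you actually pick):

# Hypothetical example: grab an exl2 quant and check how much VRAM the weights alone will need.
pip install -U "huggingface_hub[cli]"
huggingface-cli download SomeQuantizer/SomeModel-exl2-4.0bpw --local-dir ./models/somemodel-4.0bpw
du -sh ./models/somemodel-4.0bpw   # weight size roughly equals VRAM used; leave a few GB spare for context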

Sorry for the wall of text! I can keep going, discussing kobold.cpp/llama.cpp, Aphrodite, exotic quantization and other niches like that if anyone is interested.

#selfhosted


AliasAKA@lemmy.world on 14 Oct 04:54 next collapse

Bookmarked and will come back to this. One thing that may be of interest to add is for AMD cards with 20GB of VRAM. I’d suppose that it would be Qwen 2.5 32B with maybe a less strict quant or something.

Also, it may be interesting to look at the AllenAI molmo related models. I’m kind of planning to do this myself but haven’t had time as yet.

brucethemoose@lemmy.world on 14 Oct 05:13 collapse

Yep. 20GB is basically 24GB, though it’s too tight for 70B models.

One quirk for 7900 owners is that installing flash attention for long-context usage can be a pain. Apparently it is doable now (I need to dig up the link), but it might just be easier to use the kobold.cpp ROCm fork with its native flash attention.

As for vision models, that is a whole different can of worms. Exllama does not support this, so you’d need a framework that does.

If you are looking for niche models, check out MiniG (which is a continued pretrain of the already very excellent GLM4-9B): huggingface.co/bartowski/miniG-GGUF

Llama.cpp support is recent, though I’m not 100% sure it’s completely fixed. It should work in Aphrodite as well.

kitnaht@lemmy.world on 14 Oct 05:03 next collapse

If your “FIRST STEP” is to choose an OS: Fuck that.

You should never have to change your OS just to use this crap. It’s all written in Python. It should work on every OS available. Your first step is installing the prerequisites.

If you’re using something like Continue for local coding tasks, CodeQwen is awesome, and you’ll generally want a context window of 120k or so, because for coding you want all of the code context. Otherwise the LLM starts spitting out repetitious stuff, or can’t ingest all of your context and rewrites stuff that’s already there.

gravitas_deficiency@sh.itjust.works on 14 Oct 05:07 next collapse

Wtf are you talking about. PCIe passthrough exists.

kitnaht@lemmy.world on 14 Oct 05:09 next collapse

Why would you even bother trying to run this all through a VM when you can just run it directly? If you’re to the point of using VMs, you don’t need this tutorial anyways.

Are you seriously telling me you’re jumping through all the hoops to spin up a VM on Linux, and then doing all the configuration for GPU passthrough, because you can’t just figure out how to run it locally?

gravitas_deficiency@sh.itjust.works on 14 Oct 05:14 collapse

Bro this is a community for sharing knowledge and increasing the technical aptitude of fellow users by doing said sharing. Maybe instead of shitting on a pretty solid digest of the fundamentals of setting up something like this, try adding to the body of knowledge instead.

brucethemoose@lemmy.world on 14 Oct 05:20 collapse

I would not recommend that for performance reasons, AFAIK.

Windows is fine, I should make that more clear.

gravitas_deficiency@sh.itjust.works on 14 Oct 05:24 collapse

Huh, really? Is there that much of a perf hit using passthrough? I’d have assumed that the bottleneck isn’t actually the PCIe, so much as it is the beefiness of the GPU crunching the model.

brucethemoose@lemmy.world on 14 Oct 05:29 collapse

I have not tested WSL or VMs in Windows in a while, but my impression is that “it depends”, and you should use the native Windows version unless you are having some major installation issues.

brucethemoose@lemmy.world on 14 Oct 05:35 next collapse

CodeQwen 1.5 is pretty old at this point, afaik made obsolete by their latest release.

The Qwen models (at least 2.5) are really only good to like 32K, which is still a ton of context. But I’ve been testing Qwen 32B at 64K-90K, and even that larger model is… not great.

32K is generally enough to get the gist of whatever you’re trying to fill in.

sturlabragason@lemmy.world on 14 Oct 05:40 next collapse

Choose OS is very relevant when doing cloud stuff.

brucethemoose@lemmy.world on 14 Oct 06:17 collapse

Or setting up a home server, which I figured some here would do.

L_Acacia@lemmy.one on 14 Oct 08:12 collapse

llama.cpp works on Windows too (or any OS, for that matter), though Linux will give you better performance.

morrowind@lemmy.ml on 14 Oct 05:10 next collapse

Honestly, I’m just gonna stick to llamafile. I really don’t want to mess around with Python; it also causes way more trouble than I anticipate.

brucethemoose@lemmy.world on 14 Oct 05:24 collapse

Llamafile is fine, but it still leaves a lot of performance on the table.

You can set up kobold.cpp with Q8 flash attention without ever having to install PyTorch, which is the real headache. It does have a little Python launch script, but it’s super minimal.

You can use the native llama.cpp server for absolutely zero Python usage.
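For example, the kobold.cpp launch mentioned above looks something like this (flag names are from memory of recent builds, so check --help on your version):

# kobold.cpp: one launch script, no PyTorch. The Q8 KV cache requires flash attention to be enabled.
python koboldcpp.py --model ./SomeModel.Q4_K_M.gguf \
  --usecublas --gpulayers 99 \
  --contextsize 16384 \
  --flashattention --quantkv 1   # 0 = f16, 1 = q8, 2 = q4 KV cache, if I recall the mapping right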

Grimy@lemmy.world on 14 Oct 05:19 next collapse

vLLM can only run on Linux, but it’s my personal favorite because of the speed gain when doing batch inference.

brucethemoose@lemmy.world on 14 Oct 05:27 collapse

Aphrodite is a fork of vllm. You should check it out!

If you are looking for raw batched speed, especially with some redundant context, I would actually recommend sglang instead. Check out its experimental flags too.

sturlabragason@lemmy.world on 14 Oct 05:41 next collapse

Frontend-wise, LibreChat is pretty cool.

kwa@lemmy.zip on 14 Oct 05:59 next collapse

Thanks!

For people on macOS, is there a better alternative than croco.cpp?

brucethemoose@lemmy.world on 14 Oct 06:14 collapse

If you download the source, you should be able to build it for Metal? Croco.cpp is just a fork of kobold.cpp.

I think LM Studio added MLX support, but otherwise you are stuck with anything llama.cpp-based. I’d probably download llama.cpp directly and use its llama-server first.
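Roughly this, if you go that route (Metal should be picked up automatically when building on Apple Silicon, as far as I know):

# Build llama.cpp from source on a Mac and serve an OpenAI-compatible API with llama-server.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release -j
./build/bin/llama-server -m ~/models/some-model.gguf -c 8192 -ngl 99 --port 8080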

kwa@lemmy.zip on 14 Oct 15:57 collapse

I tried llama.cpp with llama-server and Qwen2.5 Coder 1.5B. Higher-parameter models just output garbage, and I can see an OutOfMemory error in the logs. When trying the 1.5B model, I have an issue where the model will just stop outputting the answer; it will stop mid-sentence or in the middle of a class. Is it an issue with my hardware not being performant enough, or is it something I can tweak with some parameters?

brucethemoose@lemmy.world on 14 Oct 16:43 collapse

You can only allocate so much to Metal backends, and if you are on (say) an 8GB Mac, there won’t be much RAM left for the LLM itself.

But still, use a tighter quantization (like an IQ4_XS or IQ3_M) of Qwen Coder 7B, and close as many background programs as you can. It should be small enough to fit.

kwa@lemmy.zip on 14 Oct 16:51 collapse

I have a MacBook Pro M1 Pro with 16GB RAM. I closed a lot of things and managed to have 10GB free, but that seems to still not be enough to run the 7B model. For the answer being truncated, it seems to be a frontend issue. I tried open-webui connected to llama-server and it seems to be working great, thank you!

brucethemoose@lemmy.world on 14 Oct 17:02 next collapse

Try reducing the context size, and make sure Q8/Q8 flash attention is enabled with flags.

I’d link a specific GGUF quantization, but huggingface seems to be down for me!
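Concretely, something along these lines, assuming the llama-server flags haven’t changed recently (the GGUF name matches the quant linked in the next comment):

# Smaller context + flash attention + Q8/Q8 KV cache, to leave room in 16GB of unified memory.
./llama-server -m Qwen2.5-Coder-7B-Instruct-IQ4_XS.gguf \
  -c 4096 -ngl 99 -fa \
  -ctk q8_0 -ctv q8_0   # quantize the K and V caches to Q8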

brucethemoose@lemmy.world on 14 Oct 17:09 collapse

Try this one at least, it should still leave plenty of RAM free: huggingface.co/…/Qwen2.5-Coder-7B-Instruct-IQ4_XS…

kwa@lemmy.zip on 14 Oct 18:18 collapse

Indeed, this model is working on my machine. Can you explain the difference with the one I tried before?

brucethemoose@lemmy.world on 14 Oct 18:31 collapse

It’s probably much smaller than whatever other GGUF you got, aka more tightly quantized.

Look at the file size; that’s basically how much RAM it takes.

kwa@lemmy.zip on 15 Oct 12:47 collapse

Well, this is what I don’t quite understand: I was trying to run the Q3_K_M, which is 3.81GB, and it was failing with an OutOfMemory error. The one you provided, IQ4_XS, is 4.22GB and is working fine.

brucethemoose@lemmy.world on 15 Oct 14:16 collapse

Shrug, did you grab an older Qwen GGUF? The series goes pretty far back, and it’s possible you grabbed one that doesn’t support GQA or something like that.

Doesn’t really matter though, as long as it works!

banghida@lemm.ee on 14 Oct 06:00 next collapse

Does any of this work on Intel GPUs?

brucethemoose@lemmy.world on 14 Oct 06:12 collapse

Nope, Intel is more of a pain. Your best bet is the llama.cpp server’s SYCL backend (or kobold.cpp if they have a build for that).

They have an OpenVINO server, but it’s not for the faint of heart.

banghida@lemm.ee on 14 Oct 07:00 collapse

Just looking down the road, I am thinking of buying a next-gen Intel GPU.

brucethemoose@lemmy.world on 14 Oct 12:13 collapse

Support will get much better if they sell something decent (read: >32GB and cheap), but if they don’t, TBH I would not expect great support. Arc is in kind of a make-or-break moment, and right now Intel is breaking it hard by delaying Battlemage so much. I too am an Arc hopeful, but it really feels like Intel is going to cancel it.

sleep_deprived@lemmy.world on 14 Oct 06:03 next collapse

I’d be interested in setting up the highest quality models to run locally, and I don’t have the budget for a GPU with anywhere near enough VRAM, but my main server PC has a 7900x and I could afford to upgrade its RAM - is it possible, and if so how difficult, to get this stuff running on CPU? Inference speed isn’t a sticking point as long as it’s not unusably slow, but I do have access to an OpenAI subscription so there just wouldn’t be much point with lower quality models except as a toy.

brucethemoose@lemmy.world on 14 Oct 06:10 collapse

CPU inference is, unfortunately, slow, even on my 7800X3D.

The one that might be interesting is DeepSeek Coder V2 Lite, as it’s a very fast MoE model. IIRC Microsoft also released a Phi MoE that’s good for CPU.

Keep an eye out for upcoming bitnet models.

Don’t bother upgrading RAM, though. You will be bandwidth-limited anyway, and it doesn’t make a huge difference.
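Rough back-of-the-envelope, assuming typical dual-channel DDR5 numbers:

# Generation speed is roughly memory bandwidth / bytes read per token (~ the active weight size).
# Dual-channel DDR5-6000 is ~96 GB/s; a Q4 70B model is ~40 GB of weights.
echo "scale=1; 96 / 40" | bc   # ~2.4 tokens/s, and adding more or bigger DIMMs won't change this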

vividspecter@lemm.ee on 14 Oct 08:01 next collapse

Do you have any recommendations for a Perplexity.ai type setup? It’s one of the few recent innovations I’ve found useful. I’ve heard of Perplexica and a few others, but not sure what is the best approach.

projectmoon@lemm.ee on 14 Oct 11:14 next collapse

Perplexica works. It can use Ollama and custom OpenAI providers.

LiveLM@lemmy.zip on 14 Oct 16:02 collapse

What does Perplexity do differently than other AI solutions?
I’ve heard about it but haven’t tried it yet.

Caboose12000@lemmy.world on 14 Oct 19:36 collapse

I hadn’t heard about it before today, but I tried asking it what separates it from other LLMs. Apparently the answer is just that it does a Google search and shows you the source it’s summarizing, which, if true, is not very compelling; and if that’s a hallucination or missing details, then it’s at least not very compelling as a search replacement.

shaserlark@sh.itjust.works on 14 Oct 08:43 next collapse

I run a Mac Mini as a home server because it’s great for hardware transcoding, and I was wondering if I could host an LLM locally. I work with Python, so that wouldn’t be an issue, but I have no idea how to do CUDA or work on low-level code. Is there anything I need to consider? I would probably start with a really small model.

thirdBreakfast@lemmy.world on 14 Oct 09:37 collapse

If it’s an M1, you def can and it will work great. With Ollama.

shaserlark@sh.itjust.works on 14 Oct 10:04 collapse

Yeah, it’s an M1 with 16GB. Sounds awesome, I’ll try it; thanks a lot for the guide, it’s super helpful. I just got the Mac Mini for Jellyfin, but this is an unexpected use case where the server comes in very handy.

brucethemoose@lemmy.world on 14 Oct 12:25 collapse

For that you probably want the llama.cpp server and a Qwen 2.5 14B IQ3 quantization.

16GB is kinda tight though, especially if you’re running other stuff in the background.

eskuero@lemmy.fromshado.ws on 14 Oct 09:26 next collapse

Ollama has had an issue open for a while about the Vulkan backend, but sadly it doesn’t seem to be going anywhere.

brucethemoose@lemmy.world on 14 Oct 12:11 collapse

That’s because llama.cpp’s Vulkan backend is kinda slow and funky, unfortunately.

eskuero@lemmy.fromshado.ws on 14 Oct 16:10 collapse

Better than nothing. I run through Vulkan on LM Studio because ROCm on my RX 5600 XT is a heavy pain.

brucethemoose@lemmy.world on 14 Oct 16:49 collapse

The best hope for you is ZLUDA’s revival. It’s explicitly targeting LLM runtimes now, and RDNA1 (aka your 5600XT) is the oldest supported generation.

www.phoronix.com/news/ZLUDA-Third-Life

TBH you should consider using free llama/qwen APIs as well, when appropriate.

thirdBreakfast@lemmy.world on 14 Oct 09:43 next collapse

Guide to Self Hosting LLMs with Ollama.

  • Download and run Ollama
  • Open a terminal, type ollama run llama3.2
projectmoon@lemm.ee on 14 Oct 11:07 next collapse

Super useful guide. However, after playing around with TabbyAPI, the responses from models quickly become gibberish, usually halfway through or towards the end. I’m using exl2 models off of Hugging Face, with Q4, Q6, and FP16 cache. Any tips? Also, how do I control context length on a per-model basis? max_seq_len in config.json?

brucethemoose@lemmy.world on 14 Oct 12:15 collapse

What model, specifically? What other settings?

Context length is in the TabbyAPI config, yes.

projectmoon@lemm.ee on 14 Oct 12:41 collapse

I tried it with both Qwen 14b and Llama 3.1. Both were exl2 quants produced by bartowski.

brucethemoose@lemmy.world on 14 Oct 12:52 collapse

What context length? Neither of them likes to go over 32K.

And what kind of gibberish? If they are repeating, you need to change sampling settings. Incoherence… also probably sampling settings, lol.

projectmoon@lemm.ee on 14 Oct 13:07 collapse

Context was set to anywhere between 8k and 16k. It was responding in English properly, and then about halfway to 3/4s of the way through a response, it would start outputting tokens in either a foreign language (Russian/Chinese in the case of Qwen 2.5) or things that don’t make sense (random code snippets, improperly formatted text). Sometimes the text was repeating as well. But I thought that might have been a template problem, because it seemed to be answering the question twice.

Otherwise, all settings are the defaults.

brucethemoose@lemmy.world on 14 Oct 13:22 collapse

Hmm, what’s the frontend?

And the defaults can sometimes be really bad lol. Qwen absolutely outputs Chinese for me with a high temperature.

projectmoon@lemm.ee on 14 Oct 13:27 collapse

Open WebUI connected to TabbyAPI’s OpenAI endpoint. I will try reducing the temperature and seeing if that makes it more accurate.

fmstrat@lemmy.nowsci.com on 14 Oct 11:30 next collapse

Do you have any recommendation for integration into VSCode, specifically with something like Continue?

brucethemoose@lemmy.world on 14 Oct 12:21 collapse

I am “between” VS Code extensions TBH, but any model that supports FIM (like Qwen or Codestral) should work fine.
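If it helps, pointing Continue at a local OpenAI-compatible server looks roughly like this. The key names are from memory of Continue’s config.json schema, and the model name and port are placeholders, so double-check against their docs:

# Sketch: wire Continue to a local OpenAI-compatible endpoint (TabbyAPI, llama-server, kobold.cpp...).
# This writes a fresh config for illustration; merge by hand if you already have ~/.continue/config.json.
cat > ~/.continue/config.json <<'EOF'
{
  "models": [
    { "title": "Local Qwen", "provider": "openai", "model": "Qwen2.5-Coder-14B",
      "apiBase": "http://localhost:5000/v1", "apiKey": "dummy" }
  ],
  "tabAutocompleteModel": {
    "title": "Local FIM", "provider": "openai", "model": "Qwen2.5-Coder-14B",
    "apiBase": "http://localhost:5000/v1", "apiKey": "dummy"
  }
}
EOF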

Konraddo@lemmy.world on 14 Oct 12:34 next collapse

I know this is not the theme of this post, but I wonder if there’s an LLM that doesn’t hallucinate when asked to summarize information from a group of documents. I tried GPT4All for simple queries like finding out which documents mentioned a certain phrase. It often gave me filenames that didn’t actually exist. Hallucinating contents is one thing, but making up data sources is just horrible.

brucethemoose@lemmy.world on 14 Oct 12:40 collapse

That’s absolutely on topic, check out huggingface.co/…/Hallucination-evaluation-leaderb…

Command R is built for this if you have the VRAM to swing it; otherwise GLM4 (or MiniG, as linked below) is great. The latter, unfortunately, doesn’t work with TabbyAPI, so you have to use something like kobold.cpp.

You also have to use very low (basically zero) temperature and be careful with other sampling settings, and watch your context length.

There are more sophisticated RAG setups that some of these UIs (like Open WebUI) integrate, and sometimes you’ll need to host an embeddings model alongside the LLM for that to work.
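For example, forcing near-greedy sampling through the OpenAI-compatible endpoint (the port and model name are just illustrative; kobold.cpp and TabbyAPI both expose /v1/chat/completions):

# Near-deterministic sampling for summarization / RAG-style queries.
curl http://localhost:5001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "GLM4-9B",
    "temperature": 0,
    "top_p": 1,
    "messages": [{"role": "user", "content": "Summarize these notes in five bullet points: ..."}]
  }'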

DarkThoughts@fedia.io on 14 Oct 12:36 next collapse

I just can't get ROCm / GPU generation to work on Bazzite, like at all. It seems completely cursed.
I tried koboldcpp through a Fedora distrobox and it didn't even show any hardware options. Tried an Arch AUR package through distrobox, and the ROCm option is there but ends with a CUDA error. lol
The Vulkan option works but seems to still use the CPU more than the GPU, and is consequently still kinda slow, and I struggle to find a good model for my 8GB card. Fimbulvetr-10.7B-v1-Q5_K_M, for example, was still too slow to be practical.

Tried LM Studio directly in Bazzite and it also just uses the CPU. It's also very obtuse about how to connect to it with SillyTavern, as it asks for an API key? I managed it once in the past, but I can't remember how, and it also ended up stopping generating anything after a few replies.

Krita's diffusion also only runs on the CPU, which is abysmally slow, but I'm not sure if they expect Krita to be built directly on the system for ROCm support to work.

I'm not even trying to get SDXL or something to run at this point, since that seems to be still complicated enough even on a regular distro.

brucethemoose@lemmy.world on 14 Oct 12:46 collapse

I don’t like Fedora because its CUDA support is third-party, and AFAIK they don’t natively package ROCm. And it’s too complex to use through something like distrobox… I don’t want to tell you to switch OSes, but you’d have a much better time with CachyOS, which is also optimized for Steam gaming.

Alternatively, you could try installing ROCm images through Docker, but you have to make sure GPU passthrough is working.

It also depends on your GPU. If you are on an RX 580, you can basically kiss ROCm support goodbye, and might want to investigate mlc-llm’s Vulkan backend.

Fimbulvetr is ancient now; your go-to models are Qwen 2.5 14B at short context, or Llama 3.1 8B/Qwen 2.5 7B at longer context.
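If you do try the Docker route mentioned above, the GPU passthrough part is usually just these device flags (the image name is only an example):

# Pass the AMD GPU into a ROCm container; the group/seccomp flags are the usual ROCm Docker boilerplate.
docker run -it --rm \
  --device=/dev/kfd --device=/dev/dri \
  --group-add video --security-opt seccomp=unconfined \
  rocm/pytorch:latest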

DarkThoughts@fedia.io on 14 Oct 13:02 collapse

I distro-hopped so much after each previous distro eventually broke and I clearly wasn't smart enough to recover it. I'm honestly kinda sick of it, even if the immutable nature also annoys the shit out of me.

My GPU is a 6650 XT, which should in principle work with ROCm.

Which model specifically are you recommending? Llama-3.1-8B-Lexi-Uncensored-V2-GGUF? Because the original meta-llama ones are censored to all hell and Huggingface is not particularly easy to navigate, on top of figuring out the right model size & quantization being extremely confusing.

brucethemoose@lemmy.world on 14 Oct 13:13 collapse

Depends on what you mean by censored. I never have a problem with Qwen or Llama as long as I give them the right prompt and system prompt. It’s not like an API model; they have to continue whatever response you give them.

And… For what? If you are just looking for like ERP, check out drummer’s finetunes. Otherwise I tend to avoid “uncensored” finetunes as they dumb the model down a bit, but take your pick: huggingface.co/models?sort=modified&search=14B

But you are going to struggle beyond very small contexts if you can’t get ROCm working, as that means no flash attention anywhere.

Also, assuming you end up using kobold.cpp-rocm instead, I would use an IQ3_M or IQ3_XS GGUF quantization of a 14B model.

DarkThoughts@fedia.io on 14 Oct 14:37 collapse

Well, anything remotely raunchy gets an "I cannot participate in explicit content" default reply.

I am using the ROCm install of koboldcpp, but as I said, the ROCm option errors out with a CUDA error for some reason.

brucethemoose@lemmy.world on 14 Oct 14:50 next collapse

What’s the error? Did you manually override your architecture as an environment variable?

old.reddit.com/r/ROCm/comments/…/kgeuguq/

github.com/…/ROCmLibs-for-gfx1103-AMD780M-APU?tab…

You are gfx1032

DarkThoughts@fedia.io on 14 Oct 14:58 collapse
ggml_cuda_compute_forward: ADD failed
CUDA error: shared object initialization failed
  current device: 0, in function ggml_cuda_compute_forward at ggml/src/ggml-cuda.cu:2365
  err
ggml/src/ggml-cuda.cu:107: CUDA error

I didn't do anything past using yay to install the AUR koboldcpp-hipblas package, and customtkinter, since the UI wouldn't work otherwise. The koboldcpp-rocm page very specifically does not mention any other steps in the Arch section and the AUR page only mentions the UI issue.

brucethemoose@lemmy.world on 14 Oct 15:02 collapse

Mmmm, I would not use the AUR version, especially on Fedora. It probably relies on a bunch of Arch system packages, among other things.

Try installing the rocm fork directly, with its script: github.com/YellowRoseCx/koboldcpp-rocm?tab=readme…

EDIT: There does seem to be a specific quirk related to Fedora.

DarkThoughts@fedia.io on 14 Oct 15:06 collapse

I'm not using Fedora; I'm using Bazzite, which is an immutable distro based on Silverblue. I use an Arch distrobox for this since I can't really install anything directly into the system. The script is what I tried originally in a Fedora distrobox, which did not work at all.

brucethemoose@lemmy.world on 14 Oct 14:53 collapse

Oh, and again, for raunchy, there are explicit “RP” finetunes, like: huggingface.co/TheDrummer

But you just need to set a good system prompt or start a reply with “Sure,” and plain qwen or llama will write out unspeakable things.

WolfLink@sh.itjust.works on 14 Oct 14:01 next collapse

Could I run larger LLMs with multiple GPUs? E.g. would 2x3090 be able to run the 48GB models? Would I need NVLink to make it work?

brucethemoose@lemmy.world on 14 Oct 14:18 next collapse

Absolutely.

Only Aphrodite (and other enterprise backends like vLLM/sglang) can make use of NVLink, but even exllama or mlc-llm splits across GPUs nicely over PCIe; no NVLink needed.

2x 3090s or P40s is indeed a popular config among local runners, and is the perfect size for a 70B model. Some try to squeeze Mistral-Large in, but IMO it’s too tight a fit.
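With TabbyAPI, splitting over PCIe is just a couple of config lines, roughly like this (key names are from memory of config_sample.yml, and the model name is a placeholder):

# Relevant model-section keys for a 2x3090 split; merge these into the model: section of your config.yml.
cat > tabby-split-example.yml <<'EOF'
model:
  model_name: Some-70B-exl2-4.0bpw   # placeholder for whatever exl2 folder you downloaded
  max_seq_len: 32768
  cache_mode: Q4                     # quantized KV cache stretches the 48GB pool further
  gpu_split_auto: false
  gpu_split: [21, 23]                # GB reserved per GPU; leave headroom on the card driving a display
EOF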

sntx@lemm.ee on 15 Oct 01:55 collapse

Is there an inherent benefit to using NVLink? Should I specifically try out Aphrodite over the other recommendations when I have 2x 3090 with NVLink available?

brucethemoose@lemmy.world on 15 Oct 14:08 collapse

So there are multiple ways to split models across GPUs (layer splitting, which uses one GPU then another; expert parallelism, which puts different experts on different GPUs), but the one you’re interested in is “tensor parallelism”.

This requires a lot of communication between the GPUs, and NVLink speeds that up dramatically.

It comes down to this: If you’re more interested in raw generation speed, especially with parallel calls of smaller models, and/or you don’t care about long context (with 4K being plenty), use Aphrodite. It will ultimately be faster.

But if you simply want to stuff the best/highest-quality model you can into VRAM, especially at longer context (>4K), use TabbyAPI. Its tensor parallelism only works over PCIe, so it will be a bit slower, but it will still stream text much faster than you can read. It can simply hold bigger, better models at higher quality in the same 48GB VRAM pool.
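If you do go the Aphrodite route on the NVLinked pair, the launch looks roughly like this. It mirrors vLLM’s flags, but the entrypoint and the model repo here are just examples from memory, so check aphrodite-engine’s docs:

# Aphrodite/vLLM-style tensor parallelism across both 3090s, with a short context per the tradeoff above.
pip install aphrodite-engine
python -m aphrodite.endpoints.openai.api_server \
  --model Qwen/Qwen2.5-32B-Instruct-AWQ \
  --tensor-parallel-size 2 \
  --max-model-len 4096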

brucethemoose@lemmy.world on 14 Oct 15:44 collapse

Also, AMD is not off the table for multi-gpu. I know some LLM runners are buying used 32GB MI100s.

sntx@lemm.ee on 15 Oct 02:05 next collapse

Thanks for the writeup! So far I’ve been using Ollama, but I’m always open to trying out alternatives; to be honest, it seems I was oblivious to their existence.

Your post is suggesting that the same models with the same parameters generate different results when run on different backends?

I can see how the backend would have an influence on handling concurrent API calls, RAM/VRAM efficiency, supported hardware/drivers, and general speed.

But going as far as different context windows and quality-degrading issues is news to me.

brucethemoose@lemmy.world on 15 Oct 14:14 collapse

Your post is suggesting that the same models with the same parameters generate different results when run on different backends

Yes… sort of. Different backends support different quantization schemes, for both the weights and the KV cache (the context). There are all sorts of tradeoffs.

There are even more exotic weight quantization schemes (AQLM, VPTQ) that are much more VRAM-efficient than llama.cpp or exllama, but I skipped mentioning them (unless someone asks) because they’re so clunky to set up.

Different backends also support different samplers. exllama and kobold.cpp tend to be at the cutting edge of this, with things like DRY for better long-form generation or grammar.

possiblylinux127@lemmy.zip on 15 Oct 15:29 next collapse

Or we could all just use ollama. It is way simpler and works fine without a GPU even. I don’t really understand the problem with it.

brucethemoose@lemmy.world on 15 Oct 21:54 collapse

It’s less optimal.

On a 3090, I simply can’t run Command-R or Qwen 2.5 32B well at 64K-80K context with ollama. It’s slow even at lower context, and the lack of DRY sampling and some other things majorly hit quality.

Ollama is meant to be turnkey, and that’s fine, but LLMs are extremely resource-intensive. Sometimes the manual setup/configuration is worth it to squeeze out every ounce of extra performance and quantization quality.

Even on CPU-only setups, you are missing out on (for instance) the CPU-optimized quantizations llama.cpp offers now, or the more advanced sampling kobold.cpp offers, or more fine grained tuning of flash attention configs, or batched inference, just to start.

And as I hinted at, I don’t like some other aspects of ollama, like how they “leech” off llama.cpp and kinda hide the association without contributing upstream, some hype and controversies in the past, and hints that they may be cooking up something commercial.

possiblylinux127@lemmy.zip on 15 Oct 22:05 collapse

I’m not going to lie, I don’t really see evidence supporting your claims. What evidence do you have?

Ollama is llama.cpp with a web wrapper and some configs to make sure it works.

brucethemoose@lemmy.world on 15 Oct 22:24 collapse

To go into more detail:

  • Exllama is faster than llama.cpp with all other things being equal.

  • exllama’s quantized KV cache implementation is also far superior, and nearly lossless at Q4 while llama.cpp is nearly unusable at Q4 (and needs to be turned up to Q5_1/Q4_0 or Q8_0/Q4_1 for good quality)

  • With ollama specifically, you get locked out of a lot of knobs like this enhanced llama.cpp KV cache quantization, more advanced quantization (like iMatrix IQ quantizations or the ARM/AVX optimized Q4_0_4_4/Q4_0_8_8 quantizations), advanced sampling like DRY, batched inference and such.

It’s not evidence or opinions… it’s missing features; that’s my big issue with ollama. I simply get far worse, and far slower, LLM responses out of ollama than TabbyAPI/EXUI on the same hardware, and there’s no way around it.

Also, I’ve been frustrated with implementation bugs in llama.cpp specifically, like how Llama 3.1 (for instance) was bugged past 8K at launch because llama.cpp didn’t properly support its RoPE scaling. Ollama inherits all these quirks.

I don’t want to go into the issues I have with the ollama devs behavior though, as that’s way more subjective.

scrubbles@poptalk.scrubbles.tech on 15 Oct 15:32 collapse

I like the look of exui, but is there a way to run it without torch or needing a GPU? I have Tabby running on a separate computer; like SillyTavern, I just want to connect to the API, not host it locally.

brucethemoose@lemmy.world on 15 Oct 21:52 collapse

Nah, I should have mentioned it, but exui is its own “server” like TabbyAPI.

Just run exui on the host that would normally serve tabby, and access the web ui through a browser.

If you need an API server, TabbyAPI fills that role.