Replaced $40/month in AI API subscriptions with self-hosted Ollama + n8n
from quickbitesdev@discuss.tchncs.de to selfhosted@lemmy.world on 12 Apr 18:42
https://discuss.tchncs.de/post/58337806

Quick post about a change I made that’s worked out well.

I was using the OpenAI API for automations in n8n — email summaries, content drafts, that kind of thing. I was spending ~$40/month.

Switched everything to Ollama running locally. The migration was pretty straightforward since n8n just hits an HTTP endpoint. Changed the URL from api.openai.com to localhost:11434 and updated the request format.
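For anyone curious what "updated the request format" means in practice, here's a minimal sketch of the payload and response differences, assuming Ollama's native /api/chat endpoint (model names and the prompt are illustrative, not OP's actual workflow):

```python
import json

# Where the n8n HTTP node used to point vs. where it points now.
OPENAI_URL = "https://api.openai.com/v1/chat/completions"
OLLAMA_URL = "http://localhost:11434/api/chat"

def to_ollama_payload(openai_payload: dict, model: str = "llama3:8b") -> dict:
    """Map an OpenAI chat-completions body to Ollama's native /api/chat body."""
    return {
        "model": model,
        # The role/content message format is the same in both APIs.
        "messages": openai_payload["messages"],
        # Ollama streams by default; n8n wants one JSON response back.
        "stream": False,
    }

def extract_reply(ollama_response: dict) -> str:
    """Ollama puts the reply under "message", not "choices[0].message"."""
    return ollama_response["message"]["content"]

openai_body = {
    "model": "gpt-4o-mini",
    "messages": [{"role": "user", "content": "Summarize this email: ..."}],
}
print(json.dumps(to_ollama_payload(openai_body), indent=2))
```

Recent Ollama versions also expose an OpenAI-compatible endpoint at /v1/chat/completions, so depending on the n8n node you use, you may get away with changing only the base URL and model name.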

For most tasks (summarization, classification, drafting) the local models are good enough. Complex reasoning is worse but I don’t need that for automation workflows.

Hardware: i7 with 16GB RAM, running Llama 3 8B. Plenty fast for async tasks.

#selfhosted


TropicalDingdong@lemmy.world on 12 Apr 18:52 next collapse

Any quality difference?

TheMightyCat@ani.social on 12 Apr 19:23 collapse

It depends on what OP was using before, but going from something like GPT5.2 to Llama 3 8B will be a massive difference (although OP says they only use it for basic tasks, which offsets that somewhat).

Llama 3 already being a very old model doesn’t help either.

I run Qwen3.5-35B-A3B-AWQ-4bit, which is leagues ahead of Llama 3 8B but still noticeably behind the big hosted models.

This is not to say open source is bad; if one had the resources to run something like Qwen3.5-397B-A17B, it would also be up there.

[deleted] on 12 Apr 19:04 next collapse
.
Shady_Shiroe@lemmy.world on 12 Apr 19:05 next collapse

I only ever use my local AI as a Home Assistant voice assistant on my phone, but it’s more of a gimmick/party trick since I only have temperature sensors currently (only got into HA recently), and it can’t access WiFi, so it just sits quietly unloaded on my TrueNAS server.

kambusha@sh.itjust.works on 12 Apr 19:19 next collapse

What’s the model name to pull?

0ndead@infosec.pub on 12 Apr 19:42 collapse

Free bullshit generator