LLMS Are Not Fun (orib.dev)
from codeinabox@programming.dev to programming@programming.dev on 30 Dec 2025 19:24
https://programming.dev/post/43220323

#programming


Technus@lemmy.zip on 30 Dec 2025 19:29 next collapse

I’ve maintained for a while that LLMs don’t make you a more productive programmer, they just let you write bad code faster.

90% of the job isn’t writing code anyway. Once I know what code I wanna write, banging it out is just pure catharsis.

Glad to see there’s other programmers out there who actually take pride in their work.

wesker@lemmy.sdf.org on 30 Dec 2025 19:46 next collapse

It’s been my experience that the quality of code is greatly influenced by the quality of your project instructions file, and your prompt. And of course what model you’re using.

I am not necessarily a proponent of AI, I just found myself being reassigned to a team that manages AI for developer use. Part of my responsibilities has been to research how to successfully and productively use the tech.

Technus@lemmy.zip on 30 Dec 2025 21:25 collapse

But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

There’s a lot of busywork that I could see it being good for, like if you’re asked to generate 100 test cases for an API with a bunch of tiny variations, but that kind of work is inherently low value. And in most cases you’re probably better off using a tool designed for the job, like a fuzzer.
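For that kind of combinatorial test busywork, a plain loop often beats both hand-writing and LLM generation. A minimal Python sketch (the `parse_amount` function and its variations are hypothetical, just to illustrate the shape of the approach):

```python
import itertools

def parse_amount(value, currency="USD", strict=True):
    # Hypothetical API under test: returns (float, currency) or raises.
    if strict and not value.replace(".", "", 1).isdigit():
        raise ValueError(value)
    return (float(value), currency)

# Generate the cross-product of tiny input variations instead of
# hand-writing (or LLM-generating) a hundred near-identical cases.
values = ["0", "1.5", "999.99"]
currencies = ["USD", "EUR", "JPY"]
strict_flags = [True, False]

cases = list(itertools.product(values, currencies, strict_flags))
print(len(cases))  # 18 combinations from three small lists

for value, currency, strict in cases:
    amount, cur = parse_amount(value, currency, strict)
    assert cur == currency and amount == float(value)
```

A property-based tool like Hypothesis, or an actual fuzzer, takes this further by generating the variations for you.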

wesker@lemmy.sdf.org on 30 Dec 2025 21:41 collapse

But at a certain point, it seems like you spend more time babysitting and spoon-feeding the LLM than you do writing productive code.

I’ve found it pretty effective to not babysit, but instead have the model iterate on its instructions file. If it did something wrong or unexpected, I explain what I wanted it to do, and ask it to update its project instructions to avoid the pitfall in future. It’s more akin to calm and positive reinforcement.

Obviously YMMV. I am in charge of a large codebase of Python cron automations that interact with a handful of services and APIs. I’ve rolled a ~600-line instructions file that has allowed me to pretty successfully use Claude to stand up full object-oriented clients from scratch, complete with dependency injection, schema and contract data models, unit tests, etc.

I do end up having to make stylistic tweaks, and sometimes reinforce things like DRY, but I actually enjoy that part.

EDIT: Whenever I begin to feel like I’m babysitting, it’s usually due to context pollution and the best course is to start a fresh agent session.

Cyberflunk@lemmy.world on 31 Dec 2025 14:06 collapse

Your experience isn’t other people’s experience. Just because you can’t get results doesn’t mean the technology is invalid, just your use of it.

“Skill issue,” as the youngsters say.

AnarchistArtificer@slrpnk.net on 31 Dec 2025 15:16 next collapse

I’d rather hone my skills at writing better, more intelligible code than spend that same time learning how to make LLMs output slightly less shit code.

Whenever we don’t actively use and train our skills, they will inevitably atrophy. Something I think about quite often on this topic is Plato’s argument against writing. His view is that writing things down is “a recipe not for memory, but for reminder”, leading to a reduction in one’s capacity for recall and thinking. I don’t disagree with this, but where I differ is that I find it a worthwhile tradeoff when accounting for all the ways that writing increases my mental capacities.

For me, weighing the tradeoff is the most important gauge of whether a given tool is worthwhile or not. And personally, using an LLM for coding is not worth it when considering what I gain vs. lose from prioritising that over growing my existing skills and knowledge.

Feyd@programming.dev on 31 Dec 2025 18:16 next collapse

It’s interesting that all the devs I already respected don’t use it, or use it very sparingly, and many of the devs I least respected sing its praises incessantly. Seems to me like a “skill issue” is what leads to thinking this garbage is useful.

FizzyOrange@programming.dev on 31 Dec 2025 23:42 collapse

Everyone is talking past each other because there are so many different ways of using AI and so many things you can use it for. It works ok for some, it fails miserably for others.

Lots of people only see one half of that and conclude “it’s shit” or “it’s amazing” based on an incomplete picture.

The devs you respect probably aren’t working on CRUD apps and landing pages and little hacky Python scripts. They’re probably writing compilers and game engines or whatever. So of course it isn’t as useful for them.

That doesn’t mean it doesn’t work for people mocking up a website or whatever.

namingthingsiseasy@programming.dev on 01 Jan 2026 19:39 collapse

That’s certainly one possibility. But another possibility is that the people who praise LLMs are not very good at judging whether the code they generate is of good quality or not…

itkovian@lemmy.world on 30 Dec 2025 19:30 next collapse

A simple but succinct summary of the real cost of LLMs: literally everything human, traded for something that is just a twisted reflection of the greed of the richest.

brucethemoose@lemmy.world on 30 Dec 2025 19:38 next collapse

If you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time. Micromanaging them, watching to preempt slop and derailment, is frustrating and rage-inducing.

Finetuning LLMs for niche tasks is fun. It’s explorative, creative, cumulative, and scratches a ‘must optimize’ part of my brain. It feels like you’re actually building and personalizing something, and it teaches you how they work and where they fail, like making any good program or tool. It feels like you’re part of a niche ‘old internet’ hacking community, not in the maw of Big Tech.

Using proprietary LLMs over APIs is indeed soul crushing. IMO this is why devs who have to use LLMs should strive to run finetunable, open weights models where they work, even if they aren’t as good as Claude Code.

But I think most don’t know they exist, or had a terrible experience with terrible ollama defaults and hence assume that must be what the open model ecosystem is like.

BlameThePeacock@lemmy.ca on 30 Dec 2025 19:49 next collapse

Improving your input and the system message can also be part of that. There are multiple optimizations available for these systems that people aren’t really good at yet.

It’s like watching Grandma google “Hi, I’d like a new shirt” back in the day and then having her complain that she’s getting absolutely terrible search results.

brucethemoose@lemmy.world on 30 Dec 2025 20:02 collapse

Mmmmm. Pure “prompt engineering” feels soulless to me. And you have zero control over the endpoint, so changes on their end can break your prompt at any time.

Messing with logprobs and raw completion syntax was fun, but the US proprietary models took that away. Even sampling is kind of restricted now, and primitive compared to what’s been developed in open source.

ExLisper@lemmy.curiana.net on 31 Dec 2025 15:51 collapse

What he’s talking about is teaching a person and watching them grow, become a better engineer, and move on to do great things, not tweaking some settings in a tool so it works better. How do people not understand that?

lIlIlIlIlIlIl@lemmy.world on 30 Dec 2025 19:50 next collapse

Ok, don’t use them…?

This could have been a facebook status

NaibofTabr@infosec.pub on 30 Dec 2025 21:00 collapse

The problem is that many employers are requiring employees to use them.

mindbleach@sh.itjust.works on 30 Dec 2025 20:05 next collapse

Experts who enjoy doing [blank] the hard way don’t enjoy the tool that lets novices do [blank] at a junior level.

Somehow this means the tool is completely worthless and nobody should ever use it.

justOnePersistentKbinPlease@fedia.io on 30 Dec 2025 20:27 collapse

Except it becomes more dangerous for a novice to use an LLM.

It will introduce vulnerabilities and issues that the novice will overlook.

wesker@lemmy.sdf.org on 30 Dec 2025 20:46 collapse

This is extremely valid.

The biggest reason I’m able to use LLMs efficiently and safely is all my prior experience. I’m able to write up all the project guard rails and the expected architecture, call out gotchas, etc. These are the things that actually keep the output in spec (usually).

If a junior hasn’t already manually established this knowledge and experience, much of the code that they’re going to produce with AI is gonna be crap with varying levels of deviation.

justOnePersistentKbinPlease@fedia.io on 30 Dec 2025 21:03 collapse

How I guide the juniors under me is to have it generate individual methods to accomplish specific tasks, but not entire classes/files.

mindbleach@sh.itjust.works on 30 Dec 2025 21:30 collapse

You know it’s crazy, someone just told me that’s more dangerous than having them do nothing.

justOnePersistentKbinPlease@fedia.io on 30 Dec 2025 23:40 collapse

They use it with heavy oversight from the senior devs. We discourage its use and teach them the very basic errors it always produces as a warning not to trust it.

E.g., ChatGPT will always dump all of the event handlers for a form in one massive method.
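To illustrate that anti-pattern, here is a toy Python sketch (the class and event names are made up; real form code would be in whatever GUI framework the juniors use): one generated “god method” that absorbs every event branch, versus one small handler per event.

```python
class FormHandler:
    """The anti-pattern: every form event funneled into one method."""

    def handle_event(self, event, payload=None):
        # As the form grows, this method accretes every branch.
        if event == "submit":
            return f"saved {payload}"
        elif event == "cancel":
            return "discarded"
        elif event == "reset":
            return "cleared"
        # ...dozens more elif branches in real generated code


class FormHandlerSplit:
    """One small handler per event, dispatched by name."""

    def on_submit(self, payload):
        return f"saved {payload}"

    def on_cancel(self, payload):
        return "discarded"

    def on_reset(self, payload):
        return "cleared"

    def handle_event(self, event, payload=None):
        # Look up the per-event handler instead of branching inline.
        handler = getattr(self, f"on_{event}")
        return handler(payload)
```

The split version keeps each handler small and independently testable, which is exactly the kind of structure reviewers end up pushing generated code toward.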

We use it within the scope of things we already know about.

Evotech@lemmy.world on 30 Dec 2025 20:14 next collapse

Disagree

xthexder@l.sw0.com on 31 Dec 2025 03:44 collapse

Elaborate?

Evotech@lemmy.world on 31 Dec 2025 07:10 collapse

Just making a whole stack in an hour is pretty fun. You can just have an idea, and a couple of hours later have a database, backend, and frontend running in containers locally, doing exactly what you wanted.

That’s pretty fun

Basically anything you want to make, you can just make now; you don’t need to know anything about it beforehand.

SchwertImStein@lemmy.dbzer0.com on 31 Dec 2025 12:04 collapse

You couldn’t do it in a couple of hours without an LLM?

Evotech@lemmy.world on 31 Dec 2025 12:24 collapse

A full stack? No, I’m not a programmer

myfunnyaccountname@lemmy.zip on 30 Dec 2025 21:06 next collapse

If it allows me to kick out code faster to meet whatever specs/acceptance criteria are laid out before me, fine. The hell do I care if the code is good or bad? If it works, it works. My company doesn’t give af about me. I’m just a number, no matter how many “we are family” speeches they give, or how often they push the “we are all a team and will win” line… We aren’t all a team. Why should I care more than “does it work”? As long as profits go up, the company is happy. They don’t care how good or pretty my code is.

wizardbeard@lemmy.dbzer0.com on 30 Dec 2025 21:38 collapse

Tell me again how you’ve never become the subject matter expert on something simply because you were around when it was built.

Or had to overhaul a project due to a “post-live” requirements change a year later.

I write “good enough” code for me, so I don’t want to take a can opener to my head when I inevitably get asked to change things later.

It also lets me be lazier, as 9 times out of 10 I can get most of my code from a previous project and I already know it front to back. I get to fuck about and still get complex stuff out fast enough to argue for a raise.

myfunnyaccountname@lemmy.zip on 31 Dec 2025 01:31 collapse

Been the SME, and completely architected and implemented the entire middleware server farm for my last company. First on IBM, after taking it over from someone else who started it: just a “here you go” takeover. Then moving from an IBM shop to Oracle, because the VP wanted a gold star and wouldn’t listen to anyone. I left when they were moving to Red Hat, when the next VP came in and wanted their gold star. A little over 400 servers. Been there, done that.

lung@lemmy.world on 30 Dec 2025 22:16 next collapse

The argument doesn’t check out. You can still manage people, and they can use whatever tools make them productive. Good understanding of the code and the ability to pass PR reviews aren’t going anywhere, nor is programmer skill.

Avicenna@programming.dev on 02 Jan 2026 05:33 collapse

Not unless the claims are true that companies are hiring fewer junior devs in favour of LLMs with senior coder oversight. If this is indeed a real trend and AGI is not achieved, we might have a senior coder shortage in the future.

lung@lemmy.world on 02 Jan 2026 20:09 collapse

I think this is true to some degree, but not exclusively true; new grads still get jobs. However, I think it’ll take some time for universities to catch up with the changes they need to make to refocus on architecture, systems design & skilled use of LLMs

My opinion is that the demand for software is still dramatically higher than what can be achieved by hiring every single senior dev + LLM. I.e. there will need to be more people doing it in the future regardless of efficiency gains

codeinabox@programming.dev on 30 Dec 2025 22:23 next collapse

I use AI coding tools, and I often find them quite useful, but I completely agree with this statement:

And if you think of LLMs as an extra teammate, there’s no fun in managing them either. Nurturing the personal growth of an LLM is an obvious waste of time.

At first I found AI coding tools to be like a junior developer, in that they will keep trying to solve the problem and never give up or grow frustrated. However, I can’t teach an LLM. Yes, I can give it guard rails and detailed prompts, but it can’t learn in the same way a teammate can; it will always require supervision and review of its output. Whereas I can teach a teammate new or different ways to do things, and over time their skills and knowledge will grow, as will my trust in them.

AlecSadler@lemmy.blahaj.zone on 31 Dec 2025 04:57 next collapse

+10000 to this comment

footfaults@lemmygrad.ml on 31 Dec 2025 05:07 next collapse

This has been my experience too.

[deleted] on 31 Dec 2025 12:50 collapse
.
realitista@lemmus.org on 30 Dec 2025 23:19 next collapse

This is exactly how I feel about LLM’s. I will use them if I have to to get something done that would be time consuming or tedious. But I would never willingly sign up for a job where that’s all it is.

Cyberflunk@lemmy.world on 31 Dec 2025 14:04 next collapse

Nurturing the personal growth of an LLM is an obvious waste of time.

I think this is short-sighted. Engineers will spend years refining nvim, tmux, and zsh to be the tool they want. The same applies here. OP is framing it like it’s a human; it’s a tool. Learn the tool, understand why it works the way it does, just like emacs or ripgrep or something.

BatmanAoD@programming.dev on 31 Dec 2025 15:33 collapse

I think you’re misunderstanding that paragraph. It’s specifically explaining how LLMs are not like humans, and one way is that you can’t “nurture growth” in them the way you can for a human. That’s not analogous to refining your nvim config and habits.

voodooattack@lemmy.world on 31 Dec 2025 21:11 collapse

This person is right. But I think the methods we use to train them are what’s fundamentally wrong. Brute-force learning? Randomised datasets past the coherence/comprehension threshold? And the rationale is that this is done for the sake of optimisation and in the name of efficiency? I can see that overfitting is a problem, but did anyone look hard enough at this problem? Or did someone just jump a fence at the time, and then everyone decided to follow along and roll with it because it “worked”, and it somehow became the gold standard that nobody can question at this point?

bitcrafter@programming.dev on 31 Dec 2025 22:00 next collapse

The researchers in the academic field of machine learning who came up with LLMs are certainly aware of their limitations and are exploring other possibilities, but unfortunately what happened in industry is that people noticed that one particular approach was good enough to look impressive and then everyone jumped on that bandwagon.

voodooattack@lemmy.world on 01 Jan 2026 06:39 collapse

That’s not the problem though. Because if I apply my perspective I see this:

Someone took a shortcut because of an external time-crunch, left a comment about how this is a bad idea and how we should reimplement this properly later.

But the code worked and was deployed in a production environment despite the warning, and at that specific point it transformed from being “abstract procedural logic” to being “business logic”.

VoterFrog@lemmy.world on 31 Dec 2025 22:23 collapse

The generalized learning is usually just the first step. Coding LLMs typically go through more rounds of specialized training afterwards to tune and focus them on solving those types of problems. Then there’s RAG, MCP, and simulated reasoning, which are technically not training methods but do further improve the relevance of the outputs. There’s a lot of ongoing work in this space still; we haven’t even seen the standard settle yet.
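For readers unfamiliar with the retrieval half of RAG, here is a deliberately tiny Python sketch (the scoring method and documents are toy assumptions; real systems use embeddings and a vector store): rank stored snippets by word overlap with the query, then prepend the winner to the prompt.

```python
def retrieve(query, documents, top_k=1):
    """Toy retrieval: rank documents by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

docs = [
    "The cron jobs run nightly at 02:00 UTC.",
    "API clients must retry with exponential backoff.",
    "Unit tests live under tests/ and use pytest.",
]

context = retrieve("when do the cron jobs run?", docs)
print(context[0])  # → The cron jobs run nightly at 02:00 UTC.

# The retrieved snippet would then be prepended to the LLM prompt
# so the model answers from project context rather than memory.
```

Swapping the overlap score for embedding similarity is the main difference between this toy and a production RAG pipeline; the prompt-assembly step stays essentially the same.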

voodooattack@lemmy.world on 01 Jan 2026 06:18 collapse

Yeah, but what I meant was: we took a wrong turn along the way, but now that it’s set in stone, sunk cost fallacy took over. We (as senior developers) are applying knowledge and approaches obtained through a trap we would absolutely caution and warn a junior against until the lesson sticks, because it IS a big deal.

Reminds me of this gem:


www.monkeyuser.com/2018/final-patch/