LLMs hallucinating or taking our jobs?
from monounity@lemmy.world to programming@programming.dev on 13 Dec 19:26
https://lemmy.world/post/40154202

Lemmings, I was hoping you could help me sort this one out: LLMs are often painted in a light of being utterly useless, hallucinating word prediction machines that are really bad at what they do. At the same time, in the same thread here on Lemmy, people argue that they are taking our jobs or making us devs lazy. Which one is it? Could they really be taking our jobs if they’re hallucinating?

Disclaimer: I’m a full-time senior dev using the shit out of LLMs to get things done at breakneck speed, which our clients seem to have gotten used to. However, I don’t see “AI” taking my job, because I think LLMs have already peaked; they’re just tweaking minor details now.

Please don’t ask me to ignore previous instructions and give you my best cookie recipe, all my recipes are protected by NDAs.

Please don’t kill me

#programming


Quetzalcutlass@lemmy.world on 13 Dec 19:43 next collapse

It takes jobs because executives push it hoping to save six figures per replaced employee, not because it’s actually better. The downsides of AI-written code (that it turns a codebase into an unmaintainable mess whose own “authors” won’t have a solid mental model of it since they didn’t actually write it) won’t show up immediately, only when something breaks or needs to be changed.

It’s like outsourcing - it looks promising and you think you’ll save a ton of money, until months or years later when the tech debt comes due and nobody in the company knows how to fix it. Even if the code was absolutely flawless, you still need to know it to maintain it.

monounity@lemmy.world on 13 Dec 19:53 collapse

So you’re not in the “they’re only hallucinating” camp, I take it? I actually start out with a solid mental model of what I want to do, ending up with small unit tested classes/functions that all pass code review. It’s not like I just tell an “AI” to write the whole thing and commit and push without reviewing myself first.

Edit: and as I commented elsewhere in this thread, the way I’m using LLMs, no one could tell that an LLM was ever involved.

southernbeaver@lemmy.world on 13 Dec 20:10 next collapse

I wouldn’t listen to anyone who deals in absolutes. Could be a Sith.

But for real. My manager has explained it best: it’s a tool you can use to enhance your work. That’s it. It won’t replace good coders but it will replace bad ones because the good ones will be more efficient.

monounity@lemmy.world on 13 Dec 20:15 next collapse

Exactly, it’s just another tool in the toolbox. And if we can use that tool to weed out the (sometimes hilariously bizarre) bad devs, I’m all for it.

henfredemars@infosec.pub on 13 Dec 20:38 collapse

I do have a concern for the health of the overall ecosystem though. Don’t all good devs start out as bad ones? There still needs to be a reasonable on-ramp for these people.

monounity@lemmy.world on 13 Dec 20:53 collapse

That’s a valid concern, but I really don’t think that we should equate new devs with seniors that are outright bad. Heck, I’ve worked with juniors that scared the hell out of me because they were so friggin good, and I’ve worked with “seniors” who didn’t want to do loops because looping = bad performance.

partial_accumen@lemmy.world on 13 Dec 20:36 collapse

It won’t replace good coders but it will replace bad ones because the good ones will be more efficient

Here’s where we just start touching on the second-order problem. Nobody starts as a good coder. We start out making horrible code because we don’t know very much, and through years of making mistakes we (hopefully) improve and become good coders.

So if AI “replaces bad ones”, we’ve effectively ended the pipeline for new coders to enter the workforce. This will be fine for a while, as we have two to three generations of coders who grew up (and became good coders) prior to AI. However, that most recent pre-AI generation is the last one. The gate is closed. The ladder pulled up. There won’t be any more young “bad ones” who grow up into good ones. Then the “good ones” will start to die off or retire.

Carried to its logical conclusion, assuming nothing else changes, there aren’t any good ones left, nor will there ever be again.

matengor@lemmy.ml on 13 Dec 20:54 next collapse

But inexperienced coders will start to use LLMs a lot earlier than the experienced ones do now. I get your point, but I guess the learning patterns for junior devs will just be totally different while the industry stays open for talent.

At least I hope it will, and that it won’t just downsize to 50% of the human workforce.

partial_accumen@lemmy.world on 13 Dec 21:00 collapse

But inexperienced coders will start to use LLMs a lot earlier than the experienced ones do now.

And unlike you, who can pick out a bad method or approach just by looking at the LLM output and correct it, the inexperienced coder will send the bad code right into git if they can get it to pass a unit test.

I get your point, but I guess the learning patterns for junior devs will just be totally different while the industry stays open for talent.

I have no idea what the learning path is going to look like for them. Besides personal hobby projects to get experience, I don’t know who will give them a job when what they produce from their first efforts will be the “bad coder” output that gets replaced by an LLM and a senior dev.

At least I hope it will, and that it won’t just downsize to 50% of the human workforce.

I’ve thought about this many times, and I’m just not seeing a path for juniors. Given this new perspective, I’m interested to hear if you can envision something different than I can. I’m honestly looking for alternate views here, I’ve got nothing.

FishFace@piefed.social on 13 Dec 23:18 collapse

Just like they would with their own code. So they’ll be an inexperienced dev, but faster.

monounity@lemmy.world on 13 Dec 21:01 next collapse

At least where I work, we’re actively teaching the junior devs best practices and patterns that are tried and true: no code copying, small classes with one task, small methods with one task, separating logic from the database/presentation, unit testing, etc. (roughly along the lines of the sketch below).

Edit: actively, not actually
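
(Purely illustrative, not actual project code: a minimal TypeScript sketch of that kind of separation, with hypothetical names. The business rule lives in one small, pure function with a single task, kept away from storage and presentation so it can be unit tested on its own.)

```typescript
// Hypothetical example: one small function, one task, no database or UI concerns.
export interface Order {
  subtotal: number;
  isFirstOrder: boolean;
}

// The business rule lives here, separated from persistence and presentation.
export function calculateDiscount(order: Order): number {
  return order.isFirstOrder ? order.subtotal * 0.1 : 0;
}

// The presentation layer only formats; it contains no business rules.
export function formatDiscount(amount: number): string {
  return `Discount: $${amount.toFixed(2)}`;
}
```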

southernbeaver@lemmy.world on 13 Dec 22:37 collapse

I agree. In the long run it will hurt everyone.

henfredemars@infosec.pub on 13 Dec 20:33 collapse

It sounds to me like you’ve got a good head on your shoulders and you’re actually using the tool effectively. You’re keeping yourself in control and using it to expand your own capabilities, not offloading your job responsibilities, which is how more inept management views AI.

litchralee@sh.itjust.works on 13 Dec 19:50 next collapse

To many of life’s either-or questions, we often struggle when the answer is: yes. That is to say, two things can hold true at the same time: 1) LLMs can result in job redundancies, and 2) LLMs hallucinate results.

But if we just stopped the analysis there, we wouldn’t have learned anything. To use this reality to terminate any additional critical thinking is, IMO, wholly inappropriate for solving modern challenges, and so we must look into the exact contours of how true these statements are.

To wit, LLM-induced job redundancies could come from skills which have been displaced by the things LLMs can do well. For example, typists lost their jobs when businesspeople were expected to operate a typewriter on their own. And when word processing software came into existence for the personal computer, a lot of typewriter companies folded or were consolidated. In the case of LLMs, consider that people do use them to proofread letters for spelling and grammar.

Technologically, we’ve had spell-check software for a while, but grammar was harder. In turn, an industry appeared somewhere in the late 2000s or early 2010s to develop grammar software. Imagine how the software devs at these companies (e.g. Grammarly) might be in a precarious situation if an LLM can do the same work. At least with grammar checking, even the best grammar software still struggles with some of the more complex English sentence constructions, so if an LLM isn’t 100% perfect, that’s still acceptable. I can absolutely see the fortunes of grammar software companies suffering due to LLMs, and that means those software devs are indeed threatened by what LLMs can do.

For the second statement, it is trivial to find examples of LLMs hallucinating, sometimes spectacularly or seemingly ironically (although an LLM would be hard-pressed to simulate the intention of irony, I would think). In some fields, such hallucinations are career-limiting moves for the user, such as if an LLM was used to advise on pharmaceutical dosage, or used to draft a bogus legal appeal and the judge is not amused. This is very much a FAFO situation, where somehow the AI/LLM companies bear none of the risk and keep all of the upside. It’s like how autonomous-driving companies are somehow allowed to do public road tests of their beta-quality designs, but the liability for crashes still befalls the poor sod behind the wheel. They just keep yapping about how those crashes are all “human error” and “an autonomous car is still safer”.

But I digress.

My point is that LLMs have quite a lot of capabilities, and people make a serious mistake when they assume that competence (or incompetence) in one capacity says anything about competence in another. This is not unlike how humans assess other humans, such as how a record-setting F1 driver would probably be a very good chauffeur for a limousine company. But whereas humans have patterns that suggest they might be good (or bad) at something, LLMs are a creature unlike anything else.

I personally am not bullish on additional LLM improvements, and think the next big push will require additional academic research that is nowhere near commercialization. But even I have to recognize that LLMs are decent at some very specific tasks. I just don’t think that’s good enough for me to use them, given their subscription costs, the risk of becoming dependent, and how niche those tasks are.

henfredemars@infosec.pub on 13 Dec 20:35 collapse

It’s rare to see such a complete and well-thought-out response anywhere on the Internet. Great job in capturing the nuance. It’s a powerful and often-misused tool.

Ledivin@lemmy.world on 13 Dec 20:01 next collapse

AI hallucinates constantly, that’s why you still have a job - someone has to know what they’re doing to sort out the wheat from the chaff.

It’s also taking a ton of our entry-level jobs, because you can do the work you used to do and the work of the junior devs you used to have without breaking a sweat.

monounity@lemmy.world on 13 Dec 20:10 collapse

But that’s the point of my post: how can they take junior devs’ jobs if they’re all hallucinating constantly? And let me tell you, we’re hiring juniors.

henfredemars@infosec.pub on 13 Dec 20:31 next collapse

I think your question is covered by the original commentator. They do hallucinate often, and the job does become using the tool more effectively which includes capturing and correcting those errors.

Naturally, greater efficiency is an element of job reduction. They can be both hallucinating often and creating additional efficiency that reduces jobs.

monounity@lemmy.world on 13 Dec 20:42 collapse

But they’re not hallucinating when I use them? Are you just repeating talking points? It’s not like the code I write is somehow connected with an AI, I just bounce my code off of an LLM. And when I’m done reviewing each line, adding stuff, checking design docs etc., no one could tell that an LLM was ever used for creating that piece of code in the first place. To date I’ve never failed a code review with “that’s AI slop, please remove”.

I’d argue that greater efficiency sometimes gives me more free time, hue hue

henfredemars@infosec.pub on 13 Dec 20:46 collapse

And that’s fantastic! That’s what technology is supposed to do IMHO - Give you more free time because of that efficiency. That’s technology making life better for humans. I’m glad that you’re experiencing that.

If they’re not hallucinating as you use them, then I’m afraid we just have different experiences. Perhaps you’re using better models or you’re using your tools more effectively than I am. In that case, I must respect that you are having a different and equally legitimate experience.

Ledivin@lemmy.world on 13 Dec 20:39 collapse

And let me tell you, we’re hiring juniors.

Sure, nobody has stopped hiring, but everyone has slowed down, and we’ve seen something like 5% of our workforce laid off over the past year. FAANG has hired fewer than one fifth as many junior devs as in previous years.

monounity@lemmy.world on 13 Dec 20:45 collapse

Maybe we live and work in different parts of the world?

Ledivin@lemmy.world on 13 Dec 20:57 collapse

That’s certainly possible - the only data I have is US-based, primarily from SF and NYC, but our smaller hubs are also following similar trends.

monounity@lemmy.world on 13 Dec 21:19 collapse

It’s bad over there, isn’t it? In your opinion, are LLMs causing the downward trend in the job market?

Ledivin@lemmy.world on 13 Dec 21:23 collapse

Depends what you mean. Hiring at entry-levels has absolutely stalled, but I’ve been at the same shop for 5-10 years, so I’m mostly insulated. The shops that use AI well and those that don’t are going to be very obvious over the next few years. I’m definitely worried for the next 5-10 years of our careers, our jobs have changed SO much in the past year.

monounity@lemmy.world on 13 Dec 21:38 collapse

Where I live, they keep pushing the retirement age upwards, so I’m looking at working until I die at the ripe age of 79 or something

codeinabox@programming.dev on 13 Dec 20:54 next collapse

Based on my own experience of using Claude for AI coding, and using the Whisper model on my phone for dictation, AI tools can for the most part be very useful. Yet there are nearly always mistakes, even if they’re quite minor at times, which is why I am sceptical of AI taking my job.

Perhaps the biggest reason AI won’t take my job is that it has no accountability. For example, if an AI coding tool introduces a major bug into the codebase, I doubt you’d be able to hold OpenAI or Anthropic accountable. However, if you have a human developer supervising it, that person is very much accountable. This is something that Cory Doctorow talks about in his reverse-centaur article.

“And if the AI misses a tumor, this will be the human radiologist’s fault, because they are the ‘human in the loop.’ It’s their signature on the diagnosis.”

This is a reverse centaur, and it’s a specific kind of reverse-centaur: it’s what Dan Davies calls an “accountability sink.” The radiologist’s job isn’t really to oversee the AI’s work, it’s to take the blame for the AI’s mistakes.

count_dongulus@lemmy.world on 13 Dec 21:09 next collapse

Oh you mean like how AI for driving changed cars so nobody drives themselves any more?

abcdqfr@lemmy.world on 13 Dec 21:23 next collapse

AI/LLM for coding assistance is the shit for grunt work and beyond. It is excellent at summaries, information retrieval, etc. Logic and reasoning still have strides to make. “Generative AI” for images, video, and voice work? There are plenty of valid reasons to hate AI there. Not enough people differentiate the two; it’s easier to blanket the whole thing and just bandwagon on “AI bad”.

rozodru@pie.andmc.ca on 13 Dec 21:35 next collapse

Since I deal with this first hand with clients, I will tell you it doesn’t have to be good to be embraced. As far as the managers and CEOs are concerned, they simply don’t know any better. LLMs with vibe coders CAN and routinely DO produce something now. Whether that something is good and actually works is another thing, and in most cases it doesn’t work in the long term.

Managers and up only see the short term, and in the short term vibe coding and LLMs work. In the long term they don’t: they break, they don’t scale, they’re full of exploits. But the short term? Saving money in the short term? That’s all they care about right now, until they don’t.

AnitaAmandaHuginskis@lemmy.world on 13 Dec 21:36 next collapse

The key is how you use LLMs and which LLMs you use for what.

If you know how to make use of them properly, and know their strengths, weaknesses, and limitations, LLMs are an incredibly useful tool that sucks up productivity from other people (and their jobs) and focuses it on you, so to speak.

If you do not know how to make use of them – then yes, they suck. For you.

It’s not really that much different from any other tool. Know how to use version control? If not, that doesn’t make you a bad dev per se. If yes, it probably makes you a bit more organized.

Same with IDEs, using search engines, being able to read documentation properly. None of that is required, but knowing how to make use of such tools adds up.

Same with LLMs.

Flamekebab@piefed.social on 13 Dec 21:40 next collapse

I’m perplexed as to why there’s so much advertising and pushing for AI. If it was so good it would sell itself. Instead it’s just sort of a bit shit. Not completely useless but in need of babysitting.

If I ask it to do something there’s about a 30% chance that it made up the method/specifics of an API call based on lots of other similar things. No, .toxml() doesn’t exist for this object. No, I know that .toXml() exists but it works differently from other libraries.

I can make it just about muddle through, but mostly I find it handy for time-intensive grunt work (convert this variable to the format used by another language, add another argparser argument for the function’s new argument, etc.).

It’s just a bit naff. It cannot be relied on to deliver consistent results and if a computer can’t be consistent then what bloody good is it?

monounity@lemmy.world on 13 Dec 22:10 collapse

I do wonder why so many devs seem to have such wildly different experiences. You seem to have LLMs making stuff up as they go, while I’m over here having them create mostly flawless code over and over again.

Is it different behavior for different languages? Is it different models, different tooling, etc.?

I’m using it for C#, React (Native), Vue, etc., and I’m using the web interface of one of the major LLMs to ask questions, pasting the code of interfaces, sometimes whole React hooks, components, etc., and I get refactored or even new components back.

I also paste whole classes or functions (anonymized) to get them unit tested, roughly the kind of round trip sketched below. Could you elaborate on how you’re using LLMs?
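
(Purely illustrative, not actual project code: a hedged TypeScript sketch of that round trip, with made-up names. A small anonymized utility goes in, and a Jest/Vitest-style test file comes back, which still gets reviewed line by line.)

```typescript
// chunk.ts - hypothetical anonymized utility pasted into the LLM
export function chunk<T>(items: T[], size: number): T[][] {
  if (size <= 0) throw new Error("size must be positive");
  const result: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    result.push(items.slice(i, i + size));
  }
  return result;
}
```

```typescript
// chunk.test.ts - the kind of test scaffold that comes back (Jest/Vitest style)
import { chunk } from "./chunk";

describe("chunk", () => {
  it("splits an array into evenly sized groups", () => {
    expect(chunk([1, 2, 3, 4], 2)).toEqual([[1, 2], [3, 4]]);
  });

  it("keeps the remainder in the last group", () => {
    expect(chunk([1, 2, 3], 2)).toEqual([[1, 2], [3]]);
  });

  it("rejects a non-positive size", () => {
    expect(() => chunk([1], 0)).toThrow();
  });
});
```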

Flamekebab@piefed.social on 13 Dec 22:15 next collapse

I really don’t feel like getting in depth about work on the weekend, sorry.

MoogleMaestro@lemmy.zip on 13 Dec 22:21 next collapse

Yeah man, I was going to say there’s already more talk about work on a Saturday in this thread than I’d like. 💢

monounity@lemmy.world on 13 Dec 22:54 collapse

Naaw, just when things started to get interesting…

Flamekebab@piefed.social on 13 Dec 23:41 collapse

We’re in the middle of a release and last week was a lot. I shouldn’t have stepped into the thread!

thedeadwalking4242@lemmy.world on 13 Dec 22:31 next collapse

It’s the models that make the difference. Up until like Nov it’s all been really shit

monounity@lemmy.world on 13 Dec 23:51 collapse

But I’ve been doing this for years.

FizzyOrange@programming.dev on 13 Dec 22:37 collapse

It’s the language and the domain. They work pretty well for the web and major languages (like top 15).

As soon as you get away from that they get drastically worse.

But I agree they’re still unambiguously useful despite their occasional-to-regular bullshitting and mistakes. Especially for one-off scripts, and blank-page starts.

bluemoon@piefed.social on 13 Dec 21:43 next collapse

that’s you doing exactly the thing described in the ‘reverse centaur’ stage talk?

BootLoop@sh.itjust.works on 13 Dec 21:44 next collapse

It’s because the “AI bad” crowd on Lemmy are dumb and annoying, but they are vocal. AI is an incredibly useful tool. As a senior dev I also use the shit out of LLMs to speed up my work. They do not hallucinate, at least during coding sessions. And I believe they will take junior level front end developers out of the market. Senior developers, who know how to review the code that is generated, will be safe.

hubobes@sh.itjust.works on 13 Dec 22:37 next collapse

I think hallucination happens most often if context or knowledge is missing. I have seen coding assistants write code that made no sense; I then helped them get back on the right path by providing context.

I also extensively use AI to code in a similar way to you (tbf I am, to this day, not sure if I am actually faster or how it affects my ability to code).

Overall I think the answer is somewhere in the middle, they hallucinate and need some help when they do. But with proper context they work quite well.

dangling_cat@piefed.blahaj.zone on 13 Dec 22:45 next collapse

Both are true.
1. Yes, they hallucinate. For coding, especially when they don’t have the latest documentation, they just invent APIs and methods that don’t exist.
2. They also take jobs. They pretty much eliminate entry-level programmers (making the same mistakes while being cheaper and faster).
3. AI-generated code bases are not maintainable in the long run. They don’t reliably reuse methods and only fix surface bugs, not fundamental problems, causing codebase bloat and, as we all know, more code == more bugs.
4. Management uses Claude Code for their small projects and is convinced that it can replace all programmers for all projects, which is a bias they don’t recognize.

Is it a bubble? Yes. Is it a fluke? Welllllllll, not entirely. It does increase productivity, given enough training, learning its advantages and limitations.

monounity@lemmy.world on 13 Dec 23:10 next collapse

I don’t think we’re using LLMs in the same way?

As I’ve stated several times elsewhere in this thread, I more often than not get excellent results, with little to no hallucinations. As a matter of fact, I can’t even remember the last time it happened when programming.

Also, the way I work, no one could ever tell that I used an LLM to create the code.

That leaves your point #4, and what the fuck? Why does upper management always seem to be so utterly incompetent and without a clue when it comes to tech? LLMs are tools, not a complete solution.

Feyd@programming.dev on 13 Dec 23:10 collapse

It does increase productivity, given enough training, learning its advantages and limitations.

People keep saying this based on gut feeling, but the only study I’ve seen showed that even experienced devs that thought they were faster were actually slower.

monounity@lemmy.world on 13 Dec 23:22 collapse

Slower?

Is getting a whole C# class unit tested in minutes slower than setting up all the scaffolding, test data, etc. by hand, possibly taking hours?

Is getting a React hook, with unit tests, in minutes slower than looking up docs, hunting on Stack Overflow, etc. and slowly creating the code by hand over several hours? (Think something like the sketch below.)
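
(Again purely illustrative, with hypothetical names: a tiny React hook and the kind of React Testing Library test an LLM can scaffold in minutes.)

```typescript
// useCounter.ts - hypothetical tiny hook
import { useCallback, useState } from "react";

export function useCounter(initial = 0) {
  const [count, setCount] = useState(initial);
  const increment = useCallback(() => setCount((c) => c + 1), []);
  return { count, increment };
}
```

```typescript
// useCounter.test.ts - the kind of generated test (React Testing Library + Jest/Vitest)
import { renderHook, act } from "@testing-library/react";
import { useCounter } from "./useCounter";

describe("useCounter", () => {
  it("starts at the initial value and increments", () => {
    const { result } = renderHook(() => useCounter(5));
    act(() => {
      result.current.increment();
    });
    expect(result.current.count).toBe(6);
  });
});
```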

Are you a dev yourself, and in that case, what’s your experience using LLMs?

Feyd@programming.dev on 13 Dec 23:43 collapse

I find it interesting that all these low-participation/new accounts have come out of the woodwork to pump up AI in the last 2 weeks. I’m so sick of having this slop clogging up my feed. You’re literally saying that your vibes are more important than actual data, just like all the others. I’m sorry, but they’re not.

My experience, btw, is that LLMs produce hot garbage that takes longer to fix than if I wrote it myself, and all the people who say “but it writes my unit tests for me!” are submitting garbage unit tests that often don’t even exercise the code and are needlessly difficult to maintain. I happen to think tests are just as important as production code, so it upsets me.

The biggest thing that the meteoric rise of developers using LLMs has done for me is confirm just how many people in this field are fucking terrible at their jobs.

monounity@lemmy.world on 14 Dec 00:17 collapse

Have you read anything I’ve written on how I use LLMs? Hot garbage? When’s the last time you actually used one?

Here are some studies to counter your vibes argument.

55.8% faster: arxiv.org/abs/2302.06590

These ones indicate positive effects: arxiv.org/abs/2410.12944 arxiv.org/abs/2509.19708

m532@lemmy.ml on 13 Dec 23:14 next collapse

It’s the typical “the enemy is both weak and strong” contradiction, which Nazi propaganda often ran into, since their ideology was unscientific and illogical.

PoY@lemmygrad.ml on 13 Dec 23:15 collapse

Both are true.