Are there programmers that still don't use AI?
from CodenameDarlen@lemmy.world to programming@programming.dev on 25 Feb 19:00
https://lemmy.world/post/43573066

Four months ago I asked if and how people used AI here in this community (lemmy.world/post/37760851).

Many people said they didn’t use it, or only used it a few times for consulting.

But in those 4 months AIs have evolved a lot, so I wonder: are there people who still don’t use AI daily for programming?

#programming

Flaqueman@sh.itjust.works on 25 Feb 19:08 next collapse

I use it to get examples of how to use certain functions if I’m not clear how to use them. I don’t use AI to write code.

aev_software@programming.dev on 25 Feb 21:11 collapse

That’s where I am, too. I let mine find example code of how some feature should be used. And it’s really sad how often they make something up out of thin air, and how little regret they show for wasting my time.

eclipse7@feddit.nu on 25 Feb 19:11 next collapse

Web dev here, never used it :) I like to think for myself

PonyOfWar@pawb.social on 25 Feb 19:14 next collapse

But in those 4 months AIs evolved a lot

Has it really? I don’t feel like it’s much different for programming compared to 4 months ago.

CodenameDarlen@lemmy.world on 25 Feb 19:20 collapse

Yes, it has evolved almost exponentially in these 4 months. It’s just bizarre what recent models can do and how consistently they do it.

If you’ve never tried it, of course you won’t know the difference. But those who have tried it will surely have seen a huge improvement.

PonyOfWar@pawb.social on 25 Feb 20:04 collapse

It’s not that I’ve never tried it, I’ve dabbled in it consistently over the last few years. If you had said there was a major difference compared to 2 years or maybe even a year ago, sure. In the last 4 months, I guess we’ve gotten stuff like Claude 4.6, which saw an increase in coding performance by 2.5% according to SWE benchmarks. An improvement, sure, but certainly not an exponential one and not one which will fix the fundamental weaknesses of AI coding. Maybe I’m out of the loop though, so I’m curious, what are those exponential improvements you’ve seen over the last 4 months? Any concrete models or tools?

CodenameDarlen@lemmy.world on 25 Feb 20:27 collapse

I decided to try Qwen 3.5 Plus via Qwen Code CLI (Gemini CLI fork) and it’s bizarre what it can do.

It can figure out when it’s struggling with something and search the internet for questions and docs to understand things better. It takes a lot of actions by itself, unlike those bad models from 4 months ago that got stuck in endless thinking and tweaking and never fixed anything.

Recent models think more and more like human programmers.

Staden@piefed.blahaj.zone on 25 Feb 19:16 next collapse

Programmers working with obscure languages. LLMs produce broken code and hallucinate a lot more in those cases. Also, if you’ve already mastered the basics of your language and can quickly find usage examples for unknown functions/methods, LLMs become mostly useless.
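For what it’s worth, the quick lookup described here often needs no LLM at all. In Python, for instance, the standard `inspect` module surfaces a function’s real signature and docstring (a generic illustration, not tied to anything in the thread):

```python
import inspect
import textwrap

# Look up the real signature and first docstring line of an
# unfamiliar function, straight from the installed library.
sig = inspect.signature(textwrap.dedent)
doc = inspect.getdoc(textwrap.dedent)

print(f"textwrap.dedent{sig}")
print(doc.splitlines()[0])
```

Unlike an LLM’s recollection, this can’t name a function that doesn’t exist.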

kindnesskills@literature.cafe on 25 Feb 19:24 next collapse

Of course.

My reasons for not using AI are the same as they were four months ago and will be the same in four months, regardless of what the models can or can’t do.

Ask again in four years.

CodenameDarlen@lemmy.world on 25 Feb 19:29 collapse

What are your reasons?

Doesn’t the place you work force you to use it?

I’ve been noticing companies everywhere are forcing devs to use AI to be more productive, even for simple things like writing git commit messages.

kindnesskills@literature.cafe on 25 Feb 19:50 collapse

I noticed how quickly my own skills started deteriorating when trying to work with it. I’m trying to build my skills, not outsource them.

I also don’t love the environmental impact, nor the immorality of how they got/get their training sets for the base models.

If my work tried to force me to use it, I would be looking to change employers. Or lie and say I use it. But our AI use is heavily regulated and generally discouraged, so luckily no issues there.

CodenameDarlen@lemmy.world on 25 Feb 20:06 collapse

I don’t think your code being used for training is a concern anymore. They’ll keep finding new code until AI reaches its peak. Refusing to share your code for training will just postpone the inevitable: AI code will improve to its peak sooner or later.

kindnesskills@literature.cafe on 25 Feb 20:27 collapse

You replied to only one of my points, and that’s not even what I said…

They train new models on base models, and I’m talking about how they scraped the internet without permission, how websites sold their users’ data without compensation, and how no one was ever given any opportunity to opt out of having their work and their words used to train these base models.

Without that grand scale theft we would have no base models anywhere near what we have now.

I’m not opposed to willingly sharing, I’m opposed to profiting from stealing.

CodenameDarlen@lemmy.world on 25 Feb 20:34 collapse

Your mistake is to think that I want to prove something. I don’t need to address all your points; this is just a comment, not a scientific discussion.

skip0110@lemmy.zip on 25 Feb 19:26 next collapse

I have consistently been trying to use AI for the actual tasks I need to complete for work over the last year or so (we are gently encouraged to try it, but thankfully not forced). I have found it to be “successful” at maybe 1 in 10 tasks I give it. Even when successful, the code quality is so low I edit heavily before it’s pushed and attributed to me.

I think the problem I have is I rarely work on boilerplate stuff.

Piatro@programming.dev on 25 Feb 19:31 next collapse

AI has evolved a lot, but the regulatory environment hasn’t in some industries like health where this shit can’t go anywhere near it. Of course there are people who haven’t used it.

emb@lemmy.world on 25 Feb 19:33 next collapse

I’ve come around on it somewhat at work. Recent models really are getting pretty impressive. It’s at the point where I can tell it to read a Jira ticket and implement it, and for simple ones it basically just does it. I’m not sure it’s worth the massive environmental and infrastructure detriments (or rather, I’m pretty sure it’s not), but it’s definitely a productivity boost.

It’s also creating cognitive debt, though: every change it makes for me automagically is one I don’t have to think through and ‘earn’ myself. You could argue the AI compensates for that by explaining the code to you, but I think it will lead to some bad results in the mid to long term.

For any personal programming, I don’t and wouldn’t use it, beyond maybe replacing Google searches. It defeats the fun of it, and costs money on top of that.

RalfWausE@feddit.org on 25 Feb 19:35 next collapse

Yeah, here… I am a jack of all trades, general purpose computer guy at a small company which also includes development. Haven’t and will never use that abomination.

hesh@quokk.au on 25 Feb 19:37 next collapse

I don’t use AI for programming ever

BentiGorlich@gehirneimer.de on 25 Feb 19:57 next collapse

My boss wants me to use it, but I never found a good use case for it. And with how things are turning out, it’s just a no-go for me. I may have to use it in my job someday (webdev), but for now I’m good without it.

rustyfemboy@lemmy.blahaj.zone on 25 Feb 20:12 next collapse

Nope, still not using AI and hopefully never will. Writing code is actually pretty fun, so why would I want to outsource that to an AI?

thedeadwalking4242@lemmy.world on 25 Feb 20:14 next collapse

I use it at work because my colleagues all use it, so it’s the only way I can deal with the LLM slop without totally killing myself. And it’s still horrendously bad. I fucking hate it. It makes the worst fucking decisions.

I’m considering a career change honestly. I can’t stand this shit anymore.

CodenameDarlen@lemmy.world on 25 Feb 20:18 collapse

which model do you use?

thedeadwalking4242@lemmy.world on 25 Feb 20:20 collapse

Opus 4.6, Gemini 3 Pro.

Name ’em, I’ve tried ’em. It’s all so subpar for anything beyond a one-off script.

People have the impression they should be outputting 10x with these tools, so they abuse them into doing more “thinking” than they should.

Edit: it’s fucked the work culture even more than it already was when it comes to what defines software quality and expertise

shrek_is_love@lemmy.ml on 25 Feb 20:15 next collapse

I’m a programmer who’s never used AI. Instead of spending my time correcting and confirming the correctness of generated code, I’d rather figure it out for myself. I think this helps me reinforce my skills and understanding in the long term. Plus it’s one less thing to rely on/maintain/pay for.

mrmaplebar@fedia.io on 25 Feb 20:16 next collapse

Never used it. Don't see any reason to. I just type stuff in my IDE. Works like a charm.

Most of the time I'm not writing large volumes of boilerplate code or anything; I'm making precise changes to solve specific problems. I doubt there's any LLM that can do that more effectively than a programmer with real knowledge of the code base and application domain.

I also work on open source software and we haven't seen a meaningful uptick in good contributions due to AI over the last few years. So if there's some mythical productivity increase happening, I'm just not seeing it.

Skyline969@piefed.ca on 25 Feb 20:32 next collapse

Never have, never will. I’ll quit the industry before I use AI to do my job.

TheAgeOfSuperboredom@lemmy.ca on 25 Feb 20:36 next collapse

I recently had to tell one of my juniors to turn off his AI tools. His code was just all over the place and difficult to review. He still has a lot to learn, but I’ve already seen an improvement now that he actually has to be a bit thoughtful.

HetareKing@piefed.social on 25 Feb 20:52 next collapse

I don’t, and probably never will. A whole bunch of reasons:

  • The current state of affairs isn’t going to last forever; at some point the fact that nobody’s making money with this is going to catch up with them, a lot of companies providing these services are going to disappear, and what remains will become prohibitively expensive, so it’s foolish to risk becoming dependent on them.
  • If I had to explain things in natural language all the time, I would become useless for the day before lunch. I’m a programmer, not a consultant.
  • I think even the IntelliSense in recent versions of Visual Studio is sometimes too smart for its own good, making careless mistakes more likely. AI would turn that up to 11.
  • I have little confidence that people, including myself, would actually review the generated code as thoroughly as they should.
  • Maintaining other people’s code takes a lot more effort than code you wrote yourself. It’s inevitable that you end up having to maintain something someone else wrote, but why would you want all the code you maintain to be that?
  • The use-cases that people generally agree upon AI is good at, like boilerplate and setting up projects, are all things that can be done quickly without relying on an inherently unreliable system.
  • Programming is entirely too fun to leave to computers. Besides, most of your time isn’t even spent writing code; I don’t really get the psychology of denying yourself the catharsis of writing the code yourself after coming up with a solution.

qupada@fedia.io on 25 Feb 22:00 collapse

You wrote this all a lot better than I could have, but to expand on your second point: I have no desire whatsoever to have a "conversation" (nay, argument) with a machine to try and convince/coerce/deceive/brow-beat (delete as appropriate) it into maybe doing what I wanted.

I don't want to deal with this grotesque "tee hee, oopsie" personality that every company seems to have bestowed on these awful things when things go awry, I don't want its "suggestions". I code, computer does. End of transaction.

People can call me a luddite at this point and I'll wear that badge with pride. I'll still be here, understanding my data and processes and writing code to work with them, long after (as you say) you've been priced out of these tools.

87Six@lemmy.zip on 25 Feb 21:18 next collapse

I do and I am ashamed of it

Michia66@programming.dev on 25 Feb 21:35 collapse

At least you’re honest about it, you dirty slopper

87Six@lemmy.zip on 25 Feb 21:39 collapse

Very mature of you

Michia66@programming.dev on 25 Feb 21:40 collapse

I’m not the one using an AI to post and code for himself

litchralee@sh.itjust.works on 25 Feb 21:29 next collapse

Having spent much of my software engineering career training and mentoring interns, new-hires, and transfers from other departments, and having toiled with some of their truly inexplicable questions that reveal shaky technical foundations, I can understand why so-called AI would be appealing: inexhaustible, while commanding the full battery of information stores that I could throw at it.

And yet, the reason I don’t use AI is precisely because those very interns, new-hires, and transfers invariably become first-class engineers that I have no problem referring to as my equals. It is my observation that I’ve become better at training these folks up with every passing year, and that means that if I were to instead spend my time using AI, I would lose out on even more talented soon-to-be colleagues.

I have only so much time of my mortal coil remaining, and if the dichotomy is between utilizing inordinate energy, memory, and compute for AI, or sharing my knowledge and skills to even just 2 people per year for the rest of my career, I’ll happily choose the latter. In both circumstances, I will never own the product of their labor, and I don’t really care to. What matters to me is that value is being created, and I know there is value in bringing up new software engineers into this field. Whereas the value of AI pales in comparison, if it’s even a positive value at all.

If nothing else, the advent of AI has caused me to redouble my efforts, to level-up more engineers to the best of my ability. It is a human legacy that I can contribute to, and I intend to.

Michia66@programming.dev on 25 Feb 21:34 next collapse

Everyone uses AI whether they admit to it or not. Anyone claiming not to use AI is lying

pHr34kY@lemmy.world on 25 Feb 21:39 next collapse

My team just recently started using copilot for PR reviews.

So far I’ve found that 90% of what it raises is incorrect. For the stuff it actually finds, the code suggestion to fix it is almost always wrong. It will write 20 lines of code for something that’s a one-line fix.

It picked up on one reentry bug on a recursive function that I don’t think another dev would have spotted.

It’s definitely slowing me down. I’d heard about AI wasting devs’ time with bogus bug reports; now it’s integrated into my workflow.

I’ve already got SonarQube and linters, which find issues the moment I introduce them. They do a much better job of maintaining code quality.

rimu@piefed.social on 25 Feb 21:42 next collapse

Please, continue to “use AI daily”. Rot your brain, see if I care.

If my competitors want to shoot themselves in the foot that’s fine by me, I won’t stop them.

lichtmetzger@discuss.tchncs.de on 25 Feb 21:50 next collapse

I have stopped using it, because the skill atrophy kicked in and I don’t want to turn into someone chatting with a bot every day.

To quote myself:

I work as a software developer and over the last months, I slipped into a habit of letting ChatGPT write more and more code for me. It’s just so easy to do! Write a function here, do some documentation there, do all of the boilerplate for me, set up some pre-commit hooks, …

Two weeks ago I deleted my OpenAI account and forced myself to write all code without LLMs, just as I did before. Because there is one very real problem with excessive AI usage in software development: skill atrophy.

I was actively losing knowledge. Sometimes I had to look up the easiest things (like built-in JavaScript functions) that I was definitely able to use off the top of my head just a year ago. I turned from being an actual developer into someone chatting with a machine. I slowly lost the fun in coding, because I outsourced the problem-solving aspects that gave me a dopamine boost to the AI. I basically became a glorified copy-paster.

Jhex@lemmy.world on 25 Feb 21:51 next collapse

I don’t use it… every time I’ve tried, it seemed like more trouble than help.

It’s like working with a suck-up newbie that can’t learn.

witness_me@lemmy.ml on 25 Feb 22:26 next collapse

I don’t. But my boss is pressuring me to do so because his boss and above are asking for all engineers to use it. The most I use it for is the occasional bash scripting semantics. I prefer not to use it in my daily work.

piccolo@sh.itjust.works on 25 Feb 22:46 next collapse

I use Copilot a lot to create PowerShell scripts. I don't really care about quality since they're short and often one-off cases. I also use it as another set of eyes to troubleshoot code. It's useful, but I wouldn't use it to vibe-code a project.

xxce2AAb@feddit.dk on 25 Feb 22:54 next collapse

Yeah. I prefer not externalizing my ability to think.

Tharkys@lemmy.wtf on 25 Feb 22:57 next collapse

I have tried on multiple occasions. Unfortunately, it never seems to give me the answers I want. It’s usually more time-efficient to figure it out myself. Maybe I’m too old, but I don’t think AI is anywhere near being genuinely useful.

sudoMakeUser@sh.itjust.works on 25 Feb 23:03 next collapse

I use it daily for my software development job.

Avicenna@programming.dev on 25 Feb 23:13 next collapse

More like a manual. Google has become really shitty for complex queries; LLMs can find relevant keywords and documents much more reliably. Granted, if you ask questions about niche libraries it hallucinates functions quite often, so I never ask it to write full pieces of code; I just use it more like a stepping stone.

I find it amusing how shamelessly it lies about its hallucinations, though. When I point out that a certain function it made up does not exist, the answer is always something of the form “Sorry, you are right, that function existed before version X” or “that function existed in some of the online documentation”, etc., lol. It is like a halluception. If you ask it to find some links to these old versions or documentation, those also somehow don’t exist anymore.
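One cheap guard against this failure mode is to check a suggested name against the actual module before using it. A minimal Python sketch (the helper and the example names are just illustrations, not from the thread):

```python
import importlib


def function_exists(module_name: str, func_name: str) -> bool:
    """Return True if func_name is a real callable in module_name.

    Handy as a sanity check on function names an LLM suggests.
    """
    try:
        module = importlib.import_module(module_name)
    except ImportError:
        return False
    return callable(getattr(module, func_name, None))


print(function_exists("json", "loads"))       # True: real function
print(function_exists("json", "parse_safe"))  # False: plausible-sounding invention
```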

thoughtfuldragon@lemmy.blahaj.zone on 25 Feb 23:14 next collapse

I don’t, it’s not better than simply thinking about things myself. There isn’t institutional pressure to use it and if there was I would simply lie and not use it.

lmr0x61@lemmy.ml on 25 Feb 23:21 next collapse

Never used it to write my code. Others have given great reasons, which resonate with me, but the biggest one for me is that I enjoy writing code and designing programs. Why would I outsource one of the things I love to do? It’s really that simple for me.

towerful@programming.dev on 25 Feb 23:25 next collapse

I use it for scripts or for esoteric error messages or problems I’m having in my dev environment.
I can’t be bothered to understand a specific error message that I’ve never seen before because of an update or whatever.
So getting it to explain errors to me is handy.
I always review the LLM’s process and the resulting changes it suggests (including searching for what it’s done if I don’t get it).
It’s essentially a context-aware search engine.

Actual coding and problem solving? I enjoy that.

tobz619@lemmy.world on 25 Feb 23:56 next collapse

I only use it when I’m learning something very new and very dense, and then only to stand up an example based on a context I’m interested in or already familiar with.

It helps me identify parts of the docs to focus on more quickly.

Otherwise no, I’m getting better without the tools.

thenextguy@lemmy.world on 26 Feb 00:59 next collapse

I am simply not interested. I enjoy writing code. Writing prompts is another task entirely.

I imagine that one day I might ask an AI to teach me how something works, but not to write the code for me. Today, I sometimes have to slog through poorly written documentation or off-topic Stack Exchange posts to figure something out. It might be easier to use an LLM for that, I guess.

I imagine that if I only cared about getting something working as fast as possible I might use one some day.

But today is not that day.

MrQuallzin@lemmy.world on 26 Feb 01:12 next collapse

I use it as an overconfident rubber duck to bounce ideas and solutions off of in my code, but I don’t let it write for me. I don’t want the skills I’ve practiced to atrophy.

dmajorduckie@lemmy.blahaj.zone on 26 Feb 01:17 next collapse

I don’t and never will. I’m one of the only people at my ~450 person company that doesn’t use an LLM.

entwine@programming.dev on 26 Feb 01:45 next collapse

I don’t. Personally, I don’t believe that AI-assisted coding has a future, and I don’t like the quality of what it produces. Either we achieve AGI or whatever the finance bros are hyping up this week, or we don’t. If we do, then AI can write code without a human in the loop, so “vibe coding” is dead. If we don’t, then AI code stays where it’s at right now, which is garbage quality. After a few years of vibe-coding disasters, demand for human coding will increase, and my skills will be even more valuable than before all this craziness. In that case, letting those skills atrophy today would be the wrong move.

And if I’m wrong? Well, if AI code generation makes it so anyone who doesn’t know how to code can do it… then surely I’ll be able to figure it out? My existing skills wouldn’t hurt.

darklamer@feddit.org on 26 Feb 02:25 collapse

The great Prof. Edsger Dijkstra explained far better than I ever could why trying to program computers using a natural language is a truly poor idea, in his now-classic essay from 1978, On the foolishness of “natural language programming”:

www.cs.utexas.edu/~EWD/…/EWD667.html