The Grimdark Future that is Generative AI (blog.magosomni.com)
from moto@programming.dev to programming@programming.dev on 19 Feb 01:45
https://programming.dev/post/45989253

Long-time lurker, first-time poster. Don’t know what it is about my job lately, but something spurred this rant.

#programming


footprint@lemmy.world on 19 Feb 03:11

The pitch is: for those who have never coded, it’s a gateway into a skill they never wanted to learn

I’m simply whelmed by it

You never hear someone ask “how can a socket wrench help us with this problem?”, so why do we do it with LLMs?

Spitting bars.

I’m working as a TLM for a company that’s only recently become AI maximalist… I see myself in your writing 🫠

moto@programming.dev on 19 Feb 03:26

Thanks! It’s both reassuring and sad that I’m not the only one going through this at work. Shit’s crazy right now.

Gaetano@programming.dev on 19 Feb 05:38

Thanks for writing it. I’ve needed to read someone who has had similar experiences and observations. This delivered.

andyburke@fedia.io on 19 Feb 06:11

Amen.

Jayjader@jlai.lu on 19 Feb 09:23

So which is it? Are developers 55% more productive, or are they losing 20% of their time to inefficiencies and burning out at record rates?

The answer: executives are measuring—and reporting—what makes their stock price rise, not what’s actually happening on the ground.

Or if you want to get slightly more conspiratorial: the execs are all buying shares in OpenAI, Nvidia, and the like - so now they’re more interested in ordering people to use LLM tools so that these stocks rise in price, even if it means sabotaging their own company.

resipsaloquitur@lemmy.world on 19 Feb 13:25

Aside from all the financial and anti-social considerations, I think AI is “vindication” for all the middle and upper managers who felt engineers were either lazy or precious about their work. “See, I shouted at the computer and it did what I wanted in seconds instead of months! Why can’t you do that, nerds?”

Unfortunately, voicing concerns about quality will only reinforce this dynamic.

And while I agree broadly with what you wrote… AI writing unit tests? God help us all.

P.S. neologism alert: morged.

moto@programming.dev on 19 Feb 14:55

“See, I shouted at the computer and it did what I wanted in seconds instead of months! Why can’t you do that, nerds?”

It also doesn’t say “no” like those nerds keep doing

AI writing unit tests? God help us all.

Haha, like I said in the footnote. If you don’t like it, good! “This is not a good use case, let’s scrap it and move on” is a perfect thing to say here.

It’s the only thing I could think to try it with where I could easily audit the results. It mostly works, but there are a few things it does that cause me to scrap results:

  • It loves to mock dependencies, even idempotent ones that don’t touch any 3rd-party services
  • The verbiage for the test names doesn’t speak to requirements; it’s more like “it works” and often includes the word “should”
  • It sometimes likes to mock the subject under test, which is a huge no-no

Often though I can keep some of it and just scrap the bad parts. And if it causes me problems I’m happy to quit it. It’s not revolutionary. I’m just whelmed.
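Those antipatterns are easy to show concretely. A minimal Python sketch (the function and test names are invented for illustration, not from the thread) of the kind of test that gets scrapped versus the kind that gets kept:

```python
import unittest
from unittest import mock


# Hypothetical code under test: a pure helper with no external dependencies,
# so there is nothing here that legitimately needs mocking.
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent (e.g. 10 for 10% off)."""
    return round(price * (1 - percent / 100), 2)


class TestApplyDiscountScrapped(unittest.TestCase):
    """The shape the LLM tends to emit: vague name, mocks the SUT."""

    @mock.patch(__name__ + ".apply_discount", return_value=90.0)
    def test_it_should_work(self, mock_fn):
        # Mocking the subject under test means we only ever assert on the
        # mock's canned return value, so this "passes" no matter what
        # apply_discount actually does.
        self.assertEqual(apply_discount(100.0, 10), 90.0)


class TestApplyDiscountKept(unittest.TestCase):
    """What survives the audit: real function, requirement-shaped names."""

    def test_ten_percent_off_100_is_90(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

    def test_zero_percent_leaves_price_unchanged(self):
        self.assertEqual(apply_discount(42.5, 0), 42.5)
```

The first class is the audit failure: it exercises the mock, not the code, so a bug in `apply_discount` would never be caught.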

weimaraner_of_doom@piefed.social on 19 Feb 15:59

The average corporate executive has no idea just how much of their organization’s resources are dedicated to “churn.” By “churn” I mean the flurry of activity that provides the illusion of productivity while actually producing very little.

I worked for one company that liked to move dev teams around to different platforms to [theoretically] increase productivity around major feature releases.

Pretty much anyone who has been in software engineering for more than a week knows that reassigning devs to a project that they’re unfamiliar with and only plan to work on for a short time will actually slow development down, not speed it up. To make matters worse, some of these teams had very poor practices. I’m trying to be charitable with my words.

The immediate result was that the number of bugs and defects skyrocketed, dramatically increasing the workload for QA and the core dev team. Management’s response was to pile on a bunch of unnecessary requirements that had to be met before a PR could be merged. This failed to produce a meaningful reduction in defects while bringing development almost to a halt.

One of the directions I gave to the core team was that when they worked a defect ticket, in addition to fixing the defect, they needed to perform a root cause analysis, link the defect ticket in Jira to the ticket that introduced the defect, and write the root cause analysis in the comments.

If anyone from the product team had bothered to review the defect tickets, they would have found that a small number of individuals produced 90% of the defects. Instead, they carried on with the unfortunately common notion that software engineers are just overpaid monkeys and that pretty much anyone can bang out web apps.
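Once the defect tickets carry that root-cause link, the review the product team never did is a one-screen script. A minimal sketch in Python, assuming the tickets have been exported with an `introduced_by` field (the field names and data are invented for illustration):

```python
from collections import Counter

# Hypothetical export of defect tickets, each linked via the root cause
# analysis to the person whose change introduced it.
defects = [
    {"key": "BUG-101", "introduced_by": "dev_a"},
    {"key": "BUG-102", "introduced_by": "dev_a"},
    {"key": "BUG-103", "introduced_by": "dev_b"},
    {"key": "BUG-104", "introduced_by": "dev_a"},
]

# Tally defects per introducing author and report each one's share.
counts = Counter(d["introduced_by"] for d in defects)
total = sum(counts.values())
for author, n in counts.most_common():
    print(f"{author}: {n} defects ({n / total:.0%})")
```

On real data, a tally like this is what would surface the pattern described above: a small number of names accounting for the bulk of the defects.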

The current views on “AI” are often a continuation of that line of thought. The value of institutional knowledge is basically impossible to quantify, so it gets ignored altogether. I honestly don’t know what impact this will all have on the average organization. I do think that if someone has staked their entire organization’s future on an LLM, they’re in for a bad time.