Hardware Stockholm Syndrome (programmingsimplicity.substack.com)
from floofloof@lemmy.ca to programming@programming.dev on 07 Oct 05:13
https://lemmy.ca/post/52947130

cross-posted from: lemmy.bestiver.se/post/661505

Comments

#programming


webghost0101@sopuli.xyz on 07 Oct 06:59 next collapse

Sorry, but even if this was written by a human, AI has ruined this kind of sentence for me:

“This is Hardware Stockholm Syndrome: we optimized the hardware for C, then pointed at the hardware and said “see, C is efficient!” We forgot we made it that way.”

Also, so-many-dashes. Even if the “human author” fact-checked all the details, it reads like slop and I can’t get through it.

azolus@slrpnk.net on 07 Oct 08:24 next collapse

I hate that people now associate dashes with ai as I used to really like using them (distinguishing between regular, en- and em-dashes, using them to make structure clearer over regular commas).

HelloRoot@lemy.lol on 07 Oct 08:48 next collapse

keep doing it but add some fitting humour and a typo here and therw - and nobody will think you’re ai

DON’T LET EM TAKE AWAY OUR DASHES

webghost0101@sopuli.xyz on 07 Oct 11:17 next collapse

Don’t get me wrong, I like using dashes too.

I actually have a contract to sign which requires an em dash that changes the interpretation drastically. I am struggling to get the author to realise its importance because I received 3 updated versions that did not include it. And this is after I replied with a self-fixed document the first time.

But this article really does use them a lot, and that’s not the only tell, just a more obvious one.

I don’t know how accurate ZeroGPT is, but I gave it the full text and it returned 100% AI-written, not even a mix.

frezik@lemmy.blahaj.zone on 07 Oct 12:26 collapse

Same. I’ve actively avoided using them because LLMs ruined it.

bitcrafter@programming.dev on 07 Oct 13:07 next collapse

Personally, I like the idea of being thought of as a computer, as it suits my personality.

floofloof@lemmy.ca on 07 Oct 14:32 collapse

I keep using them, but the prevalence of AI style has made me more alert to when I’m overusing them.

BananaIsABerry@lemmy.zip on 07 Oct 11:56 next collapse

You really gotta stop ruining things for yourself by caring so much about something so inconsequential.

webghost0101@sopuli.xyz on 07 Oct 12:37 collapse

A piece of writing being thoughtfully put together is far from inconsequential for me.

I use a premium-tier AI myself and am not against using it for assistance; it can craft a decent snippet (that still needs multiple manual edits), but not at all a full, coherent text that reads efficiently.

It applies structure without understanding the goal of the text, resulting in a paragraph salad.

It simultaneously treats the reader like a toddler with oversimplified metaphors while also overcomplicating things for no other apparent reason than filling a word quota.

The above article is twice the length it needs to be. It’s lazy, lacks actual understanding, and feels “sloppy” in the original meaning of the word.

Having read more of the text, I feel my original comment was way too forgiving. Even the opener does not make sense if you try to digest it. Silicon. It even includes misinformation: stack management existed before C was a thing.

BananaIsABerry@lemmy.zip on 07 Oct 13:05 collapse

All I’m saying is that the suspected use of AI shouldn’t be the reason you don’t like it. Instead, dislike it because of all the points you made about the article.

I think it’s safe to argue that most news articles are not thoughtfully put together, regardless of the use of AI. Bad news articles existed before AI and will continue to exist long after.

webghost0101@sopuli.xyz on 07 Oct 13:22 collapse

That’s fair.

Ironically, I probably could have worded my criticism better myself.

It’s not because it’s AI that I don’t like it, but rather because it has all the sloppy patterns I’ve started to recognize as prevalent in AI.

Some of those become increasingly jarring, but only because I pay a subjective amount of attention to them. Bad human writers have an advantage in that their bad writing structure is still unique to their own writing.

Vincent@feddit.nl on 07 Oct 14:16 collapse

Wait, what’s wrong (or AI) about that quoted sentence?

webghost0101@sopuli.xyz on 07 Oct 17:26 collapse

The patterns are “we have this, we did this, and now we believe this thing that is wrong” and “says what it is not, then says what it is instead”.

Sometimes it’s a combination of the two.

In principle there is nothing wrong with such sentences, but LLMs tend to heavily overuse them, and it becomes very formulaic.

Ask an LLM to explain any concept and you are bound to find examples of it. Tell it how it made a logical error and you’re almost guaranteed to see an example of it.

Over the entire text I count about 10 variations of that pattern.

verstra@programming.dev on 07 Oct 16:47 collapse

Ok, good point: most languages I know use the “C-style sequential function-calling” paradigm. Is there a specific idea you have for a language that would better utilize our CPUs?

Notation that treats asynchronous message-passing as fundamental rather than exceptional.

I’m pretty sure there exists at least one research paper about notation for the actor pattern.

You explain pretty well why you don’t think C is a good fit for the hardware we have today, but that warrants a proposal for something better, because I for sure don’t want to debug programs where everything happens in parallel (as it does in pong).
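
For readers unfamiliar with the actor pattern mentioned above, here is a minimal illustrative sketch, not from the article, with hypothetical names (Python chosen for brevity; real actor systems such as Erlang or Akka add addressing, supervision, and scheduling). Each actor owns its state privately and interacts with the outside world only through an asynchronous mailbox, which is the “message-passing as fundamental” idea in miniature:

```python
import queue
import threading

class CounterActor:
    """A tiny actor: it owns its state and communicates only via messages."""

    def __init__(self):
        self.mailbox = queue.Queue()   # incoming asynchronous messages
        self.replies = queue.Queue()   # channel for replies to "get"
        self.count = 0                 # private state; never touched from outside
        threading.Thread(target=self._run, daemon=True).start()

    def _run(self):
        # The actor's event loop: process one message at a time, in order.
        while True:
            msg = self.mailbox.get()
            if msg == "stop":
                break
            elif msg == "incr":
                self.count += 1
            elif msg == "get":
                self.replies.put(self.count)

    def send(self, msg):
        # Fire-and-forget: the sender does not block on the actor's work.
        self.mailbox.put(msg)

actor = CounterActor()
for _ in range(3):
    actor.send("incr")
actor.send("get")
print(actor.replies.get())  # -> 3, since the mailbox preserves message order
actor.send("stop")
```

Because the mailbox is a FIFO queue, the “get” is guaranteed to be processed after the three “incr” messages even though the sender never waits; concurrency bugs are confined to the message protocol rather than shared memory.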