Why is there so much hype around artificial intelligence?
from Kintarian@lemmy.world to nostupidquestions@lemmy.world on 03 Sep 21:55
https://lemmy.world/post/19379331

I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?

#nostupidquestions

OpenStars@discuss.online on 03 Sep 22:01 next collapse

Money. If you paid to use those services, they got what they wanted.

homesweethomeMrL@lemmy.world on 03 Sep 22:10 next collapse

Money.

That’s the entirety of the reason.

Boozilla@lemmy.world on 03 Sep 23:39 collapse

“Line must go up.”

iconic_admin@lemmy.world on 04 Sep 00:57 collapse

Summed up an MBA in four words.

OpenStars@discuss.online on 04 Sep 02:06 collapse

“Be greedy” => there, did it in two:-P

It is so sad that it works, too - no room for nuance, responsibility, or even long-term stability (even for the entire human species, plus all the other mammals on Earth and the many other species we seem ready to take down with us on our way to extinction).

RoidingOldMan@lemmy.world on 03 Sep 22:02 next collapse

Who’s making you use it?

It’s useful for lots of things, but it requires a proofreader.

Kintarian@lemmy.world on 03 Sep 22:12 collapse

I try to do a search in Chrome and Gemini pops up and starts spewing its BS. I go into Messages and try to send a message, and Gemini pops up and asks if I want it to send a message for me. No, I know how to write my own stupid messages. It’s all integrated into Windows 11, it’s integrated into the Bing app. It’s like swatting flies trying to get rid of it.

muntedcrocodile@lemm.ee on 03 Sep 23:46 collapse

Move to Linux and use self-hosted FOSS alternatives. Regain ownership of your digital existence. Stop being a slave to the big tech machine.

Kintarian@lemmy.world on 04 Sep 00:44 collapse

Computer? What could this strange device be? Another toy that helped destroy the elder race of man?

I only have a phone

muntedcrocodile@lemm.ee on 04 Sep 03:09 collapse

Damn, u can rent a VPS and SSH from ur phone?

Kintarian@lemmy.world on 04 Sep 04:03 collapse

A what now with a thingy?

muntedcrocodile@lemm.ee on 04 Sep 05:48 collapse

Ye the flumberboozle

HobbitFoot@thelemmy.club on 03 Sep 22:02 next collapse

The idea is that it can replace a lot of customer-facing positions that are manpower-intensive.

Beyond that, an AI can also act as an intern, assisting with low-complexity tasks, the same way that a lot of Microsoft Office programs have replaced secretaries and junior human calculators.

Kintarian@lemmy.world on 03 Sep 22:11 collapse

I’ve always figured part of it is that businesses don’t like to pay for labor, and they’re hoping that they can use artificial intelligence to get rid of the rest of us so they don’t have to pay us.

Blue_Morpho@lemmy.world on 03 Sep 23:12 collapse

Ignoring AI as hype is like ignoring spreadsheets as hype. “I can do everything with a pocket calculator! I don’t need stupid auto fill!”

AI doesn’t replace people. It can automate and reduce your workload leaving you more time to solve problems.

I’ve used it for one-off scripts. I have friends who have done the same, and another friend who used it to create the boilerplate for a government contract bid that he won (millions in revenue for his company, of which he got tens of thousands as a bonus for engineering sales support).

givesomefucks@lemmy.world on 03 Sep 22:04 next collapse

A dumb person thinks AI is really smart, because they just listen to anyone that answers confidently.

And no matter what, AI is going to give its answer like it’s 100% definitely the truth.

That’s why there’s such a large crossover with AI and crypto, the same people fall for everything.

There’s new supporting evidence for Penrose’s theory that natural intelligence involves just an absolute shit-ton of quantum interactions, because we just found out how the body can create an environment where quantum superposition can not only be achieved, but achieved incredibly simply.

AI got a boost because we didn’t really (still don’t) understand consciousness. Tech bros convinced investors that neurons were what mattered, and made predictions for when that number of neurons could be simulated.

But if it includes billions of molecules in quantum superposition, we’re not getting there in our lifetimes. But there’s a lot of money sunk into it already, so there’s a lot of money to lose if people suddenly get realistic about what it takes to make a real artificial intelligence.

Kintarian@lemmy.world on 03 Sep 22:10 next collapse

So they’re using the sunk cost logical fallacy? Gee that’s intelligent.

givesomefucks@lemmy.world on 03 Sep 22:17 collapse

The finding that microtubules can create an environment that sustains quantum superposition just came out like a month ago.

In all honesty the tech bros probably don’t even know yet, or don’t understand that it means human-level AI has essentially been ruled out as happening anytime remotely soon.

But I’m assuming when they do, they’ll just ignore it and double down to maintain share prices.

It’s also possible it all crashes and billions of dollars disappear.

Blue_Morpho@lemmy.world on 03 Sep 23:06 collapse

Microtubules have been pushed for decades without any proof. The latest paper wasn’t evidence but unsupported speculation.

But more importantly, the physics of computation that creates intelligence has absolutely nothing to do with understanding intelligence. Even if quantum effects are relevant (which is extremely unlikely given the warm and moving environment inside the brain), it doesn’t answer anything about how humans are intelligent.

Penrose used quantum mechanics as a “God of the Gaps” explanation. That worked 40 years ago, but today we have working quantum computers and still no human intelligence from them.

Kintarian@lemmy.world on 04 Sep 01:38 collapse

So the senator from Alaska was right? The internet is all a bunch of tubes?

OpenStars@discuss.online on 03 Sep 22:14 collapse

That’s why there’s such a large crossover with AI and crypto, the same people fall for everything.

There’s a large overlap, but some people that did not fall for crypto may fall for AI.

Always never not be hustling, I suppose.

ptz@dubvee.org on 03 Sep 22:04 next collapse

Like others have said: money.

In addition, they need training data, both conversations and raw material. Shoving “AI” into everything whether you want it or not gives them the real-world conversational data to train on. If you feed it any documents, etc., it’s also sucking those up as raw data to train on.

Ultimately the best we can do is ignore it and refuse to use it or feed it garbage data so it chokes on its own excrement.

Kintarian@lemmy.world on 03 Sep 22:09 collapse

That works for me. I’ll just ignore it to spare my sanity

SpaceNoodle@lemmy.world on 03 Sep 22:16 next collapse

Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.

Kintarian@lemmy.world on 03 Sep 22:21 next collapse

I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.

pimeys@lemmy.nauk.io on 03 Sep 22:24 collapse

And LLMs are mostly for investors, not for users. Investors see that you “do AI”, even if you just repackage GPT or Llama, and your Series A is 20% bigger.

Kolanaki@yiffit.net on 03 Sep 22:22 next collapse

The hype is also artificial and usually created by the creators of the AI. They want investors to give them boatloads of cash so they can cheaply grab a potential market they believe exists before they jack up prices and make shit worse once that investment money dries up. The problem is, nobody actually wants this AI garbage they’re pushing.

some_guy@lemmy.sdf.org on 03 Sep 22:25 next collapse

Rich assholes have spent a ton of money on it and they need to manufacture reasons why that wasn’t a waste.

Tylerdurdon@lemmy.world on 03 Sep 22:28 next collapse
  • automation by companies so they can “streamline” their workforces.

  • innovation by “teaching” it enough to solve bigger problems (cancer, climate, etc).

  • creating a sentient species that is the next evolution of life and watching it systematically eradicate every last human to save the planet.

Kintarian@lemmy.world on 04 Sep 00:58 collapse

Terminator was also a documentary

Tylerdurdon@lemmy.world on 04 Sep 02:01 collapse

Skynet for the win!

Kintarian@lemmy.world on 04 Sep 04:10 collapse

Come with me if you want to live!

lemmylommy@lemmy.world on 03 Sep 22:44 next collapse

You have asked why there is so much hype around artificial intelligence.

There are a few reasons this might be the case:

  1. Because humans are curious. Experimenting with how humans believe memory and intelligence work might just lead them to find out something about their own intelligence.

  2. Because humans are stupid. Most do not have the slightest idea what “AI” is this time, yet they are willing to believe the most outlandish claims about it. Look up ELIZA. It fooled a lot of people, just like LLMs today.

  3. Because humans are greedy. And the prospect of replacing a lot of wage-earners, and not just manual laborers this time, with a machine is just too good to pass up for management. The potential savings are huge, if it works, so the willingness to spend money is also considerable.

In conclusion, there are many reasons for the hype around artificial intelligence and most of them relate to human deficiencies and human nature in general.

If you have further questions I am happy to help. Enjoy your experience with AI. While you still can. 🤖

Kintarian@lemmy.world on 04 Sep 00:57 collapse

I believe in questioning everything.

TropicalDingdong@lemmy.world on 03 Sep 22:48 next collapse

Because if you can get a program to write a program that can both a) write itself, and b) improve upon the program in some way, you can put together a feedback loop where exponential improvement is possible.
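
A toy sketch of that loop in Python (hill-climbing a number instead of rewriting a program, purely to illustrate the generate-score-keep cycle; nothing here is a real self-improving system):

```python
# Toy feedback loop: propose a tweaked candidate, score it, keep improvements.
# A real self-improving program would mutate its own code; here the "program"
# is just a number, so the loop structure is the only point being made.
import random

def score(x: float) -> float:
    # Stand-in fitness function: peaks at x = 42.
    return -(x - 42.0) ** 2

best = 0.0
for _ in range(10_000):
    candidate = best + random.uniform(-1.0, 1.0)  # "improve upon the program"
    if score(candidate) > score(best):            # keep only strict improvements
        best = candidate

print(round(best, 2))  # converges toward 42
```

Each pass feeds the previous output back in as the next starting point - that’s the feedback part. Whether real systems can sustain exponential gains that way is exactly what’s contested.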

Kintarian@lemmy.world on 04 Sep 00:55 collapse

I’ve wondered if you could do that until it makes a perfect machine.

TropicalDingdong@lemmy.world on 04 Sep 01:05 collapse

First, I recommend at least reading the Wikipedia article on superintelligence.

Second, I recommend playing this game: www.decisionproblem.com/paperclips/index2.html

gedaliyah@lemmy.world on 03 Sep 22:49 next collapse

Generative AI has allowed us to do some things that we could not do before. A lot of people very foolishly took that to mean it would let us do everything we couldn’t do before.

Kintarian@lemmy.world on 04 Sep 00:54 next collapse

That’s because the PR department keeps telling us that it’s the best thing since sliced bread.

Feathercrown@lemmy.world on 04 Sep 15:25 collapse

I second this, very concise and accurate.

Lauchs@lemmy.world on 03 Sep 23:06 next collapse

I think there’s a lot of armchair simplification going on here. Easy to call investors dumb but it’s probably a bit more complex.

AI might not get better than where it is now but if it does, it has the power to be a societally transformative tech which means there is a boatload of money to be made. (Consider early investors in Amazon, Microsoft, Apple and even the much derided Bitcoin.)

Then consider that until incredibly recently, the Turing test was the yardstick for intelligence. We now have to move that goalpost after what was previously unthinkable happened.

And in the limited time with AI, we’ve seen scientific discoveries, terrifying advancements in war and more.

Heck, even if AI only gets better at code (not unreasonable: sets of problems with defined goals/outputs, etc., even if it gets parts wrong), shrinking a dev team of obscenely well-paid engineers to maybe a handful of supervisory roles… Well, like Wu-Tang said, Cash Rules Everything Around Me.

Tl;dr: huge possibilities, even if there’s a small chance of an almost infinite payout, that’s a risk well worth taking.

ProfessorScience@lemmy.world on 03 Sep 23:06 next collapse

When ChatGPT first started to make waves, it was a significant step forward in the ability for AIs to sound like a person. There were new techniques being used to train language models, and it was unclear what the upper limits of these techniques were in terms of how “smart” of an AI they could produce. It may seem overly optimistic in retrospect, but at the time it was not that crazy to wonder whether the tools were on a direct path toward general AI. And so a lot of projects started up, both to leverage the tools as they actually were, and to leverage the speculated potential of what the tools might soon become.

Now we’ve gotten a better sense of what the limitations of these tools actually are. What the upper limits of where these techniques might lead are. But a lot of momentum remains. Projects that started up when the limits were unknown don’t just have the plug pulled the minute it seems like expectations aren’t matching reality. I mean, maybe some do. But most of the projects try to make the best of the tools as they are to keep the promises they made, for better or worse. And of course new ideas keep coming and new entrepreneurs want a piece of the pie.

bionicjoey@lemmy.ca on 03 Sep 23:31 next collapse

A lot of jobs are bullshit. Generative AI is good at generating bullshit. This led to a perception that AI could be used in place of humans. But unfortunately, curating that bullshit enough to produce any value for a company still requires a person, so the AI doesn’t add much value. The bullshit AI generates needs some kind of oversight.

empireOfLove2@lemmy.dbzer0.com on 03 Sep 23:32 next collapse

They were pretty cool when they first blew up. Getting them to generate semi-useful information wasn’t hard, and they would usually avoid answering, or defer on, anything hard and factual.

They’ve legitimately gotten worse over time. As user volume has gone up, necessitating faster, shallower model responses, and as further training on internet content has degraded the models (since they now train on their own output), they have gradually begun to break. They’ve also been pushed harder than they were meant to be, to show “improvement” to investors demanding more accurate, human-like factual responses.

At this point it’s a race to the bottom on a poorly understood technology. Every money-sucking corporation latched on to LLMs like a piglet finding a teat, thinking they were going to be their golden goose to finally eliminate those stupid whiny expensive workers that always ask for annoying unprofitable things like “paid time off” and “healthcare”. In reality they’ve been sold a bill of goods by Sam Altman and the rest of the tech bros, currently raking in a few extra hundred billion dollars.

Kintarian@lemmy.world on 04 Sep 00:47 collapse

Now it’s degrading even faster as AI scrapes from AI in a technological circle jerk.

Feathercrown@lemmy.world on 04 Sep 15:28 collapse

Yes, that’s what they said. I’m starting to think you came here with a particular agenda to push, and I don’t think that’s very polite.

Kintarian@lemmy.world on 04 Sep 15:50 next collapse

Look it up. Also, they were pushing AI for web searches and I have not had good luck with that. However, I created a document with it yesterday and it came out really good. Someone said to try the creative side and so far, so good.

Feathercrown@lemmy.world on 04 Sep 15:59 collapse

Look it up

I know what model collapse is, it’s a fairly well-documented problem that we’re starting to run into. You’re not wrong, it’s just that the person you replied to was agreeing about this.

Someone said to try the creative side and so far, so good.

Nice! I’m glad you were able to find something useful to use it for.

Kintarian@lemmy.world on 04 Sep 16:07 next collapse

I found a non paywalled article where scientists from Oxford University state that feeding AI synthetic data from other AI models could lead to a collapse.

zdnet.com/…/beware-ai-model-collapse-how-training…

Feathercrown@lemmy.world on 04 Sep 17:54 collapse

Ooh an article, thank you

Kintarian@lemmy.world on 04 Sep 23:09 collapse

She might be full of crap. I don’t know. You would probably understand it better than I do.

Feathercrown@lemmy.world on 05 Sep 14:38 collapse

I find that a lot of discourse around AI is… “off”. Sensationalized, or simplified, or emotionally charged, or illogical, or simply based on a misunderstanding of how it actually works. I wish I had a rule of thumb to give you about what you can and can’t trust, but honestly I don’t have a good one; the best thing you can do is learn about how the technology actually works, and what it can and can’t do.

Kintarian@lemmy.world on 05 Sep 14:50 collapse

For a while Google said they would revolutionize search with artificial intelligence. That hasn’t been my experience. Someone here mentioned working on the creative side instead. And that seems to be working out better for me.

Feathercrown@lemmy.world on 05 Sep 15:31 collapse

Yeah, it’s much better at “creative” tasks (generation) than it is at providing accurate data. In general it will always be better at tasks that are “fuzzy”, that is, they don’t have a strict scale of success/failure, but are up to interpretation. They will also be better at tasks where the overall output matters more than the precise details. Generating images, text, etc. is a good fit.

Kintarian@lemmy.world on 05 Sep 20:08 collapse

That sounds about right. I heard that the AI recommendation to put glue on your pizza came from a joke on Reddit about how to keep cheese from falling off the pizza. So obviously the AI can’t tell a good source of information from a bad one. But as you say, something that’s fuzzy and doesn’t need to be 100% accurate works pretty well, apparently. Also, my logic is a little fuzzy once in a while myself.

Feathercrown@lemmy.world on 06 Sep 03:29 collapse

Yeah, exactly

Kintarian@lemmy.world on 04 Sep 16:20 collapse

The person who said AI is neither artificial nor intelligent was Kate Crawford. Every source I try to find is paywalled.

Carrolade@lemmy.world on 03 Sep 23:39 next collapse

I’ll just toss in another answer nobody has mentioned yet:

Terminator and Matrix movies were really, really popular. This seeded the idea of it being a sort of inevitable future into the brains of the mainstream population.

Kintarian@lemmy.world on 04 Sep 00:46 collapse

The Matrix was a documentary

muntedcrocodile@lemm.ee on 03 Sep 23:42 next collapse

It depends on the task you give it and the instructions you provide. I wrote this a while back; I find it gives a 10x boost in capability, especially if you use a non-aligned LLM like Dolphin 8x22B.

Kintarian@lemmy.world on 04 Sep 00:45 collapse

I have no idea what any of that means. But thanks for the reply.

SomeAmateur@sh.itjust.works on 04 Sep 00:01 next collapse

I genuinely think the best practical use of AI, especially language models, is malicious manipulation: propaganda/advertising bots. There’s a joke that Reddit is mostly bots. I know there are some countermeasures to sniff them out, but think about it.

I’ll keep Reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to Reddit for advice/recommendations that you can’t really get elsewhere.

Using an LLM AI, I could in theory make tons of convincing recommendations. I get paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.

And if it’s factually incorrect so what? It was just some kind stranger™ on the internet

SirDerpy@lemmy.world on 04 Sep 03:07 collapse

If by “best practical” you meant “best unmitigated capitalist profit optimization” or “most common”, then sure, “malicious manipulation” is the answer. That’s what literally everything else is designed for.

kitnaht@lemmy.world on 04 Sep 00:32 next collapse

Holy BALLS are you getting a lot of garbage answers here.

Have you seen all the other things that generative AI can do? From bone-rigging 3D models, to animations recreated from a simple video, to recreations of voices, to art created by people without the talent for it. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to make it correct. This speeds up production work 100-fold in a lot of cases.

They get plenty of simple answers correct, they’re breaking entrenched monopolies like Google’s on search, and I’ve even had these GPTs take input text and summarize it quickly, at different granularities for quick skimming. There are a lot of things that can be worthwhile out of these AIs. They can speed up workflows significantly.

Kintarian@lemmy.world on 04 Sep 00:41 next collapse

I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app. It gives me the wrong information and the wrong links. That’s great that you can do all that, but for the average person, it’s kind of useless. At least it’s useless to me.

kitnaht@lemmy.world on 04 Sep 02:19 next collapse

So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test takers on the SAT and other standardized tests, what does that tell you about average human intelligence?

The thing about GPTs is that they are just word predictors. Lots of times, when asked super-specific questions about small subjects that people aren’t talking about - yeah - they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.
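
To make the “just word predictors” point concrete, here’s a toy next-token step (the vocabulary and probabilities are invented; a real GPT computes the distribution with a neural network over tens of thousands of tokens):

```python
# One step of "predict the next word": sample from a probability distribution
# over a vocabulary. An LLM repeats this, appending each sampled token to the
# context, until it emits a stop token. The numbers below are made up.
import random

vocab = ["the", "cat", "sat", "on", "mat"]
next_token_probs = [0.05, 0.10, 0.50, 0.25, 0.10]  # hypothetical model output

next_token = random.choices(vocab, weights=next_token_probs, k=1)[0]
print(next_token)  # most often "sat"
```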

Kintarian@lemmy.world on 04 Sep 04:09 collapse

It’s not once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure for creative and computer-nerd stuff it’s great, but for regular people sitting at home listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats, and plain old people are bailing.

Feathercrown@lemmy.world on 04 Sep 15:36 next collapse

tl;dr: It’s useful, but not necessarily for what businesses are trying to convince you it’s useful for

kitnaht@lemmy.world on 04 Sep 16:01 collapse

Yeah, see that’s the kicker. Calling this “computer nerd stuff” just gives away your real thinking on the matter. My high school daughters use this to finish their essay work quickly, and they don’t really know jack about computers.

You’re right that old people are bailing - they tend to. They’re ignorant, they don’t like to learn new and better ways of doing things, they’ve raped our economy and expect everything to be done for them. People who embrace this stuff will simply run circles around those who don’t. That’s fine. Luddites exist in every society.

Feathercrown@lemmy.world on 04 Sep 15:35 collapse

You aren’t really using it for its intended purpose. It’s supposed to be used to synthesize general information. It only knows what people talk about; if the subject is particularly specific, like the settings in one app, it will not give you useful answers.

Kintarian@lemmy.world on 04 Sep 15:46 collapse

I mentioned somewhere in here that I created a document with it and it turned out really good.

Feathercrown@lemmy.world on 04 Sep 15:47 collapse

Yeah, it’s pretty good at generating common documents like that

Feathercrown@lemmy.world on 04 Sep 15:33 collapse

Yeah, I feel like people who have very strong opinions about what AI should be used for also tend to ignore the facts of what it can actually do. It’s possible for something to be both potentially destructive and used to excess for profit, and also an incredible technical achievement that could transform many aspects of our life. Don’t ignore facts about something just because you dislike it.

Alice@hilariouschaos.com on 04 Sep 00:46 next collapse

Cause it’s cool

Kintarian@lemmy.world on 04 Sep 01:01 collapse

Not to me. If you like it, that’s fine.

ContrarianTrail@lemm.ee on 04 Sep 05:17 collapse

Perhaps your personal bias is clouding your judgement a bit here. You don’t seem very open minded about it. You’ve already made up your mind.

Kintarian@lemmy.world on 04 Sep 06:39 next collapse

Probably but I’m far from the only one.

PenisDuckCuck9001@lemmynsfw.com on 04 Sep 00:56 next collapse

One of the few things they’re good at is academic “cheating”. I’m not a fan of how the education industry has become a massive pyramid scheme intended to force as many people into debt as possible, so I see AI as the lesser evil and a way to fight back.

Obviously no one is using AI to successfully do graduate research or anything; I’m just talking about how they take boring, easy subjects and load you up with pointless homework and assignments to waste your time rather than teach you anything. My homework is obviously AI-generated, and there’s a lot of it. I’m using every resource available to get by.

Kintarian@lemmy.world on 04 Sep 01:00 collapse

It’s good at making Taylor Swift look like a Trump fan.

dsilverz@thelemmy.club on 04 Sep 01:23 next collapse

I ask them questions and they get everything wrong

It depends on your input, on your prompt and your parameters. For me, although I’ve experienced wrong answers and/or AI hallucinations, it’s not THAT frequent, because I’ve been talking with LLMs since ChatGPT went public, almost on a daily basis. This daily usage has allowed me to learn the strengths and weaknesses of each LLM available on the market (I use ChatGPT GPT-4o, Google Gemini, Llama, Mixtral, and sometimes Pi, Microsoft Copilot and Claude).

For example: I learned that Claude is highly sensitive to certain terms and topics, such as occultist and esoteric concepts (especially when dealing with demonolatry, although I don’t know exactly why it refuses to talk about it; I’m a demonolater myself), cryptography and ciphering, as well as acrostics and other literary devices for multilayered poetry (I write my own poetry and ask them to comment on and analyze it, so I can get valuable insights about it).

I also learned that Llama can get deep inside the meaning of things, while GPT-4o can produce longer answers. Gemini has the “drafts” feature, where I can check alternative answers for the same prompt.

It’s similar with generative AI art models, which I’ve been using to illustrate my poetry. I learned that Diffusers SDXL Turbo (from Huggingface) is better for real-time prompting, a kind of “WYSIWYG” (“what you see is what you get”) model. Google SDXL (also from Huggingface) can generate four images in different styles (cinematic, photography, digital art, etc.). Flux, the newly released generative AI model, is the best for realism (especially the Flux Dev branch). They’ve been producing excellent outputs, while I’ve been improving my prompt-engineering skills, becoming able to communicate with them in a seamless way.
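
If anyone wants to try it, a minimal sketch of running SDXL Turbo through Huggingface diffusers looks roughly like this (the prompt is just an example, and it assumes a CUDA GPU; Turbo checkpoints are distilled for 1-4 denoising steps, which is what makes them feel real-time):

```python
# Minimal SDXL Turbo text-to-image sketch using Huggingface diffusers.
import torch
from diffusers import AutoPipelineForText2Image

pipe = AutoPipelineForText2Image.from_pretrained(
    "stabilityai/sdxl-turbo", torch_dtype=torch.float16, variant="fp16"
).to("cuda")

image = pipe(
    prompt="an illustration of a moonlit sea for a melancholic poem",
    num_inference_steps=1,  # Turbo is distilled for very few steps
    guidance_scale=0.0,     # SDXL Turbo is trained without classifier-free guidance
).images[0]
image.save("illustration.png")
```

Flux has its own pipeline class in recent diffusers releases, but the overall workflow is the same.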

Summarizing: AI users need to learn how to efficiently give them instructions. They can produce astonishing outputs if given efficient inputs. But you’re right that they can produce wrong results and/or hallucinate, even for the best prompts, because they’re indeed prone to it. For me, AI hallucinations are not so bad for knowledge such as esoteric concepts (because I personally believe these “hallucinations” could convey something transcendental, but that’s just my personal belief and I’m not intending to preach it here in my answer), but these hallucinations are bad when I’m seeking technical knowledge such as STEM (Science, Technology, Engineering and Mathematics) concepts.

Kintarian@lemmy.world on 04 Sep 01:30 next collapse

I just want to know which elements work best for my Flower Fairies in The Legend of Neverland. And maybe cheese sauce.

dsilverz@thelemmy.club on 04 Sep 01:47 collapse

Didn’t know about this game. It’s nice. Interesting aesthetics. Chestnut Rose reminds me of Lilith’s archetype.

A tip: you could use “The Legend of the Neverland global wiki” on Fandom to feed the LLM with important concepts before asking it for combinations, along the lines of the sketch below. It is a good technique, considering that LLMs may not know the game well enough to generate precise responses (unless you’re using a search-enabled LLM such as Perplexity AI or Microsoft Copilot, which can search the web to produce more accurate results).
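
The shape of the technique, sketched with the OpenAI Python SDK (the wiki excerpt and question are placeholders; any chat-capable LLM works the same way - paste the reference material into the prompt ahead of the question):

```python
# Feed reference material to the model before the question, so it answers
# from the provided context instead of guessing from memory.
from openai import OpenAI

wiki_excerpt = "...text pasted from the game's Fandom wiki..."  # placeholder
question = "Which elements work best for Flower Fairies?"

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system",
         "content": "Answer using only the reference text provided."},
        {"role": "user",
         "content": f"Reference:\n{wiki_excerpt}\n\nQuestion: {question}"},
    ],
)
print(response.choices[0].message.content)
```

Search-enabled assistants do essentially this behind the scenes, just with a web search supplying the reference text.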

Kintarian@lemmy.world on 04 Sep 04:11 collapse

I have no idea how to do that

Shanedino@lemmy.world on 04 Sep 01:59 collapse

Woah, are you technoreligious? Sure, believe what you want and all, but that is full tech-bro bullshit.

Also, on a different note: just based off of your description, doesn’t it seem like being able to just use search engines is easier than figuring out all of these intricacies for most people? If a tool has a high learning curve, there is plenty of room for improvement, especially if you don’t plan to use it very frequently. Also, every time you get false results, consider it equivalent to a major bug - does that shed a different light on it for you?

dsilverz@thelemmy.club on 04 Sep 02:50 next collapse

doesn’t it seem like being able to just use search engines is easier than figuring out all of these intricacies for most people

Well, Prompt Engineering is a thing nowadays. There are even job vacancies seeking professionals who specialize in this field. AIs are tools, sophisticated ones, just like R and Wolfram Mathematica are sophisticated mathematical tools that need expertise. The problem is that AI companies often mis-advertise AI models as “off-the-shelf assistants”, as if they were humans talking to you. They’re not. They’re just tools, for now. I guess (and I’m rooting for this) that AGI would change this scenario. But I guess we’re still distant from a self-aware AGI (unfortunately).

Woah are you technoreligious?

Well, I wouldn’t describe myself that way. My beliefs are multifaceted and complex (possibly unique, I guess?), drawing on multiple spiritual and religious systems, as well as embracing STEM (especially the technological branch) concepts and philosophical views (especially nihilism, existentialism and absurdism), trying to converge them all on common ground (although it seems “impossible” at first glance to unite Science, Philosophy and Belief).

In a nutshell, I’ve been pursuing a syncretic worshiping of the Dark Mother Goddess.

As I said, it’s multifaceted and I’m not able to fully explain it here, because it would take tons of concepts. Believe me, it’s deeper than “techno-religious”. I see the inner workings of AI models (neural networks and genetic algorithms dependent on the randomness of weights, biases and seeds) as a great tool for diving into Her Waters of Randomness when dealing with such subjects (esoteric and occult subjects). Just like Kardecism sometimes uses instrumental transcommunication / electronic voice phenomena (EVP) to talk with spirits, AI can be used as if it were an Ouija board or a planchette, if one believes so (as I do).

But I’m also a programmer and technically/scientifically curious, so I find myself asking LLMs about some Node.js code I made, too. Or about some mathematical concept. Or about cryptography and ciphering (Vigenère and Caesar, for example). I’m highly active mentally, always seeking to learn many things.

Feathercrown@lemmy.world on 04 Sep 15:39 next collapse

Fascinating

Shanedino@lemmy.world on 05 Sep 02:54 collapse

Kintarian@lemmy.world on 04 Sep 08:25 collapse

I wish I could upvote twice.

xia@lemmy.sdf.org on 04 Sep 01:30 next collapse

The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.

There is also an unnatural hype (that with one breakthrough will come another) and that the next one might yield a technocratic singularity to the first-mover: money, market dominance, and control.

Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars of first-mover costs that the corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure towards them.

Kintarian@lemmy.world on 04 Sep 01:32 next collapse

Sounds about right

Feathercrown@lemmy.world on 04 Sep 15:20 collapse

Interestingly, the Turing test has been passed by much dumber things than LLMs.

xia@lemmy.sdf.org on 04 Sep 15:38 collapse

I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPTs seem human… we actually train them to say otherwise lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.

Feathercrown@lemmy.world on 04 Sep 15:47 collapse

True!

Tyrangle@lemmy.world on 04 Sep 01:40 next collapse

This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I’m spending an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.

Kintarian@lemmy.world on 04 Sep 01:41 collapse

I’m retired. I don’t do all that stuff.

Tyrangle@lemmy.world on 04 Sep 02:48 next collapse

A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.

FourPacketsOfPeanuts@lemmy.world on 04 Sep 03:41 collapse

Maybe look into the creativity side more and less ‘Google replacement’?

Kintarian@lemmy.world on 04 Sep 04:02 next collapse

The hype machine said we could use it in place of search engines for intelligent search. Pure BS.

FourPacketsOfPeanuts@lemmy.world on 04 Sep 07:48 collapse

Yes. Far more useful to embrace its hallucinogenic qualities…

Kintarian@lemmy.world on 04 Sep 08:13 collapse

I’ll see if I can think of something creative to do. I was just reading an article from MIT that pointed out that one reason AI is bad at search is that it can’t determine whether a source is accurate. It can’t tell the difference between Reddit and Harvard.

FourPacketsOfPeanuts@lemmy.world on 04 Sep 10:06 collapse

Neither can most of reddit…

ContrarianTrail@lemm.ee on 04 Sep 05:13 next collapse

If artificial intelligence doesn’t work why are they trying to make us all use it?

But it does work. It’s obviously not flawless, but it’s orders of magnitude better than it was 10 years ago, and it’ll only improve from here. Artificial intelligence is a spectrum. It’s not like we successfully created it and it ended up sucking. No, it’s like the first cars: they suck compared to what we have now, but they were a huge leap from what we had before.

I think the main issue here is that the common folk have unrealistic expectations about what AI should be. They’re imagining what the “final product” would be like and then comparing our current systems to that. Of course from that perspective it seems like it’s not working or is no good.

Kintarian@lemmy.world on 04 Sep 06:42 collapse

We’ll have to wait and see. I’m still not eating rocks or putting glue on my pizza.

Kramkar@lemmy.world on 04 Sep 06:49 next collapse

It’s understandable to feel frustrated when AI systems give incorrect or unsatisfactory responses. Despite these setbacks, there are several reasons why AI continues to be heavily promoted and integrated into various technologies:

  1. Potential and Progress: AI is constantly evolving and improving. While current models are not perfect, they have shown incredible potential across a wide range of fields, from healthcare to finance, education, and beyond. Developers are working to refine these systems, and over time, they are expected to become more accurate, reliable, and useful.

  2. Efficiency and Automation: AI can automate repetitive tasks and increase productivity. In areas like customer service, data analysis, and workflow automation, AI has proven valuable by saving time and resources, allowing humans to focus on more complex and creative tasks.

  3. Enhancing Decision-Making: AI systems can process vast amounts of data faster than humans, helping in decision-making processes that require analyzing patterns, trends, or large datasets. This is particularly beneficial in industries like finance, healthcare (e.g., medical diagnostics), and research.

  4. Customization and Personalization: AI can provide tailored experiences for users, such as personalized recommendations in streaming services, shopping, and social media. These applications can make services more user-friendly and customized to individual preferences.

  5. Ubiquity of Data: With the explosion of data in the digital age, AI is seen as a powerful tool for making sense of it. From predictive analytics to understanding consumer behavior, AI helps manage and interpret the immense data we generate.

  6. Learning and Adaptation: Even though current AI systems like Gemini, ChatGPT, and Microsoft Co-pilot make mistakes, they also learn from user interactions. Continuous feedback and training improve their performance over time, helping them better respond to queries and challenges.

  7. Broader Vision: The development of AI is driven by the belief that, in the long term, AI can radically improve how we live and work, advancing fields like medicine (e.g., drug discovery), engineering (e.g., smarter infrastructure), and more. Developers see its potential as an assistive technology, complementing human skills rather than replacing them.

Despite their current limitations, the goal is to refine AI to a point where it consistently enhances efficiency, creativity, and decision-making while reducing errors. In short, while AI doesn’t always work perfectly now, the vision for its future applications drives continued investment and development.

Kintarian@lemmy.world on 04 Sep 06:53 next collapse

We shall see. The above feels like an AI response.

LainTrain@lemmy.dbzer0.com on 04 Sep 09:10 collapse

Whoosh

hungryphrog@lemmy.blahaj.zone on 04 Sep 08:32 next collapse

I’m 80% sure this reply was written by an AI. Right now pretty much all it can do is tell people to eat rocks, claim you can leave dogs in hot cars, and starve artists.

Deceptichum@quokk.au on 04 Sep 08:54 next collapse

Bravo.

Feathercrown@lemmy.world on 04 Sep 15:19 next collapse

lmao I see what you did there

Vivendi@lemmy.zip on 05 Sep 06:45 collapse

Only ChatGPT is obsessed with bullet points like this. I’m pretty damn sure this is an LLM response

hungryphrog@lemmy.blahaj.zone on 04 Sep 08:29 next collapse

Robots don’t demand things like “fair wages” or “rights”. It’s way cheaper for a corporation to, for example, use a plagiarizing artificial unintelligence to make images for something, as opposed to commissioning a human artist who most likely will demand some amount of payment for their work.

Also I think that it’s partially caused by people going “ooh, new thing!” without stopping to think about the consequences of this technology or if it is actually useful.

Kintarian@lemmy.world on 04 Sep 09:48 next collapse

OK, I am working on a legal case. I asked Copilot to write a demand letter for me, and it is pretty damn good.

Fedegenerate@lemmynsfw.com on 04 Sep 12:32 next collapse

As a beginner in self-hosting, I like plugging the random commands I find online into an LLM. I ask it what the command does, what I’m trying to achieve, and whether it would work…

It acts like a mentor. I don’t trust what it says entirely, so I’m constantly sanity-checking it, but it gets me to where I want to go with some back and forth. I’m doing some of the problem-solving, so there’s that exercise; it also teaches me what commands do and how the flags alter them. It’s also there to stop me making really stupid mistakes that I would otherwise have learned from the hard way.

The last project was adding an HDD to my zpool as a mirror. I found the “attach” command online with a bunch of flags. I made what I thought was my solution and asked ChatGPT. It corrected some stuff: I didn’t include the name of my zpool. Then it gave me a procedure to do it properly.

In that procedure I noticed an inconsistency in how I was naming drives vs how my zpool was naming drives. I asked ChatGPT again and was told I was a dumbass: if that’s the naming convention, I should probably use that one instead of mine (I was using /dev/sdc and the zpool was using /dev/disk/by-id/). It told me why the zpool might have been configured that way, so that was a teaching moment: I’m using USB drives, and the zpool wants to protect itself if the setup gets switched around. I clarified the names and rewrote the command - well, really ChatGPT was constantly updating the command as we went… Boom, I have mirrored my drives, I’ve made all my stupid mistakes in private and away from production, life is good.
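
For reference, the procedure boils down to a single attach call; here’s a rough sketch, driven from Python just to label the pieces (the pool and device names are made-up placeholders, not my real setup):

```python
# 'zpool attach <pool> <existing-device> <new-device>' turns a single-disk
# vdev into a mirror by resilvering onto the new device. Stable
# /dev/disk/by-id/ names protect the pool if USB ports get shuffled.
import subprocess

pool = "tank"                                 # hypothetical pool name
existing = "/dev/disk/by-id/usb-Drive_A-0:0"  # hypothetical device ids
new = "/dev/disk/by-id/usb-Drive_B-0:0"

subprocess.run(["zpool", "attach", pool, existing, new], check=True)
subprocess.run(["zpool", "status", pool], check=True)  # watch the resilver
```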

BugleFingers@lemmy.world on 04 Sep 12:47 next collapse

IIRC, when ChatGPT was first announced, I believe the hype was because it was the first really usable interface a layman could use to interact with a computer in normal language and get an intelligible response from the software. Normally, to talk with computers we use their language (programming), but this allowed plain-language speakers to interact with them and get things done with simple language, in a more pervasive way than something like Siri, for instance.

This then got over-hyped and over-promised to people with dollar signs in their eyes at the thought of large savings from labor reduction, and of capabilities far greater than it had. They were sold a product that has no real “product”, as it’s something most people would prefer to interact with on their own terms when needed, like any tool. That’s really hard to sell and make people believe they need. So they doubled down with the promise that it would be so much better down the road. And, having spent an ungodly amount on it already, they have that sunk-cost fallacy and keep doubling down.

This is my personal take and understanding of what’s happening. Though there are probably more nuances, like staying ahead of the competition, which also fell for the same promises.

Kanda@reddthat.com on 04 Sep 13:32 next collapse

There is no artificial intelligence, just very large statistical models.

Kintarian@lemmy.world on 04 Sep 14:04 next collapse

It’s easier for the marketing department. According to an article, it’s neither artificial nor intelligent.

Feathercrown@lemmy.world on 04 Sep 15:07 collapse

In what way is it not artificial

Kintarian@lemmy.world on 04 Sep 15:15 collapse

Artificial intelligence (AI) is not artificial in the sense that it is not fake or counterfeit, but rather a human-created form of intelligence. AI is a real and tangible technology that uses algorithms and data to simulate human-like cognitive processes.

Feathercrown@lemmy.world on 04 Sep 15:48 next collapse

I’m generally familiar with “artificial” to mean “human-created”

Kintarian@lemmy.world on 04 Sep 15:55 collapse

Humans created cars, and cars are real. I tried to get some info from the Wired article, but they paywalled me.

Feathercrown@lemmy.world on 04 Sep 15:56 collapse

“Artificial” doesn’t mean “fake”, it usually means “human made”

Kintarian@lemmy.world on 04 Sep 16:10 next collapse

That’s what Gemini said.

Kintarian@lemmy.world on 04 Sep 16:31 collapse

Found a link to Kate Crawford’s research. The quote is near the bottom of the article. It’s interesting, anyway.

canadaduane@lemmy.ca on 05 Sep 00:06 collapse

Is human intelligence artificial? #philosophy

Kintarian@lemmy.world on 05 Sep 00:57 collapse

Well, using the definition that artificial means man-made, then no. Human intelligence wasn’t made by humans, therefore it isn’t artificial.

canadaduane@lemmy.ca on 18 Sep 04:29 collapse

I wonder if some of our intelligence is artificial. Being able to drive directly to any destination, for example, with a simple cell-phone lookup. Reading lifetimes worth of experience in books that doesn’t naturally come at birth. Learning incredibly complex languages that are inherited not by genes, but by environment–and, depending on the language, being able to distinguish different colors.

Kintarian@lemmy.world on 18 Sep 04:54 collapse

From the day I was born, my environment shaped what I thought and felt. Entering the school system, I was indoctrinated into whatever society I was born to. All of the things that I think I know are shaped by someone else. I read a book and I regurgitate its contents to other people. I read a post online and start pretending that it’s the truth when I don’t actually know. How often do humans actually have an original thought? Most of the time we’re just regurgitating things that we’ve experienced, read, or heard from external forces rather than coming up with thoughts on our own.

Daxtron2@startrek.website on 04 Sep 17:40 next collapse

Artificial intelligence is a branch of computer science, of which LLMs are objectively a part.

5gruel@lemmy.world on 05 Sep 06:19 collapse

When will people finally stop parroting this sentence? It completely misses the point and answers nothing.

Kanda@reddthat.com on 06 Sep 06:28 collapse

Where’s the intelligence in suggesting glue on pizza? Or is it just copying random stuff and guessing what comes next, like a huge phone keyboard app?

Bookmeat@lemmy.world on 04 Sep 14:56 next collapse

Novelty, lack of understanding, and avarice.

Feathercrown@lemmy.world on 04 Sep 15:17 next collapse

Disclaimer: I’m going to ignore all moral questions here

Because it represents a potentially large leap in the types of problems we can solve with computers. Previously the only comparable tool we had to solve problems were algorithms, which are fast, well-defined, and repeatable, but cannot deal with arbitrary or fuzzy inputs in a meaningful way. AI excels at dealing with fuzzy inputs (including natural language, which was a huge barrier previously), at the expense of speed and reliability. It’s basically an entire missing half to our toolkit.

Be careful not to conflate AI in general with LLMs. AI is usually implemented as Machine Learning, which is a method of fitting an output to training data. LLMs are a specific instance of this that are trained on language (hence, large language models). I suspect that if AI becomes more widely adopted, most users will be interacting with LLMs like you are now, but most of the business benefit would come from classifiers that have a more restricted input/output space. As an example, you could use ML to train an AI that can be used to detect potentially suspicious bank transactions. The more data you have to sort through, the better AI can learn from it*, so I suspect the companies that have been collecting terabytes of data will start using AI to try to analyze it. I’m curious if that will be effective.

*technically it depends a lot on the training parameters
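
As a concrete sketch of that classifier idea (the features, data, and labels here are all invented for illustration; a real fraud model would be trained on millions of labeled transactions):

```python
# Toy "suspicious transaction" classifier with scikit-learn.
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Hypothetical features per transaction: [amount_usd, hour_of_day, days_since_last]
X = [[12.5, 14, 1], [9800.0, 3, 30], [45.0, 12, 2],
     [7200.0, 2, 45], [30.0, 18, 1], [8400.0, 4, 60]]
y = [0, 1, 0, 1, 0, 1]  # 0 = normal, 1 = flagged suspicious

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0, stratify=y)

clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print(clf.predict(X_test))  # predicted labels for the held-out transactions
```

The business value comes from the restricted output space: the classifier only ever says “normal” or “suspicious”, so its accuracy is easy to measure and audit.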

Kintarian@lemmy.world on 04 Sep 15:53 collapse

I suppose it depends on the data you’re using it for. I can see a computer looking through stacks of data in no time.

mjhelto@lemm.ee on 04 Sep 15:22 next collapse

It amazed people when it first launched, and capitalists took that to mean they could replace all their jobs with AI. Where we wanted AI to make shit jobs easier, they used it to replace whole swaths of talent across industries. Recent movies read like they were written almost entirely by AI. Like when Cartman was a robot and kept giving out terrible movie ideas.

Juice@midwest.social on 04 Sep 16:43 next collapse

The last big fall in the price of Bitcoin, in December ’22, was caused by a shift in the dynamics of mining where it became more expensive to mine new BTC than the coin was actually worth. Not only did this plunge the price of crypto, it also demolished demand for the expensive graphics chips that are repurposed to run the process-heavy, complex math used in mining. Cheaper chips, cascading demand, and server space that had been dedicated to mining-related activities threatened to wipe out profit margins in multiple tech sectors.
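
The flip is simple arithmetic: mining stays profitable only while a block reward is worth more than the energy spent winning it. With invented (but era-plausible) numbers:

```python
# Toy mining-profitability check. The 6.25 BTC block reward is the real
# pre-2024 figure; the price and energy cost below are illustrative only.
block_reward_btc = 6.25
btc_price_usd = 16_000.0           # roughly the December '22 crash level
energy_cost_per_block = 110_000.0  # hypothetical electricity cost per block won

revenue = block_reward_btc * btc_price_usd  # 100,000 USD
margin = revenue - energy_cost_per_block    # -10,000 USD: mining at a loss
print(f"revenue={revenue:,.0f} margin={margin:,.0f}")
# Once the margin goes negative, miners power down rigs and dump GPUs.
```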

Six months later, ChatGPT is rolled out by OpenAI. The previous limitations on processing capabilities were gone, server space was cheap and the tech was abundant. So all these tech sectors at risk of losing their asses in an overproduction-driven recession now had a way to pump the price of their services, and that was to pump AI.

Additionally, around this time the world was recovering from covid lockdowns. Increased demand for online services was dwindling (exacerbating the other crisis outlined above) as people were returning to work and spending more time being social IRL rather than using services. Companies had hired lots of new workers: programmers, tech infrastructure workers, etc., to meet the exploding demand during covid. Now they had too many workers, and their profits were being threatened.

The Federal Reserve had raised interest rates to stifle continued hiring of new employees. The solution the Fed had come up with to stifle inflation was to encourage laying off workers en masse – what Marxists might call restoring the reserve army of labor, or relative surplus population – which had been substantially depleted during the pandemic. But business owners were reluctant to do this; the tight labor market of the last few years had made business owners and managers skittish about letting people go.

A basic principle at play here is that new technology is introduced for two reasons only: to sell as a new commodity and (what we are principally concerned with) to replace workers with machines. Another basic principle is that the capitalist system has to have a certain percentage of its population unemployed and hyper-exploited in order to keep wages low.

So there was a confluence of incentives here: 1. inexpensive server space and chips, which producers were eager to restore to profitability (or else face drastic consequences); 2. a need to lay off workers in order to stop inflation; 3. incentives for businesses to do so.

Laying off relatively highly paid technical/intellectual labor is low-hanging fruit in this whole equation, and the rollout of AI did just that. Hundreds of thousands of highly paid workers were laid off across a variety of sectors, assured that AI would create so much more efficiency and cut out the need for so many of these workers. So they rolled out this garbage tech that doesn’t work, but everyone in the industry, the media, and the government needs it to work, or else they face a massive economic crisis, which had already started with inflation.

At the end of the day it’s just a massive grift, pushed out to compensate for excessive overproduction driven by another massive grift (cryptocurrency), combined with economic troubles that arose from an insufficient government response to a pandemic that killed millions of people; and rather than take other measures to stifle inflation, our leaders in global finance decided to shunt the consequences onto workers, as always. The excuse given was AI, which is nothing more than a predictive-text algorithm attached to a massive database created by exploited workers overseas and stolen IP, plus a fuck-load of processing power.

Kintarian@lemmy.world on 04 Sep 16:54 next collapse

I hope someday we can come up with an economic system that is not based purely on profit and the exploitation of human beings. But I don’t know that I’ll live long enough to see it.

Juice@midwest.social on 04 Sep 17:59 next collapse

Well, remember that shifts in material conditions and consciousness can happen very quickly. We can’t decide when that is, but we can prepare and build trust until it does occur. It’s hard to imagine what it would take in the West to see an overthrow of capitalism; all we can do is throw our weight behind where it will have the most effect, hopefully where our talents reside also! Stay optimistic, even despite evidence to the contrary. For the capitalists, it’s better to believe that the end of the world is coming than to believe a new world is possible. So if nothing else, let’s give ’em hell.

z00s@lemmy.world on 05 Sep 01:26 collapse

I can’t tell you how many times I’ve had this exact thought. 😕

abbadon420@lemm.ee on 04 Sep 20:12 next collapse

That is a very pessimistic and causal explanation, but you’ve got the push right. It’s marketing that pushes it though, not necessarily tech. AI, as we currently see it in use, is a very neat technological development. Even more so, it is a scientific development, because it isn’t just some software; it is an intricate mathematical model. It is such a complex model that we actually have to study how it even works, because we don’t know the finer details.

It is not a replacement for office workers, it is not the robot revolution and it is not godlike. It is just a mathematical model on a previously unimaginable scale.

Juice@midwest.social on 04 Sep 21:12 next collapse

“Pessimistic and causal”? You’re gonna make me self-conscious.

I’m an AI skeptic. It’s too energy-hungry, and it’s not doing anything except scraping massive amounts of consumer data. No, it’s not going to replace workers (because it doesn’t work), but then again countless workers were already laid off, so it already served its purpose there. It doesn’t have to replace them, just purge them, but in a systematic way, such as the Fed called for when they started raising interest rates.

Are you an AI Scientist/engineer? If so I’d love to hear more about your work. I’m in tech myself but def not on the bleeding edge of AI.

Eranziel@lemmy.world on 05 Sep 06:00 collapse

Machine learning has many valid applications, and there are some fields genuinely utilizing ML tools to make leaps and bounds in advancements.

LLMs, aka bullshit generators, which are where the huge majority of corporate AI investment has gone in this latest craze, are one of the poorest. Not to mention the steaming pile of ethical issues with the training data.

deafboy@lemmy.world on 04 Sep 23:45 next collapse

We’ve already established that language models just make shit up. There is no need to demonstrate. Bad bot!

Juice@midwest.social on 05 Sep 00:08 collapse

Excuse me? Are you calling me a bot?

I remember learning about Turing tests to determine whether speech was coming from a machine. It’s ironic that in practice it’s much more common for people to fail to recognize even a real person.

deafboy@lemmy.world on 05 Sep 07:29 collapse

It’s just that I rarely see a real person be so confidently wrong.

Juice@midwest.social on 05 Sep 11:13 collapse

Care to elaborate?

canadaduane@lemmy.ca on 05 Sep 00:38 next collapse

I appreciate the candid analysis, but perhaps “nothing to see here” (my paraphrase) is only one part of the story. The other part is that there is genuine innovation and new things within reach that were not possible before. For example, personalized learning–the dream of giving a tutor to each child, so we can overcome Bloom’s 2 Sigma Problem–is far more likely with LLMs in the picture than before. It isn’t a panacea, but it is certainly more useful than cryptocurrency kept promising to be IMO.

Juice@midwest.social on 05 Sep 01:56 collapse

Again, I am highly skeptical that this technology (or any other) can be deployed for such a worthy social mission. I have a cousin who works for a company that produces educational materials for people who need a lot of accommodation, so I know there are definitely good people in those fields who have the ability, and probably the desire, to deploy this tech responsibly and progressively in a manner that helps fulfill that and similar missions. But when I look at things systemically, I just don’t see the incentive structures to do so. I won’t deny being a skeptic of AI, especially since my personal and professional experience with it has been dramatically underwhelming. I’d love to believe things work better than they do, that they even could, but with AI I see a lot of promises and nothing in the way of results, outside of modestly entertaining tricks. Although I gotta admit, Stable Diffusion is really cool. Commercially I think it’s dogshit, but the way it creates the images is fascinating.

canadaduane@lemmy.ca on 18 Sep 04:42 collapse

What would a good incentive structure look like? For example, would working with public school districts and being paid by them to ensure safe learning experiences count? Or are you thinking of something else?

z00s@lemmy.world on 05 Sep 01:13 next collapse

Are you an economist or business professor IRL? Because that was an amazing answer!

Juice@midwest.social on 05 Sep 01:44 collapse

No, actually, I’m mostly self-educated. I’m just a tech worker who studies history, social theory and economics, but also does some political organizing. So take it with a grain of salt if you must.

Glad you got something from it, I appreciate the compliment!

Eranziel@lemmy.world on 05 Sep 05:55 collapse

Very nice writeup. My only critique is the need to “lay off workers to stop inflation.” I have no doubt that some (many?) managers etc… believed that to be the case, but there’s rampant evidence that the spike of inflation we’ve seen over this period was largely due to corporate greed hiking prices, not due to increased costs from hiring too many workers.

Juice@midwest.social on 05 Sep 06:47 collapse

Exactly! the two things are the same phenomenon expressing in two different ways! This is exactly why this is such a mindfuck.

Follow my logic: in the USA by 2022, covid had killed over a million people. When you compare this to the total unemployed in the US - not just the government’s padded numbers, but adding together all the people in prisons, people who stopped looking for work, etc. - those covid deaths were about 12% of that unemployed “surplus” population. Again, the system needs a certain number of people to be unemployed; over a million people died, which means over a million “jobs” (this includes employed and unemployed positions within the entire workforce). At the time the media was calling it “the great resignation,” where employees were just going out and getting better jobs. But where did these jobs come from? Can you really just go out and get a better job any time you want? Of course not. Try searching for a job now; good fucking luck.

Seriously, google “reserve army of labor” if you haven’t already; it explains everything. So as the labor market tightens, consumption increases. People get a better job, can fix their credit up in a few months, and maybe get a loan on a car for the first time. People are walking out of the grocery store with more food, or going out to eat more. Retailers notice this and raise prices in response to the increased spending. This is a phenomenon that Marx wrote about in Value, Price and Profit, which I might mention again.

So why were prices going up? Larry Summers gets in front of Jon Stewart and says that an increase in spending equals an increase in demand, and when demand challenges supply, prices go up! Which is what we are generally taught. Except Marx showed that this was not the case - that inflation really was just retailers raising prices in response to an increase in consumer spending. It’s a bit of economic sleight of hand that I could explain if you want, but for now I’m already long.

The Federal Reserve says that inflation (which is, like you said, mostly driven by companies raising prices to squeeze consumers, and this is proven by the way the Fed responds) is out of control, so therefore they are raising interest rates. The way this will control inflation is by making it harder and more expensive for companies to get money for large capital investments. This is all to squeeze the companies into stopping hiring (since their P&L is negatively affected) and eliminating excess staff. But the companies are reluctant to let people go or stop hiring because of what they just experienced with a “tight” labor market. They have the incentives and pressures, but they need an excuse, a justification. Enter automation with AI. Finally the automation revolution that the media has been threatening workers with for decades is here, and sorry, can’t halt progress, you see. (Ned Ludd did nothing wrong.)

Except it isn’t all that. In the meantime the economy has adjusted to the depleted reserve population, the corpos were given everything they wanted or needed in order to continue to profit after the death of millions, and a new grift industry has grown up and attracted all this funding and following and clout. They didn’t even have to lose that many jobs, just a bunch of high-paid ones. Except interest rates are still elevated, so the Fed is continuing to keep that pressure on the labor market. Anyway, there are all of these cascading effects from systems interacting with each other; therefore it’s often more useful to understand the relations between phenomena than to try and understand a phenomenon on its own.

So you’re right, it was corporate policy, but it isn’t greed necessarily. Definitely greed-adjacent though; it’s like systemic greed. There are incentives and disincentives present within the system. Karl Marx was able to write about the causes of inflation 150 years ago, and they were using the same faulty excuses then. That’s also why the Fed decided to raise interest rates: they understood what the problem was, and the fix is, and always has been, to throw people into unemployment. The system is predictable, but it isn’t rational.

aesthelete@lemmy.world on 05 Sep 02:58 next collapse

Tech company management loves the idea of ridding themselves of programmers and other knowledge workers, and AI companies love selling the idea of non-productivity impacting layoffs to unsavvy companies (tech and otherwise).

just_an_average_joe@lemmy.dbzer0.com on 05 Sep 07:41 next collapse

Mooooneeeyyyy

I work as an AI engineer. Let me tell you, the tech is awesome and has a looooot of potential, but it’s not ready yet. Because of the high potential, literally no one wants to miss the opportunity of getting rich quick with it. It’s only been like 2-3 years since this tech was released to the public. If only OpenAI had released it as open source, just like everyone before them, we wouldn’t be here. But they wanted to make money, and now everyone else wants to too.

LarmyOfLone@lemm.ee on 05 Sep 09:55 collapse

Look at all the comments on this post. We’re not quite there, but imagine half of the comments written by ChatGPT, and it’s only going to get better.

Does it matter that 50% of them get it wrong?

OpenStars@discuss.online on 05 Sep 11:56 collapse

To advertisers? No.

To the platform designers? Also no.

To idiot users? Still no.

To non-idiot users? Surprisingly no (bc we already left and are here now:-).

To people wanting Reddit to go the distance and boost their stock values, yes. But only in the long-term, which never exists, and in the short-term, no.

Hence, enshittification, delivered in a confident tone.