Testing suggests Google's AI Overviews tell millions of lies per hour
(arstechnica.com)
from madeindex@lemmy.world to world@lemmy.world on 07 Apr 21:15
https://lemmy.world/post/45310001
cross-posted from: lemmy.world/post/45309948
**90% of the time right means 10% of the time wrong, which is a huge deal when you deal with billions of queries!**
#world
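The headline claim is simple arithmetic. As a back-of-the-envelope check, here is a minimal sketch; both input figures are assumptions for illustration (a commonly cited rough global search volume and the "90% right" accuracy from the post), not numbers taken from the article:

```python
# Back-of-the-envelope: how a 10% error rate scales at search-engine volume.
# Both inputs are assumed illustrative figures, not data from the article.
queries_per_second = 100_000   # assumed rough global query volume
error_rate = 0.10              # "90% right" implies 10% wrong

errors_per_hour = queries_per_second * 3600 * error_rate
print(f"{errors_per_hour:,.0f} wrong answers per hour")
```

Under these assumptions the result lands in the tens of millions per hour, which is the order of magnitude the headline is gesturing at.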
Why is Google so insistent on forcing people to read their LLM output? Even their HTML seems intentionally designed to stop people from filtering it from search results. I don’t understand what they gain from doing this.
Investment.
All of this AI hype is focused around convincing investors that it is of immense value and that [insert company here] is going to be well-positioned when AI takes over. Us poors are not the target audience, we’re just pawns that are pushed into using AI to “prove” to investors that it is useful.
If the product looks “free”, then YOU are the real product.
I'm a proud product of Linux
Hehe, aren’t we all :-P
Because it drives the number of users up, and more users = more money
Although by stopping users right after the search with a scraped LLM answer, they won’t click through to other sites like they used to, sites that could have served Google ads for them, resulting in less money. Not to mention the long-term issue: with no more traffic or revenue, the websites the AI draws its information from will die, making the AI useless.
True. I was oversimplifying: they need to drive the number of AI users up by inflating it through forced use.
Much like how companies demand that employees integrate AI into their workflow to show that they have AI users in the workplace.
This then inflates the value of AI products, which makes more money for the sellers in that regard
Just a cycle of bullshit to drive AI investment and sales
Interesting! I feel like they want people to stay longer on their platform instead of going to the websites?