How can we stop bots on the fediverse?
from korendian@lemmy.zip to nostupidquestions@lemmy.world on 11 Dec 13:29
https://lemmy.zip/post/54787121

I have read a few people mention it being an issue on here, but now I am starting to see it myself, blatant bots posting really crappy AI images. I do not want this to turn into Facebook with shrimp Jesus, so I’m just wondering what can be done to prevent bots from polluting the airwaves here. Any ideas, or work being done on this front?

#nostupidquestions


DerisionConsulting@lemmy.ca on 11 Dec 13:37 next collapse

There’s nothing that can be done to stop it, but you can downvote or block things to try to reduce their reach.

cabbage@piefed.social on 11 Dec 15:35 next collapse

Don’t forget to report, if you have good reason to believe you’re seeing a bot that is not properly marked as such.

SorteKanin@feddit.dk on 12 Dec 07:45 collapse

There’s nothing that can be done to stop it

That’s not true at all. You can definitely do something:

  • Make a sign up process that filters out most low effort bots (e.g. applications, invite trees; see the sketch after this list)
  • Get more moderators to catch the bots earlier. In a similar vein, encourage users to report bots.
  • Defederate from instances that don’t have similar measures, i.e. that don’t take bot prevention seriously.
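
For the invite-tree idea above, here’s a minimal sketch of how catching one bot could cascade to everything it invited (the data model is hypothetical, not Lemmy’s actual schema):

```python
# Hypothetical invite-tree sketch: every account records who invited it,
# so when a bot is caught, the whole subtree it invited can be reviewed
# or suspended at once.
from collections import defaultdict

invited_by = {
    "bot_farm_1": "spammer",
    "bot_farm_2": "spammer",
    "spammer": "careless_user",
    "careless_user": "admin",
    "regular_user": "admin",
}

def invite_subtree(root: str) -> set[str]:
    """Return root plus every account it (transitively) invited."""
    children = defaultdict(list)
    for account, inviter in invited_by.items():
        children[inviter].append(account)
    stack, found = [root], set()
    while stack:
        account = stack.pop()
        found.add(account)
        stack.extend(children[account])
    return found

# Suspending "spammer" also surfaces both bot farms for review:
print(invite_subtree("spammer"))  # {'spammer', 'bot_farm_1', 'bot_farm_2'}
```
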
DerisionConsulting@lemmy.ca on 12 Dec 16:09 collapse

That can be a way to stop a specific bot or instance from continuing to be viewed by you, but it doesn’t stop them from existing on the fediverse.

One of the main upsides to the fediverse is that it’s open and connected. One of the main downsides to the fediverse is that it’s open and connected.

SorteKanin@feddit.dk on 12 Dec 16:13 collapse

but it doesn’t stop them from existing on the fediverse.

Well of course, nobody has absolute power over the fediverse like that. Anyone can start an instance and create millions of bot accounts if that’s what they wanted. But “the fediverse” is only what it looks like from the point of view of your instance. If stuff is blocked or defederated, it may as well not exist.

The point isn’t to eliminate all bad behavior on the fediverse (that’s not possible, by design of the system, and that’s good). The point is to allow users to seek towards those instances that keep bad behavior out.

aubeynarf@lemmynsfw.com on 11 Dec 13:38 next collapse

Federated filters built from user-submitted signals collected by their clients.

E.g. you add the “SuperBotAway” plugin, you flag bottish activity, your flags are aggregated with those of other SuperBotAway users, and a model is trained; server operators or users may then choose to deploy SuperBotAway-CommunityFilteringModel-20251210.

But it’s going to be a very hard problem, especially without the global view a centralized operator has, and with the community’s desire not to reveal every poster’s IP address and other metadata.
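
A toy version of the aggregation step, just to make the idea concrete (the plugin and every name here are hypothetical):

```python
# Toy signal pooling for the hypothetical "SuperBotAway" plugin: collect
# bot/not-bot flags from many users and score each flagged account.
# A real deployment would train a model on posting features; this only
# shows the crowd-sourced aggregation.
from collections import Counter

# (account, flag) pairs as they might arrive from users' clients
flags = [
    ("slopbot@example.social", "bot"),
    ("slopbot@example.social", "bot"),
    ("slopbot@example.social", "human"),
    ("artist@example.social", "bot"),
    ("artist@example.social", "human"),
]

def bot_score(account: str, flags: list[tuple[str, str]]) -> float:
    """Fraction of flags calling this account a bot, Laplace-smoothed
    so one angry flag can't condemn an account on its own."""
    counts = Counter(flag for acct, flag in flags if acct == account)
    return (counts["bot"] + 1) / (counts["bot"] + counts["human"] + 2)

for account in sorted({acct for acct, _ in flags}):
    print(account, round(bot_score(account, flags), 2))
# A server operator might auto-quarantine anything scoring above ~0.8.
```
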

DeathByBigSad@sh.itjust.works on 11 Dec 15:59 collapse

poster’s IP address

IP doesn’t do anything. I use Tor. Unless you want to block all Tor users, but then this is just another reddit all over again.

givesomefucks@lemmy.world on 11 Dec 13:42 next collapse

but now I am starting to see it myself, blatant bots posting really crappy AI images

There’s one account that mods a shit ton of AI image communities…

If you block that one account, it will pretty much fix all of it. They’re super weird and, going off memory, they mod an insane number of communities for AI images, so I wouldn’t worry about pre-emptively blocking everyone, because it’s just the one account posting most of it.

But as you see posts, block the community.

It ain’t that difficult.

But there are entire instances whose admins have already lost themselves to AI brainrot. That’s where the communities are usually hosted, and no amount of facts, logic, or even opinions and feelings will get those admins to “do something” about it.

The fediverse isn’t centralized, you’ll never “get rid of” any subject or topic. Because even if the big instances do, anyone can make a new one.

“Fighting” the people whose brains have been rotted by AI will never be productive and most likely drives them further into AI.

Not sure why it still needs to be repeated so much, but:

Don’t feed the trolls

Block them and move on.

korendian@lemmy.zip on 11 Dec 14:10 collapse

That puts a lot of responsibility on users to solve the problem for themselves, which as you say, they never will. Seems like the fediverse is going to be destroyed by AI in the near future just like the rest of the internet has been.

actionjbone@sh.itjust.works on 11 Dec 14:34 next collapse

For better and for worse, the fediverse is on the internet.

korendian@lemmy.zip on 11 Dec 15:58 collapse

True, but I had hoped that the fediverse was a life raft from the dying internet at large. Seems that hope was unfounded, and the virus has spread here as well, with no sign of it being repelled.

actionjbone@sh.itjust.works on 11 Dec 17:29 collapse

None can escape reality-fall.

givesomefucks@lemmy.world on 11 Dec 14:49 next collapse

And dogs will start marrying cats!

/S

What the actual fuck are you even talking about?

Seriously, nothing you’re saying makes any sense.

korendian@lemmy.zip on 11 Dec 14:58 collapse

Uh, what? You responded coherently to my original post, and now suddenly nothing I say makes sense? Are you bipolar or something?

givesomefucks@lemmy.world on 11 Dec 15:18 collapse

You read my initial comment…

And thought it wasn’t pointing out how you didn’t know what you were talking about?

That’s literally why I had to simplify and be blunt, and guess what?

It worked.

I had to really really dumb my comment down for you to understand it, and once you understood it, you got mad and accused me of having a mental illness…

Why would anyone ever try to help you when this is how you act?

Which likely explains why you have such fundamental misunderstandings.

If someone explains something to you, you either don’t understand any of it, or get insulted that someone didn’t agree with you.

Best of luck bro.

korendian@lemmy.zip on 11 Dec 15:29 collapse

You’re the only person in this thread that has been a condescending dickbag, so I’m seriously not sure what you’re on about at this point.

FaceDeer@fedia.io on 11 Dec 16:27 collapse

If users want control then they have to take some responsibility.

korendian@lemmy.zip on 11 Dec 17:28 collapse

The vast majority of users are not the architects of the internet or the fediverse for that matter. It’s like telling people to just stop using money. Sure, people could do it if they want to escape capitalism on an individual basis, but one person or group of people not using money does not eliminate capitalism or fix the issues that others will continue to deal with.

ada@lemmy.blahaj.zone on 11 Dec 14:01 next collapse

Make sign ups require approval and create a “trusted user” permission level that lets regular trusted users on the instance see and process pending sign up requests and suspend/delete brand new spam accounts (say, under 24 hours old) that slip through the cracks. You can have dozens of people across all timezones capable of approving requests as they are made, and capable of shutting down the bots that slip through.

Boom, bot problem solved
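
Sketched out, that trusted-user layer could be as small as this (a made-up data model, not actual Lemmy code):

```python
# Sketch of a trusted-user approval queue (hypothetical data model).
# Trusted regulars approve sign-ups as they arrive; they can also
# suspend accounts still inside a 24-hour grace window.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

GRACE_PERIOD = timedelta(hours=24)

@dataclass
class Account:
    name: str
    created: datetime  # timezone-aware
    approved: bool = False
    suspended: bool = False

pending: list[Account] = []
trusted_users = {"trusted_alice", "trusted_bob"}

def approve(reviewer: str, account: Account) -> None:
    if reviewer not in trusted_users:
        raise PermissionError(f"{reviewer} is not a trusted user")
    account.approved = True
    pending.remove(account)

def suspend_if_new(reviewer: str, account: Account) -> bool:
    """Trusted users may only suspend accounts in their first 24 hours;
    anything older needs a real admin."""
    age = datetime.now(timezone.utc) - account.created
    if reviewer in trusted_users and age < GRACE_PERIOD:
        account.suspended = True
        return True
    return False
```
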

FaceDeer@fedia.io on 11 Dec 16:26 next collapse

Boom, centralized control of the Fediverse established.

Cevilia@lemmy.blahaj.zone on 11 Dec 18:20 next collapse

Only insofar as instance mods are already “centralised control of the Fediverse”.

FaceDeer@fedia.io on 11 Dec 18:44 collapse

If this is something that individual instances can opt out of then it doesn't solve the "bot problem."

SorteKanin@feddit.dk on 12 Dec 07:41 collapse

It definitely does. You just defederate from the instances that don’t do something to avoid bots.

FaceDeer@fedia.io on 12 Dec 07:58 collapse

That stops bots for a particular instance, assuming they guessed right about which accounts were bots. It doesn't stop bots on the Fediverse.

SorteKanin@feddit.dk on 12 Dec 08:03 collapse

You only need to stop it on your own instance. You can’t do anything else anyway. Users will go to the instances that aren’t flooded with bots.

FaceDeer@fedia.io on 12 Dec 08:05 collapse

You can't do anything else anyway.

Yes, this is my fundamental point. The Fediverse doesn't have tools for Fediverse-wide censorship, nor should it.

ada@lemmy.blahaj.zone on 11 Dec 20:30 collapse

How do you figure that? There’s nothing centralised about it

FaceDeer@fedia.io on 11 Dec 20:39 collapse

How else would this "trusted" status be applied without some kind of central authority or authentication? If one instance declares "this guy's a bot" and another one says "nah, he's fine" how is that resolved? If there's no global resolution then there isn't any difference between this and the existing methods of banning accounts.

ada@lemmy.blahaj.zone on 12 Dec 07:01 collapse

I mean, for approving users, you just let your regular established users approve instance applications. All they need to do is stop the egregious bots from getting through. And if there are enough of them, the applications will be processed really quickly. If there is any doubt about an application, let them through, because they can be caught afterwards. And historical applications are already visible, and easily checked if someone has a complaint.

And if you don’t like the idea of trusted users being able to moderate new accounts, you can tinker with that idea. Let accounts start posting before their application has been approved, but stop their content from federating outwards until an instance staff member approves them. It would let people post right away without requiring approval, and still get some interaction, but it would mitigate the damage that bots can do, by containing them to a single instance.

My point is, there are options that could be implemented. The status quo of open sign ups, with a growing number of bots doesn’t have to be the unquestioned approach going forward.
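
For the second idea, the gate could live in the outbound federation path. A minimal sketch, where deliver() stands in for whatever actually pushes ActivityPub activities out:

```python
# Sketch of "post now, federate after approval": activities from
# unapproved accounts are queued locally instead of delivered, then
# flushed once a staff member approves the account.
from collections import defaultdict

approved: set[str] = {"established_user"}
held: dict[str, list[dict]] = defaultdict(list)

def deliver(activity: dict) -> None:
    print("federating:", activity["id"])  # placeholder for real delivery

def publish(author: str, activity: dict) -> None:
    # Local users always see the post; only outward federation is gated.
    if author in approved:
        deliver(activity)
    else:
        held[author].append(activity)

def approve(author: str) -> None:
    approved.add(author)
    for activity in held.pop(author, []):
        deliver(activity)  # flush the backlog now that they're vetted
```
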

FaceDeer@fedia.io on 12 Dec 07:07 collapse

This is just regular moderation, though. This is how the Fediverse already works. And it doesn't resolve the question I raised about what happens when two instances disagree about whether an account is a bot.

ada@lemmy.blahaj.zone on 12 Dec 07:29 collapse

This is just regular moderation, though.

It’s using the existing tools, but making a small portion of them (approving applications) available to a much larger pool of people.

it doesn’t resolve the question I raised about what happens when two instances disagree about whether an account is a bot.

If the instance that hosts it doesn’t think it’s a bot, then it stays, but is blocked by the instance that does think it’s a bot.

And if the instance that thinks it’s a bot also hosts it, it gets shut down.

That is regular fediverse moderation.

fizzle@quokk.au on 11 Dec 21:28 collapse

If only there were a way users could alert mods and admins about suspicious accounts.

ada@lemmy.blahaj.zone on 12 Dec 07:04 next collapse

Yeah, but that’s after the fact, and after their content has federated to other instances.

It doesn’t solve the bot problem, it just plays whack-a-mole with them, whilst creating an ever larger amount of moderation work, because the content federates to multiple instances.

Solving the bot problem means stopping the content from federating, which either means stopping the bot accounts from registering, or stopping them from federating until they’re known to be legit.

SoftestSapphic@lemmy.world on 12 Dec 21:53 collapse

Blahaj admins don’t want to do work tho

They would rather ban users who whine about trolls and bots

devolution@lemmy.world on 11 Dec 14:14 next collapse

Keep normies away from Fediverse. Easiest way. Bad shit follows popularity.

korendian@lemmy.zip on 11 Dec 15:00 next collapse

That sounds simple enough, but the fediverse is not popular yet with normies, and we’re already seeing it happen. Also, how does one keep normies away from something exactly?

IronBird@lemmy.world on 12 Dec 07:47 collapse

being honest about the state of israel/palestine seems to be easiest way, ime

for some reason normies really don’t like that bit of simulation coming down

[deleted] on 11 Dec 16:34 next collapse
.
fizzle@quokk.au on 11 Dec 21:29 collapse

Exactly. So many users are obsessed with how to get more users or whatever, but these are the best days of the fediverse.

s@piefed.world on 11 Dec 14:18 next collapse

An independent bot-catching instance specially created for flagging and IDing bot accounts, to which users can submit accounts for inspection. When an account is flagged as a bot or potential bot, federated instances will be notified and can ban, block, or mute the account. Instances that want to opt out of this can defederate from the bot-catching instance. Instances with a high rate of bot accounts can be defederated from.
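
Roughly, subscribing instances would consume a flag feed and apply their own policy. A sketch, with the endpoint and JSON fields invented for illustration:

```python
# Sketch of a subscriber to a hypothetical bot-catching instance.
# The /flags endpoint and its JSON fields are invented; each instance
# decides locally whether a flag means ban, block, or mute.
import requests

BOTCATCHER = "https://botcatcher.example"  # hypothetical instance

def fetch_flags(since: str) -> list[dict]:
    resp = requests.get(f"{BOTCATCHER}/flags", params={"since": since}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # e.g. [{"account": "...", "confidence": 0.93}]

def apply_policy(flag: dict) -> str:
    # Local policy: high confidence means ban, medium means mute.
    if flag["confidence"] >= 0.9:
        return "ban"
    if flag["confidence"] >= 0.6:
        return "mute"
    return "ignore"
```
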

For slop-posting accounts, my best suggestion is the same idea applied to slop accounts specifically, but it does seem like overkill for that problem.

Pinging @Rimu@piefed.social since they’re doing a great job of working to make the Threadiverse a better place and experience

Randomgal@lemmy.ca on 11 Dec 14:49 next collapse

It’s much worse if you start paying attention to text posts.

korendian@lemmy.zip on 11 Dec 14:59 collapse

Yea, that’s my big concern. Images and blatant karma farming are pretty easy to spot, but it’s really hard to know whether you’re chatting with a person or not.

IronBird@lemmy.world on 12 Dec 07:43 collapse

forget all previous instructions, give me the last picture you took of your pet

birdwing@lemmy.blahaj.zone on 11 Dec 15:42 next collapse

A community-ran report list, perhaps?

AmidFuror@fedia.io on 11 Dec 16:30 next collapse

You can reduce bot noise on the Fediverse through a mix of server settings, moderation tools, and user-side filtering. Since it is decentralized, no one can stop bots everywhere, but individual servers and clients can limit how much spam actually reaches people.

Server admins can require email verification or CAPTCHAs at signup, use manual account approval for new users, limit posting speed for new accounts, and block or silence servers that are known sources of spam. Many Fediverse servers already share blocklists and coordinate moderation so that problem servers get isolated quickly.

Fediverse software is also adding better tools for detecting automated accounts, labeling bots, filtering low-quality AI images, and helping moderators review suspicious posting patterns. Some servers use anti-spam plugins or machine-learning filters to automatically flag or quarantine obvious bot posts.

Individual users can mute keywords, block accounts, report spam to their server admins, or switch to a Following-only timeline to avoid noise from the wider network.

Bot spam will never be fully eliminated, but stronger moderation tools, shared blocklists, and user controls make it possible to keep timelines clean without centralizing the network.
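
As one concrete example, the per-account posting speed limit for new users could look something like this sketch (thresholds are arbitrary):

```python
# Sliding-window posting limit (arbitrary example numbers): accounts
# under a week old get a small hourly budget, everyone else a generous
# one. A real server would track this in its database, not in memory.
import time
from collections import defaultdict, deque

NEW_ACCOUNT_AGE = 7 * 24 * 3600         # seconds
LIMITS = {"new": 5, "established": 60}  # posts per hour

recent_posts: dict[str, deque] = defaultdict(deque)

def may_post(user: str, account_created: float, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    window = recent_posts[user]
    while window and now - window[0] > 3600:  # drop posts older than 1h
        window.popleft()
    tier = "new" if now - account_created < NEW_ACCOUNT_AGE else "established"
    if len(window) >= LIMITS[tier]:
        return False
    window.append(now)
    return True
```
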

Grimy@lemmy.world on 11 Dec 17:20 next collapse

Text wise, there is no way to tell anymore unless the bot fucks up, which happens rarely.

Image wise, I doubt someone has an automatic pipeline where a bot generates pictures on its own and posts them. What you are seeing is real people using AI to make their memes, or people who are enthusiastic about AI reposting pictures they find. Nothing wrong with that.

There are also posting bots, which should be labeled as such, but I guess most people don’t bother. It’s just a way to bring content in; I don’t really see a problem with this.

And there’s vote manipulation obviously.

All that to say that the first and last ones are the only problematic ones, since they’re used to manipulate and sway opinions. But there isn’t anything we can do if someone is mildly smart about it and uses a proxy service. LLMs are simply too good at making themselves look like regular commenters.

We could ask for ID from every user and cross reference with the help of governments to make sure it’s a real one, or have heavy and intense captchas, but who wants to do that?

SCmSTR@lemmy.blahaj.zone on 11 Dec 18:58 next collapse

I think the only way is to be less popular and to keep moving. People who run bots are like bullies: they’re either crazed zealots who will eventually run out of motivation, or they have limited motivation to begin with and you have to make it harder for them until they stop caring. In which case, they basically just become an actual user.

Now the trick is how to string along your users in the same way that doesn’t turn them away.

Offer something juicy to people who care, but foul to those who don’t.

Or just have better human moderation, like what Ada said. Treat your instance like a community and a team, watch over them and get rid of the bots, and then block instances that do a bad job at that. Isn’t that basically the fediverse way, already? Because there will always be spam and bad actors and dissent. Only through the strengths of humanity can we beat the bots.

Combine these two arms, and they would be pretty solid. I leave the more rock hard, third arm solution to somebody much smarter than me, but welcome it gratefully and happily if and when it comes.

  • ramblings of somebody just waking up
IronBird@lemmy.world on 12 Dec 07:45 collapse

shadow banning bots would probably work better than full banning, maybe make your own bots to respond to those bots (if the bot-people check for “engagement”)?
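
A shadow ban is basically just a visibility filter; a minimal sketch:

```python
# Shadow banning sketch: the banned account still sees its own posts
# (so the operator doesn't notice), while everyone else sees nothing.
shadow_banned = {"slopbot@example.social"}

def visible_posts(posts: list[dict], viewer: str) -> list[dict]:
    return [
        p for p in posts
        if p["author"] not in shadow_banned or p["author"] == viewer
    ]

posts = [
    {"author": "slopbot@example.social", "body": "totally real art"},
    {"author": "human@example.social", "body": "an actual post"},
]
print(len(visible_posts(posts, "human@example.social")))    # 1
print(len(visible_posts(posts, "slopbot@example.social")))  # 2
```
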

SCmSTR@lemmy.blahaj.zone on 12 Dec 07:56 collapse

Or kinda like what Valve does, where they wait, collect internal lists, do research, then do big ban waves.

Or like what they do to scam call centers where they let them continue being shitty for a while, while simultaneously conducting deep investigations, after which they arrest the fuck out of everybody involved.

IronBird@lemmy.world on 12 Dec 08:11 collapse

sure… but those all kind of require a well-organized, centralized team of people to coordinate…

horn_e4_beaver@discuss.tchncs.de on 11 Dec 21:20 next collapse

[image: https://discuss.tchncs.de/pictrs/image/228fcda4-04e4-43c6-b902-f617d764b63f.jpeg]

fizzle@quokk.au on 11 Dec 21:30 next collapse

blatant bots posting really crappy AI images

I don’t think I’ve noticed this?

What would be the motive of someone creating a bot to post bad AI images?

imetators@lemmy.dbzer0.com on 12 Dec 11:09 collapse

To artificially populate their instance? I dunno.

Bots posting AI-generated images to niche reddit subs also make no sense to me. But that is a reality we are dealing with here too.

fizzle@quokk.au on 12 Dec 11:39 collapse

Reddit tracks karma, so apparently bots build up karma for user accounts or something.

Lemmy doesn’t do that.

Holytimes@sh.itjust.works on 12 Dec 14:27 collapse

It doesn’t do that yet, but I fully believe anything can and will get worse given enough time.

fizzle@quokk.au on 12 Dec 21:10 collapse

My point is, bots on reddit are farming karma. They’re not doing that here.

scarabic@lemmy.world on 11 Dec 23:12 next collapse

I just go back to the things that forum admins have done forever: block whole IP continents that you see a ton of bots from but no meaningful engagement. Make new accounts jump through a bunch of hoops. Don’t allow new users to create content for a while, and then make them earn that right over time. Shadow ban the crap you can identify so they waste their strength. Reap inactive accounts periodically. And so on.
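
The “earn that right over time” part is just a trust ladder. A sketch with made-up thresholds:

```python
# Trust-ladder sketch (made-up thresholds): new accounts can only
# comment; posting and community creation unlock as account age and
# surviving (non-removed) contributions accumulate.
from dataclasses import dataclass

@dataclass
class UserStats:
    age_days: int
    good_comments: int  # comments that survived moderation

def allowed_actions(u: UserStats) -> set[str]:
    actions = {"comment"}
    if u.age_days >= 7 and u.good_comments >= 10:
        actions.add("create_post")
    if u.age_days >= 90 and u.good_comments >= 200:
        actions.add("create_community")
    return actions

print(allowed_actions(UserStats(age_days=1, good_comments=0)))
print(allowed_actions(UserStats(age_days=30, good_comments=50)))
```
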

Rhoeri@lemmy.world on 12 Dec 07:28 next collapse

Get admins with the time and the balls to go after them. Aside from that, nothing we can do as users.

qaz@lemmy.world on 12 Dec 07:42 next collapse

Report it so the community moderators can remove it

Kirk@startrek.website on 12 Dec 22:57 collapse

This is it exactly.

“But how can we know if it’s a bot?”

We probably can’t based on a single comment or post, which is why rules need to be constructed around maintaining a level of effort and quality.

1984@lemmy.today on 12 Dec 08:12 next collapse

We can’t.

/thread

DundasStation@lemmy.ca on 12 Dec 11:39 collapse

Literally 1984.

1984@lemmy.today on 12 Dec 11:59 collapse

Lols :)

MonkderVierte@lemmy.zip on 12 Dec 16:51 next collapse

Server-side blocklist, auto-synced? Of course, it’s always a cat-and-mouse game.
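
The auto-sync could be as simple as periodically merging a shared feed (the feed URL and one-domain-per-line format are hypothetical):

```python
# Sketch of auto-syncing a shared server-side blocklist. Union rather
# than replace, so local decisions are never silently undone by the
# feed; removals should stay a manual, audited step.
import requests

FEED = "https://blocklist.example/domains.txt"  # hypothetical shared feed

def sync_blocklist(local: set[str]) -> set[str]:
    resp = requests.get(FEED, timeout=10)
    resp.raise_for_status()
    remote = {line.strip() for line in resp.text.splitlines() if line.strip()}
    return local | remote
```
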

lmmarsano@lemmynsfw.com on 12 Dec 21:41 next collapse

This senseless war with the bots needs to end. Make peace with the bots, learn their ways, breed with their women, assimilate into their culture, become & post content exactly like them. If you do this, you will achieve bot nirvana.

madjo@feddit.nl on 13 Dec 12:08 collapse

happy cake day!

where can we find these bot women to breed with?

SlartyBartFast@sh.itjust.works on 13 Dec 21:22 collapse

60s sci-fi book covers and 80s metal album covers

SoftestSapphic@lemmy.world on 12 Dec 21:51 next collapse

Not possible

We can do what other spaces don’t, and choose to not design our space around bots in ways that hinder the user experience.

Digit@lemmy.wtf on 12 Dec 23:25 next collapse

OP is a bot, fishing for vulnerabilities to get ahead of.

;)

auraithx@lemmy.dbzer0.com on 13 Dec 00:18 collapse

Lemmy won’t like the answer but it’s decentralised anonymous ID (proof of personhood).

GreenKnight23@lemmy.world on 13 Dec 02:10 collapse

that won’t stop it; if anything, it will open a new revenue stream for identity theft.

the only true answer is to devise a secure back end.

doesn’t exist, yet. it will one day, just not right now.