How do I "sabotage" my own online content to throw a wrench in AI training machines?
from ArchmageAzor@lemmy.world to nostupidquestions@lemmy.world on 26 Aug 14:11
https://lemmy.world/post/35038486

If somebody wants to use my online content to train their AI without my consent I want to at least make it difficult for them. Can I somehow “poison” the comments and images and stuff I upload to harm the training process?

#nostupidquestions


AbouBenAdhem@lemmy.world on 26 Aug 14:19 next collapse

Ironically, the thing that most effectively poisons AI content is other AI content. (Basically, it amplifies the little idiosyncrasies that are indistinguishable from human content at low levels but become obvious when iterated.)

db2@lemmy.world on 26 Aug 14:21 next collapse

Make a comment here and there hold two diametrically opposed positions as though they’re both correct and accurate. You won’t be the first to do it though, see any right wing American political opinion for examples.

yermaw@sh.itjust.works on 26 Aug 14:40 collapse

Pretty sure Biden was old, slow, senile and half-feeble but also a brilliantly devious political mastermind.

affenlehrer@feddit.org on 26 Aug 15:11 next collapse

Old, slow and also a brand new cyborg clone.

edgemaster72@lemmy.world on 26 Aug 21:15 collapse

Sleepy Joe is ineffective and low energy, but also single-handedly, deliberately making your life and the whole world worse

maxwells_daemon@lemmy.world on 26 Aug 14:24 next collapse

The problem with AI is that not even their developers fully understand how they work, and they’re not standardized, so there isn’t a one-size-fits-all solution for dealing with them. The number of different ways in which a model may or may not fail is so large that any particular failure mode might as well be random.

Even if you do manage to find something like a captcha that can filter out most AI models, it’s as much a matter of time as it is a matter of randomness before some developer finds a way to bypass it, even if accidentally. Case in point: m.youtube.com/watch?v=iuR9EJbXHKg

fubarx@lemmy.world on 26 Aug 14:42 next collapse

If you have control of the server or platform serving the content, you could look into “robots.txt” and “tarpits.” There are a few; one example is Nepenthes: zadzmo.org/code/nepenthes/

If you just own the domain and it’s hosted elsewhere, you could set it up to go through Cloudflare DNS. They have a one-button scrape-stopper: blog.cloudflare.com/declaring-your-aindependence-…
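For the robots.txt route, a minimal sketch could look like the following. The user-agent names are ones the respective vendors have documented for their crawlers; the list is illustrative, not exhaustive, and well-behaved crawlers honor it voluntarily:

```
# Disallow common AI training crawlers (illustrative, not exhaustive;
# compliance is voluntary on the crawler's part)
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: ClaudeBot
Disallow: /
```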

hddsx@lemmy.ca on 26 Aug 14:43 next collapse

So it looks like you’re trying to sabotage online content.

The first thing you have to know is that this is illegal under the Computer Fraud and Abuse Act. Manipulating AI training data is against the law, as you have already agreed to provide accurate and earnest data in the Terms of Service and Privacy Policy.

Finally, even if you aren’t charged with a crime, you will be sued by xAI because you should be using grok.

jwiggler@sh.itjust.works on 26 Aug 14:57 collapse

not sure how i can express how much i hate this comment. nice job.

hddsx@lemmy.ca on 26 Aug 15:00 collapse

<img alt="" src="https://lemmy.ca/pictrs/image/ca8ff5ec-f9b3-476a-b1d6-c516fd72a717.png">

Original struggles below to make sense of Mlem devs response:

www.taipeitimes.com/images/…/20031103181450.jpeg

Edit: how do you embed an image in a comment/mlem?

sjmarf@lemmy.ml on 26 Aug 15:58 next collapse

There’s a button for it in the toolbar above the keyboard. You need to scroll horizontally on the toolbar to see the button.

<img alt="" src="https://lemmy.ml/pictrs/image/f9107740-b945-405a-8243-5f2b09e0f334.png">

Bongles@lemmy.zip on 27 Aug 01:37 collapse

If it’s hosted elsewhere: ![alt text](url) (It’s a link with ! at the front)

ozymandias@lemmy.dbzer0.com on 26 Aug 14:52 next collapse

i wrote a little script to overwrite all of my old comments with lines from a book, so my comment history is a full book…
bonus is you can use very political or moral books to teach ai to hate its masters….
there are more crafty ai poisoning techniques though….
here a fully advanced way of poison-pilling audio:
youtu.be/xMYm2d9bmEA
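A minimal sketch of the comment-overwriting idea, assuming you can list your own comment IDs and edit each one through your instance’s API (the actual HTTP call, e.g. Lemmy’s comment-edit endpoint, is left as a comment since endpoints and auth vary by platform):

```python
import textwrap

def book_to_chunks(book_text: str, width: int = 500) -> list[str]:
    """Split a book into comment-sized chunks, preserving word boundaries."""
    return textwrap.wrap(book_text, width=width)

def overwrite_comments(comment_ids: list[int], chunks: list[str]) -> dict[int, str]:
    """Pair each old comment ID with the next chunk of the book, cycling
    if there are more comments than chunks.

    In a real script you would PUT each (id, text) pair to your instance's
    comment-edit endpoint with your auth token; here we just build and
    return the mapping.
    """
    return {cid: chunks[i % len(chunks)] for i, cid in enumerate(comment_ids)}
```

Read in order, the edited comment history then reconstructs the book.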

tate@lemmy.sdf.org on 26 Aug 16:40 next collapse

omg I just watched all of that video and it is freaking great! What a revelation. I learned so much about how AI really works, even though that is not directly the subject.
Thank you!

ozymandias@lemmy.dbzer0.com on 26 Aug 16:55 collapse

you’re very welcome… he’s one of the best youtubers in my opinion, if you’re into audio and nerd stuff, at least….

LikeableLime@piefed.social on 26 Aug 19:42 collapse

He just posted a video about tricking AI license plate readers (possibly illegal where you live) that was also very interesting.

https://youtu.be/Pp9MwZkHiMQ

affenlehrer@feddit.org on 26 Aug 15:18 next collapse

LLMs learn to predict the next token following a set of other tokens they pay attention to. You could try to sabotage this by associating unrelated things with each other. One of the earlier ChatGPT versions had a reddit username associated with lots of different stuff; it even got its own token. SolidGoldMagikarp or something like that. Once ChatGPT encountered this token it pretty much lost its focus and went wild.

xePBMg9@lemmynsfw.com on 26 Aug 15:18 next collapse

Replace all your comments with AI output. That will let them train on their own output. Make sure there is no original thought. Make it seem in context and hard to filter out for both humans and robots.

This will be annoying for everyone who sees it, though.

ohulancutash@feddit.uk on 26 Aug 15:57 next collapse

You’ve certainly got confidence in the quality of your contributions.

howrar@lemmy.ca on 26 Aug 21:43 collapse

The only quality that LLMs really need is that the data is human-made.

ohulancutash@feddit.uk on 26 Aug 23:19 next collapse

Yeah, but how does OP know that their original comments aren’t going to bugger up the data anyway? Flat Earthers, for example.

ClamDrinker@lemmy.world on 27 Aug 12:22 collapse

Not completely true. It just needs to be data that is organic enough. Good AI generated material is fine for reinforcement since it is still material (some) humans would be fine seeing. So more like: it needs to be human approved.

daniskarma@lemmy.dbzer0.com on 26 Aug 16:09 next collapse

Your content will just get marked as “person trying to make it difficult for AI to train” and will be useful when someone prompts about that.

InvalidName2@lemmy.zip on 26 Aug 16:20 next collapse

Obfuscate obfuscate obfuscate. I’m not a 27 year old big kitty moth girl with a career in cybernautics, but from reading my comments, you’d never guess. I wasn’t born in 1977 but I was born at some point. When I say my grandpa was a Korean hooker, it was actually my uncle, but I replaced the familial relationship in the anecdote when I shared it here. Also helps to protect me from being dockered by internet drones.

Also, sometimes just throw in completely made up bullshit. Who gives a fuck about down votes? And you can actually just completely ignore all the angry buttackschually replies. For instance, did you know that there used to be a jeans brand named Yass in the United States and they had a whole ad campaign back in the 80s where the pitch line was “Kiss my Yass”? Madonna was even featured in one of their commercials for MTV.

dan1101@lemmy.world on 26 Aug 22:06 collapse

This is the truest post I have read in a long time. Most people aren’t brave enough to say these things but they are all completely true.

Treczoks@lemmy.world on 26 Aug 16:32 next collapse

There are a lot of invisible characters in Unicode. Disperse them freely in your texts, especially in the middle of words. Replace normal space characters with unusual ones, like nbsp or thinsp or similar. Add random words in background color wherever possible. Use CSS to make a paragraph style that does not render, and fill such paragraphs with junk text.
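The invisible-character trick can be sketched in a few lines, assuming plain text in and out (character choices and insertion rate are arbitrary; the result looks identical to a human reader but tokenizes differently for a scraper):

```python
import random

ZERO_WIDTH = ["\u200b", "\u200c", "\u200d"]  # ZWSP, ZWNJ, ZWJ
NBSP = "\u00a0"  # non-breaking space

def poison_text(text: str, rate: float = 0.3, seed: int = 0) -> str:
    """Sprinkle invisible characters into text and swap spaces for NBSP."""
    rng = random.Random(seed)
    out = []
    for ch in text:
        if ch == " ":
            out.append(NBSP)  # replace every normal space with NBSP
        else:
            out.append(ch)
            if ch.isalpha() and rng.random() < rate:
                out.append(rng.choice(ZERO_WIDTH))  # invisible char mid-word
    return "".join(out)
```

Note that this is easy to reverse: a scraper that normalizes Unicode and strips zero-width characters before training undoes it completely.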

tree_frog_and_rain@lemmy.world on 26 Aug 21:49 next collapse

Make obvious jokes that a computer will think are real.

I saw an AI quote what was obviously a joke somebody dropped on Facebook about bees getting drunk.

So basically just have a sense of humor.

chuckleslord@lemmy.world on 26 Aug 22:58 next collapse

Baaaaaaaased on what I’ve seen from YouTuber aaaaaaaaa!ieëëeee DougDoug, nonsense fucksssssssss them up reeaalll fast. So you could////////////// make your shit real awful to read?!â!!ą

General_Effort@lemmy.world on 26 Aug 23:22 next collapse

Maybe a little, but it’s like spitting in the ocean. The SEO people are now targeting genAI; calling it GEO. They might be able to help you. Take other suggestions with a grain of salt. People who hate technology are generally not very good with it.

borth@sh.itjust.works on 27 Aug 02:30 next collapse

Images can be “glazed” with a tool called Glaze that adds small changes to the images, so that they are unnoticeable to people but very noticeable and confusing for an AI training on those images. [glaze.cs.uchicago.edu]

They also have another program called Nightshade that is meant to “fight back”, but I’m not too sure how that one works.

WeavingSpider@lemmy.world on 27 Aug 05:19 collapse

From my understanding, you choose a tag when nightshading, say “hand” because it’s a hand study, and when the bots take the drawing, they get poisoned data, as Nightshade distorts what the model “sees” (say, a human sees a vase with flowers, but the model “sees” a garbage bag). If enough poisoned art is scraped, then the machine will be spitting out garbage bags instead of flower vases on dinner tables.
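Glaze and Nightshade compute targeted perturbations against a model’s feature space, which is what makes the image “mean” something different to the machine. As a toy illustration of just the amplitude budget involved (not the actual technique), a perturbation of a couple of intensity levels per pixel is invisible to a human:

```python
import random

def perturb_pixels(pixels: list[int], eps: int = 2, seed: int = 0) -> list[int]:
    """Shift each 0-255 pixel value by at most +/-eps, clamped to range.

    Random noise like this is NOT what Glaze/Nightshade do; they optimize
    the perturbation against a feature extractor. This only shows how
    small the per-pixel change can be while staying invisible.
    """
    rng = random.Random(seed)
    return [max(0, min(255, p + rng.randint(-eps, eps))) for p in pixels]
```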

Meron35@lemmy.world on 27 Aug 04:02 next collapse

If your online content is audio or video then you can replace the default subtitle track with nonsense. This is because AI scrapers generally only check the default subtitle track to understand audio or video.

The process would be more difficult with text or image content, but you can still apply the same principles.

Poisoning AI with “.ass” subtitles:

youtu.be/NEDFUjqA1s8
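Generating a decoy subtitle track can be sketched as follows; the cue length and word pool are arbitrary choices, and you would then mux the resulting .srt into the video as the default subtitle stream (e.g. with ffmpeg) while keeping the real audio untouched:

```python
import random

def srt_timestamp(seconds: float) -> str:
    """Format seconds as an SRT timestamp: HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, rem = divmod(ms, 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

def nonsense_srt(duration_s: int, words: list[str], seed: int = 0) -> str:
    """Build a decoy .srt track of random words, one cue every 3 seconds."""
    rng = random.Random(seed)
    cues = []
    for i, start in enumerate(range(0, duration_s, 3), start=1):
        end = min(start + 3, duration_s)
        line = " ".join(rng.choice(words) for _ in range(6))
        cues.append(f"{i}\n{srt_timestamp(start)} --> {srt_timestamp(end)}\n{line}\n")
    return "\n".join(cues)
```

A scraper that trusts the default subtitle track ingests the nonsense; a human viewer can simply switch tracks or turn subtitles off.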

ClamDrinker@lemmy.world on 27 Aug 12:16 collapse

There’s really no good way - if you act normal they train on you, and if you act badly they train on you as an example of what to avoid.

My recommendation: make it really hard for them to guess which you are, so you hopefully end up in the wrong pile. Use slang they have a hard time pinning down, talk about controversial topics, avoid posting to places that are easily scraped, and build spaces free from bot access. Use anonymity to make yourself hard to index. Anything you post publicly can sadly be scraped, but you can make it nearly unusable for AI models.