Data poisoning: how artists are sabotaging AI to take revenge on image generators::As AI developers indiscriminately suck up online content to train their models, artists are seeking ways to fight back.

[–] gaiussabinus@lemmy.world 69 points 11 months ago (1 children)

This system runs on the assumption that A) massive generalized scraping is still required, B) you maintain the metadata of the original image, and C) no transformation has occurred to the poisoned picture prior to training (Stable Diffusion is 512x512). Nowhere in the linked paper do they say they conditioned the poisoned data to conform to the data set. This appears to be a case of fighting the last war.
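To illustrate assumption B, here is a minimal Pillow sketch (file names are made up, and this is not from the paper): simply re-encoding a scraped file already throws away whatever metadata travelled with it.

```python
from PIL import Image

# Decoding the pixels and saving a fresh copy drops EXIF/XMP and any other
# embedded metadata by default, which already breaks assumption (B).
img = Image.open("scraped_artwork.jpg").convert("RGB")
img.save("reencoded_copy.jpg", quality=95)  # Pillow only writes EXIF if you pass it explicitly
```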

[–] sukhmel@programming.dev 16 points 11 months ago

It is likely a typo, but "last AI war" sounds ominous πŸ˜…

[–] Blaster_M@lemmy.world 59 points 11 months ago

Takes image, applies antialiasing and resize

Oh, look at that, defeated by the completely normal process of preparing the image for training
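For reference, this is roughly the kind of preprocessing step being described; a sketch using Pillow, with the 512x512 target and the file names as placeholders:

```python
from PIL import Image

def prep_for_training(path, size=512):
    """Typical ingestion: decode to plain RGB and resample down to the
    model's training resolution with an antialiasing filter."""
    img = Image.open(path).convert("RGB")            # drops alpha channel, ICC profile, etc.
    return img.resize((size, size), Image.LANCZOS)   # antialiased downscale

prep_for_training("poisoned_artwork.png").save("training_sample.jpg", quality=90)
```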

[–] qooqie@lemmy.world 40 points 11 months ago

Unfortunately for them, there are a lot of jobs dedicated to cleaning data, so I'm not sure this would even be effective. Plus there's an overwhelming amount of data that isn't "poisoned", so it would just get drowned out even if it were never caught.

[–] zwaetschgeraeuber@lemmy.world 28 points 11 months ago (1 children)

Nightshade and Glaze never worked. It's a scam lol

[–] kromem@lemmy.world 12 points 11 months ago

Shhhhh.

Let them keep doing the modern equivalent of "I do not consent for my MySpace profile to be used for anything" disclaimers.

It keeps them busy on meaningless crap that isn't actually doing anything but makes them feel better.

[–] Potatos_are_not_friends@lemmy.world 28 points 11 months ago (2 children)

Imagine if writers did the same things by writing gibberish.

At some point, it becomes pretty easy to devalue that content and create other systems to filter it.

[–] scorpious@lemmy.world 9 points 11 months ago

if writers did the same things by writing gibberish.

Aka, β€œX”

[–] books@lemmy.world 2 points 11 months ago (2 children)

I mean, isn't that eventually going to happen? Isn't AI eventually going to learn and get trained on AI-generated datasets, and won't small issues start to propagate exponentially?

I just assume we have a clean pre-AI dataset and a messy, gross post-AI dataset... If it keeps learning from the latter dataset it will just get worse and worse, no?

[–] General_Effort@lemmy.world 3 points 11 months ago

Not really. It's like with humans. Without the occasional reality check it gets weird, but what people choose to upload is a reality check.

The pre-AI web was far from pristine, no matter how you define that. AI may improve matters by increasing the average quality.

[–] Kushia@lemmy.ml 22 points 11 months ago

Artists and writers should be entitled to compensation for the use of their works to train these models, just as they would be for any other commercial use. But, you know, strict, brutal free-market capitalism for us, not for the megacorps who are using it because "AI".

[–] HejMedDig@feddit.dk 15 points 11 months ago (2 children)

Let's see how long it takes before someone figures out how to poison images so the model returns NSFW images

[–] daxnx01@lemmy.world 6 points 11 months ago (1 children)

You can already create NSFW AI images, though?

Or did you mean that when poisoned data is used, an NSFW image is created instead of the expected image?

[–] HejMedDig@feddit.dk 8 points 11 months ago

Definitely the latter!

[–] AVincentInSpace@pawb.social 4 points 11 months ago* (last edited 11 months ago) (4 children)

Companies would stumble all over themselves to figure out how to get it to stop doing that before going live. Source: they already are. See Bing's image generator appending "ethnically ambiguous" to every prompt it receives.

It would be a Herculean if not impossible effort on the artists' part, only to watch the corpos scramble for two weeks max.

When will you people learn that you cannot fight AI by trying to poison it? There is nothing you can do that horny weebs haven't already done.

[–] General_Effort@lemmy.world 3 points 11 months ago

It can only target open source, so it wouldn't bother corpos at all. The people behind this object to not everything being owned and controlled. That's the whole point.

[–] kromem@lemmy.world 14 points 11 months ago* (last edited 11 months ago)

This doesn't actually work. The ingestion pipeline doesn't even need to do anything special to avoid it.

Let's say you draw cartoon pictures of cats.

And your friend draws pointillist images of cats.

If you and your friend don't coordinate, it's possible you'll bias your cat images to look like dogs in the data but your friend will bias their images to look like horses.

Now each of your biasing efforts become noise and not signal.

Then you need to consider if you are also biasing 'cartoon' and 'pointillism' attributes as well, and need to coordinate with the majority of other people making cartoon or pointillist images.

When you consider the number of different attributes that need to be biased for a given image, and the compounding number of coordinations that would have to happen at scale for this to be effective, it's a nonsense initiative. It made for an interesting research paper under lab conditions, but it's the equivalent of a mouse-model or in vitro cancer cure being taken up by naturopaths as if it's going to work in humans.
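A toy numpy sketch of that cancellation effect (made-up vectors, nothing to do with how Nightshade actually computes its perturbations): two artists pushing "cat" toward different targets end up with a combined shift that is smaller and points at neither.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "concept directions" in some embedding space (purely illustrative).
dog = rng.normal(size=512);   dog /= np.linalg.norm(dog)
horse = rng.normal(size=512); horse /= np.linalg.norm(horse)

shift_a = 0.1 * dog     # artist A biases their cat images toward "dog"
shift_b = 0.1 * horse   # artist B biases theirs toward "horse"

# The trainer sees the average over all poisoned cat images from both artists.
combined = (shift_a + shift_b) / 2

print(np.linalg.norm(shift_a))   # ~0.10, each artist's individual push
print(np.linalg.norm(combined))  # ~0.07, weaker and aimed at neither target
```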

[–] RagingRobot@lemmy.world 11 points 11 months ago (1 children)

So it sounds like they are taking the image data and altering it to get this to work, and the image still looks the same, it's just the underlying data that's different. So couldn't the AI companies take screenshots of the image to get around this?

Not even that, they can run the training dataset through a bulk image processor to undo it, because the way these things work makes them trivial to reverse. Anybody at home could undo this with GIMP and a second or two.

In other words, this is snake oil.

[–] uriel238@lemmy.blahaj.zone 10 points 11 months ago* (last edited 11 months ago)

The general term for this is adversarial input, and we've seen published reports about it since 2011, when it was considered a threat that CSAM could be overlaid with secondary images so it wasn't recognized by Google image filters or CSAM image trackers. If Apple had gone through with their plan to scan private iCloud accounts for CSAM, we might have seen this development.

So far (AFAIK) we've not seen adversarial overlays on CSAM, though in China the technique is used to deter tracking by facial recognition. Images on social media are overlaid by human rights activists / mischief-makers so that social media pics fail to match security footage.

The thing is, like an invisible watermark, these processes are easy to detect (and reverse) once users are aware they're a thing. So if a generative AI project is aware that some images may be poisoned, it's just a matter of adding a detection and removal process to the pathway from candidate image to training database.

Similarly, once enough people start poisoning their social media images, the data scrapers will start scanning for and removing overlays even before the datasets are sold to law enforcement and commercial interests.
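As a rough idea of what such a screening step might look like (purely hypothetical, not any particular scraper's pipeline): score each candidate by how much high-frequency "overlay" energy a light blur removes, then clean or drop the outliers before they reach the training set.

```python
import numpy as np
from PIL import Image, ImageFilter

def overlay_score(path):
    """Crude screen for adversarial overlays: how much detail vanishes
    after a mild Gaussian blur. High scores suggest noisy perturbations."""
    gray = Image.open(path).convert("L")
    blurred = gray.filter(ImageFilter.GaussianBlur(radius=1.5))
    a = np.asarray(gray, dtype=np.float32)
    b = np.asarray(blurred, dtype=np.float32)
    return float(np.mean(np.abs(a - b)))

# Candidates scoring far above the dataset's typical value get re-encoded,
# downscaled, or simply excluded before training.
print(overlay_score("candidate_image.png"))
```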
