this post was submitted on 09 Jan 2024
AI Generated Images
[–] merc@sh.itjust.works 4 points 10 months ago (2 children)

Generative AI is based on "predicting" and generating the next token. Tune it one way and it will regurgitate its training data exactly. Tune it the other way and the words it comes up with are nonsense. Tune it just right and it comes up with something that seems creative.
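The tuning knob described above is, in one common form, the sampling temperature. A minimal sketch of temperature-scaled sampling over toy next-token scores (the function name and scores are hypothetical, for illustration only):

```python
import math
import random

def sample_next_token(logits, temperature):
    """Sample a next token from raw scores, scaled by temperature.

    Temperature near zero -> near-deterministic (regurgitates the most
    likely continuation); very high temperature -> near-uniform (nonsense);
    something in between -> output that seems creative.
    """
    if temperature <= 0:
        # Greedy decoding: always pick the single most likely token.
        return max(logits, key=logits.get)
    # Softmax with temperature scaling (shifted by the max for stability).
    scaled = {tok: score / temperature for tok, score in logits.items()}
    peak = max(scaled.values())
    weights = {tok: math.exp(s - peak) for tok, s in scaled.items()}
    # Weighted random draw proportional to the softmax weights.
    r = random.random() * sum(weights.values())
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok

# Toy scores for the continuation of "It was a dark and stormy ..."
logits = {"night": 5.0, "evening": 2.0, "banana": -3.0}
```

At `temperature=0` this always returns `"night"`; as the temperature rises, `"banana"` becomes steadily more likely.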

The problem is that the training data is always in there somewhere. It can't generate something in the style of Shakespeare without containing Shakespeare as a reference. That's probably fine for Shakespeare, which is out of copyright, but if it contains, say, Stephen King's entire collected works, that's another issue.

If a human writer read all of Stephen King's books and then tried to write in the style of King, that would be OK, because a human can't memorize everything King has written word for word. When a human reads King, they don't build up a database of "probable next word frequencies"; instead, they build heuristics about how he approaches dialogue, how he reveals character, how he builds tension, and so on. They may remember an especially memorable line or two, but the bits they remember, even written down word for word, probably wouldn't be enough to be copyright infringement on their own.
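A literal "probable next word frequency" database, as opposed to the heuristics a human builds, can be sketched in a few lines (toy text and a hypothetical function name, purely for illustration):

```python
from collections import Counter, defaultdict

def next_word_frequencies(text):
    """Build the literal next-word frequency table: for each word,
    count which words follow it and how often."""
    words = text.lower().split()
    table = defaultdict(Counter)
    for current, following in zip(words, words[1:]):
        table[current][following] += 1
    return table

table = next_word_frequencies(
    "the cat sat on the mat and the cat slept"
)
# table["the"] -> Counter({'cat': 2, 'mat': 1})
```

A table like this reproduces its source text statistically; it stores the text's surface patterns directly rather than any higher-level heuristics about style.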

I would bet that we've come too far to completely scrap generative AI. Too many billions have been invested, and the companies have too much political power. So, the question is whether there will be significant changes to copyright law. On one side of that fight will be the trillions of dollars behind the entertainment industry. On the other side of that fight will be the trillions of dollars behind the tech industry. Of course, individual artists will be trampled in the process.

[–] mindbleach@sh.itjust.works 4 points 10 months ago

A network containing much of its training set is broken.

Deep networks do find heuristics. That's what all the layers are for. That's why it takes abundant training, instead of abundant storage. We already had computers that can give you the next word of a Stephen King novel... they're called e-books.

Tune AI just right and it'll know that Stephen King writes horror, in English - having distilled both concepts from raw data. Grammar is a demonstration of novel output. The fact these things can conjugate a verb (or count fingers on a hand) is deep magic. There are hints of them being able to do math, which you'd think is trivial for a supercomputer, except it'd have to be doing math roughly the same way you do math.

Anyway: a generative LLM should ideally contain about as much original data per subject as that subject's Wikipedia article. Key names, the general premise, relevant dates, and then enough labels to cobble together some kind of bootleg.

The trouble comes from people making question-answering LLMs, which for obvious reasons are supposed to contain all the details necessary to pass a pop quiz. This is fundamentally at odds with making shit up. (It's also not very good at answering questions, so they should really focus on training a network that can evaluate text instead of training a network on that text.)

Image AI seems entirely focused on making shit up, which makes the blatant overfitting in MidJourney a head-scratcher. Knowing what Darth Vader looks like is a non-event. Everyone knows what Darth Vader looks like, and everyone knows he correlates strongly with laser-swords. Even being able to draw vaguely cinematic frames is whatever, because it turns out a lot of things look like a lot of other things. But some of those Dune examples are trying to pass a pop quiz. That's just incorrect behavior.

The draw-anything machine should absolutely be able to draw frames that look like they're from Denis Villeneuve's adaptation. Key words: look like. Floppy hair, muted colors, recognizable specific actors, sure. Probably even matching the framing of one shot or other, because again, movies look like movies. But if any specific frame is simply being reproduced, the process has gone wrong. That's simply not what it's for.

[–] criitz@reddthat.com 3 points 10 months ago (2 children)

It seems, though, that in the long run the line between a human reading Shakespeare and coming up with their own version and a computer doing the same will get thinner and thinner. After all, we are really just biological computers. One could imagine a computer "thinking" of things the same "way" that we do. What then?

[–] merc@sh.itjust.works 3 points 10 months ago* (last edited 10 months ago) (1 children)

One could imagine a computer “thinking” of things the same “way” that we do.

One can imagine it, but that's been the impossible nut to crack ever since the first computers. People have been saying that artificial intelligence (what we now call AGI instead) is five years away since the 1970s, if not earlier.

The new generative systems seem intelligent, but they're just really good at predicting the next word. There's no consciousness there. As good as LLMs are, they can't plan for the future. They don't have goals.

The only interesting twist here is that consciousness / free will might not really exist, at least not in the form most people think of it. So, maybe LLMs are closer to being "thinking" computers not because they're getting closer to consciousness / free will, but because we're starting to realize free will was an illusion all along.

[–] criitz@reddthat.com 2 points 10 months ago* (last edited 10 months ago) (1 children)

That's what I mean. We elevate the human thought process as if what we come up with is more valid than what a (future) computer could think up. But is it?

So if a computer synthesizing Shakespeare is stealing, maybe a human doing it is too. But then maybe we could never create anything at all. And if we must not be blocked from creating, why should a machine be?

[–] merc@sh.itjust.works 2 points 10 months ago (1 children)

So if a computer synthesizing Shakespeare is stealing

Copyright infringement is never stealing. But as to whether it's infringing copyright, the difference is that current laws were designed around human capabilities. If memorizing hundreds of books word for word were a typical human ability, copyright would probably look very different. Instead, normal humans are only capable of memorizing short passages, but they're capable of spotting patterns, understanding rhythms, and so on.

The human brain contains something like 100 billion neurons, and many of them are dedicated to things like hearing, seeing, eating, walking, sex, etc. Only a tiny fraction are available for a task like learning to write like Shakespeare or Stephen King. GPT-4 reportedly contains around 2 trillion parameters, and every one of them is dedicated to "writing". So, we have to think differently about whether what it's storing is "fair" when it comes to infringing someone's copyright.

Personally, I think copyright is currently more harmful than helpful, so I like that LLMs are challenging the system. OTOH, I can understand how it's upsetting for an artist or a writer to see that SALAMI can reproduce their stuff almost exactly, or produce something in their style so well that it effectively makes them obsolete.

[–] Thermal_shocked@lemmy.world 2 points 10 months ago

You, sir, are a Turing machine.