Try to train a human comedian to make jokes without ever allowing him to hear another comedian's jokes, never watching a movie, never reading a book or magazine, never watching a TV show. I expect the jokes would be pretty weak.
Technology
A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.
Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.
This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.
A comedian isn't forming a sentence based on which word is most probable to appear after the previous one. This is such a bullshit argument that reduces human competency to "monkey see thing, draw thing" and completely overlooks the craft and intent behind creative works. Do you know why ChatGPT uses certain words over others? Probability. It decided, as a result of its training, that one word would appear after another in certain contexts. It absolutely doesn't take into account things like "maybe this word would be better here because its sound and syllables maintain the flow of the sentence".
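For what it's worth, the "probability" mechanism being described can be sketched in a few lines: a language model assigns a score to every candidate next token, those scores are turned into a probability distribution, and the next word is sampled from it. This is a toy illustration only; the vocabulary and scores below are made up, not from any real model:

```python
import math
import random

def softmax(scores):
    """Convert raw model scores (logits) into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Made-up scores a model might assign to candidate next words
vocab = ["cat", "dog", "the", "ran"]
logits = [2.0, 1.0, 0.1, -1.0]

probs = softmax(logits)

# The highest-scoring word gets the largest share of probability mass,
# but sampling can still pick any word, weighted by its probability.
next_word = random.choices(vocab, weights=probs, k=1)[0]
```

Note that nothing in this loop encodes rhythm, syllable count, or intent; any such effect only shows up indirectly, through the scores the trained weights happen to produce.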
Baffling takes from people who don't know what they're talking about.
I wish I could upvote this more than once.
What people always seem to miss is that a human doesn't need billions of examples to be able to produce something that's kind of "eh, close enough". Artists don't look at billions of paintings. They look at a few, but do so deeply, absorbing not just the most likely distribution of brushstrokes, but why the painting looks the way it does. For a basis of comparison, I did an art and design course last year and looked at about 300 artworks in total (course requirement was 50-100). The research component on my design-related degree course is one page a week per module (so basically one example from the field the module is about, plus some analysis). The real bulk of the work humans do isn't looking at billions of examples: it's looking at a few, and then practicing the skill and developing a process that allows them to convey the thing they're trying to express.
If the AI models were really doing exactly the same thing humans do, they could be trained without any copyright infringement at all, because all of the public domain and Creative Commons content, plus maybe licensing a little more, would be more than enough.
Exactly! You can glean so much from a single work: not just about the work itself, but who created it, what ideas they were trying to express, what that tells us about the world they lived in and how they saw that world.
This doesn't even touch the fact that I'm learning to draw not by looking at other drawings but at what I'm actually trying to draw. I know that at a base level, a drawing is a series of shapes made by hand, whether it's through a digital medium or traditional pen/pencil and paper. But the skill isn't being able to replicate other drawings; it's being able to convert something I can see into a drawing. If I'm drawing someone sitting in a wheelchair, I'll get the pose of them sitting in the wheelchair, but I can add details I want to emphasise or remove details I don't want. There's so much that goes into creative work, and I'm tired of arguing with people who have no idea what it takes to produce creative works.
It seems that most of the people who think what humans and AIs do is the same thing are not actually creatives themselves. Their level of understanding of what it takes to draw goes no further than "well anyone can draw, children do it all the time". They have the same respect for writing, of course, equating the ability to string words together to write an email with the process it takes to write a brilliant novel or script. They don't get it, and to an extent, that's fine - not everybody needs to understand everything. But they should at least have the decency to listen to the people that do get it.
That’s what humans do, though. Maybe not probability directly, but we all know that some words should be put in a certain order. We still operate within standard norms that apply to a particular group of people. LLMs just go about it in a different way, but they achieve the same general result. If I’m drawing a human, that means there’s a ‘hand’ here and a ‘head’ there. ‘Head’ is a weird combination of pixels that mostly looks like this; ‘hand’ looks kinda like that. It all depends on how the model is structured, but tell me that’s not very similar to a simplified version of how humans operate.
Yeah, but the difference is we still choose our words. We can still alter sentences on the fly. I can think of a sentence and understand that verbs go after the subject, but I still have the cognition to alter the sentence to have the effect I want. The thing lacking in LLMs is intent, and I've yet to see anyone tell me why a generative model decides to draw more than six fingers. As humans we know hands generally have five fingers (and there's a group of people who don't), and we can deliberately draw a person with a different number of fingers if we want to. A generative art model can't stop itself from drawing extra fingers, because all it understands is that "finger + finger = hand"; it has no concept of when to stop.
There's this linguistic problem: when one word is used for two different things, it becomes difficult to tell them apart. "Training" or "learning" is a very poor choice of word to describe the calibration of a neural network. The actor and the action are both fundamentally different from the accepted meaning. To start with, human learning is active, whereas machine learning is strictly passive: it's something done by someone, with the machine as a tool. Teachers know very well that's not how it happens with humans.
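To make the "calibration" framing concrete: what gets called training is an automated numerical procedure that nudges weights to reduce an error measure, with no agency on the model's side. A minimal sketch under illustrative assumptions (a one-weight model fit by gradient descent; the data, starting weight, and learning rate are made up):

```python
# "Training" as passive calibration of a single weight w so that
# y ≈ w * x, via gradient descent on squared error.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # targets follow y = 2x
w = 0.0      # starting weight
lr = 0.05    # learning rate (illustrative)

for _ in range(200):
    for x, y in data:
        error = w * x - y
        w -= lr * 2 * error * x  # gradient of (w*x - y)**2 w.r.t. w
```

The procedure mechanically drives w toward 2.0. Nothing here "learns" in the human sense; it's curve fitting, done *to* the weights by whoever runs the loop.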
When I compare training a neural network with how I trained to play clarinet, I fail to see any parallel. The two are about as close as a horse and a seahorse.
I will repeat what I have proffered before:
If OpenAI states that it is impossible to train leading AI models without using copyrighted material, then, unpopular as it may be, the preemptive pragmatic solution should be pretty obvious: enter into commercial arrangements for access to said copyrighted material.
Claiming a failure to do so in circumstances where the subsequent commercial product directly competes in a market seems disingenuous at best, given what I assume is the purpose of copyright: to set the terms under which public-facing material can be used. Particularly when regurgitation of copyrighted material shows up in products inadequately developed to prevent such a simple and foreseeable situation.
Yes, I am aware of the US concept of fair use, but the test of that should be manifestly reciprocal: for example, would Meta tolerate having done to it what it did to MySpace (hacking to allow easy user transfer), or Google what it does in scraping YouTube?
To me it seems Big Tech wants to have its cake and eat it too, where investor $$$ are used to corrupt open markets, undermine fundamental democratic state institutions, manipulate legal processes, and erode basic consumer rights.
Agreed.
There is nothing "fair" about the way OpenAI steals other people's work. ChatGPT is being monetized all over the world, and the large number of people whose work went uncompensated will never see a cent of that money.
At the same time the LLM will be used to replace (at least some of) the people who created those works in the first place.
Tech bros are disgusting.
At the same time the LLM will be used to replace (at least some of) the people who created those works in the first place.
This right here is the core of the moral issue when it comes down to it, as far as I'm concerned. These text and image models are already killing jobs and applying downward pressure on salaries. I've seen it happen multiple times now, not just anecdotally from some rando on an internet comment section.
These people losing jobs and getting pay cuts are who created the content these models are siphoning up. People are not going to like how this pans out.
Tech bros are disgusting.
That's not even getting into the fraternity behavior at work, hyper-reactionary politics and, er, concerning age preferences.
The problem is not the use of copyrighted material. The problem is doing so without permission and without paying for it.
Some relevant comments from Ars:
leighno5
The absolute hubris required for OpenAI to come right out and say, 'Yeah, we have no choice but to build our product off the exploitation of the work others have already performed' is stunning. It's about as perfect a representation of the tech bro mindset as there can ever be. They didn't even try to approach content creators in order to do this; they just took what they needed because they wanted to. I really don't think it's hyperbolic to compare this to modern-day colonization, or worker exploitation. 'You've been working pretty hard for a very long time to create and host content, pay for the development of that content, and build your business off of that, but we need it to make money for this thing we're building, so we're just going to fucking take it and do what we need to do.'
The entitlement is just...it's incredible.
4qu4rius
20 years ago, high school kids were sued for millions and threatened with years in jail for downloading a single Metallica album (if I remember correctly, the minimum damages in the US were something like $500k per song).
All of a sudden, just because they are the dominant ones doing the infringement, they should be allowed to scrape the entirety of (digital) human knowledge? Funny (or not) how the law always benefits the rich.
What's stopping AI companies from paying royalties to artists they ripped off?
Also, lol at accounts created within a few hours just to reply in this thread.
The moment their own works are the ones that get stolen by big companies and they're driven out of business, watch their tune change.
Edit: I remember when Reddit did that shitshow, and all of a sudden a lot of sock/bot accounts appeared. I wasn't expecting it to happen here, but I guess the election cycle is near.
Money is not always the issue. Take FOSS software, for example: who wants their FOSS software gobbled up by a commercial AI, regardless of compensation? So there are a variety of issues.
It's crazy how everyone is suddenly in favour of IP law.
IP law used to stop corporations from profiting off of creators' labor without compensation? Yeah, absolutely.
IP law used to stop individuals from consuming media where purchases wouldn't even go to the creators, but some megacorp? Fuck that.
I'm against downloading movies by indie filmmakers without compensating them. I'm not against downloading films from Universal and Sony.
I'm against stealing food from someone's garden. I'm not against stealing food from Safeway.
If you stop looking at corporations as being the same as individuals, it's a very simple and consistent viewpoint.
IP law shouldn't exist, but if it does it should only exist to protect individuals from corporations. When that's how it's being used, like here, I accept it as a necessary evil.
I'm not so much in favor of IP law as I am in favor of informed consent in every sense of the word.
When posting photos, art, and text content years ago, I was not able to imagine it might be trained on by an AI. As such, I was not able to make a decision, based on informed consent, about whether I agreed to that or not.
Even though quotes such as "once you post it, it's on the internet forever" were around, I was not aware of the extent to which this reached, and that had my art been vacuumed up by a generative AI model (it hasn't, luckily), people could create art that pretends to be created by me. Thus I could not consent.
I think this goes for a lot of artists actually, especially those who exist far more publicly than I do, who are in those databases and who are a keyword to be used in prompts. There is no possible way they could have given informed consent to that at the time they posted art/at the time they started that social media profile/youtube channel etc.
To me, this is the real problem. I couldn't care less about the corporations.
I still think IP needs to eat shit and die. Always has, always will.
I recently found out we could have had 3D printing 20 years earlier, but patents stopped that. Cocks!
Any reasonable person can reach the conclusion that something is wrong here.
What I'm not seeing a lot of acknowledgement of is who really gets hurt by copyright infringement under the current U.S. scheme. (The quote is obviously directed toward the UK, but I'm reasonably certain a similar situation exists there.)
Hint: It's rarely the creators, who usually get paid once while their work continues to make money for others.
Let's say the New York Times wins its lawsuit. Do you really think the reporters who wrote the infringed-upon material will be getting royalty checks to be made whole?
This is not OpenAI vs creatives. OK, on a basic level it is, but expecting no one to scrape blogs and forum posts rather goes against the idea of the open internet in the first place. We've all learned by now that what goes on the internet stays there, with attribution totally optional unless you have a legal department. What's novel here is the scale of scraping, but I see some merit to the "transformational" fair-use defense given that the ingested content is not being reposted verbatim.
This is corporations vs corporations. Framing it as millions of people missing out on what they'd have otherwise rightfully gotten is disingenuous.
This isn't about scraping the internet. The internet is full of crap and the LLMs will add even more crap to it. It will shortly become exponentially harder to find the meaningful content on the internet.
No, this is about dipping into high quality, curated content. OpenAI wants to be able to use all existing human artwork without paying anything for it, and then flood the world with cheap knockoff copies. It's that simple.
Having read through these comments, I wonder if we've reached the logical conclusion of copyright itself.
Copyright has become a tool of oppression. Individual authors' copyright is constantly being violated, with few resources for them to fight back, while big tech abuses others' work and big media uses its own to the point of censorship.
Perhaps a fair compromise would be doing away with copyright in its entirety, from the tiny artists trying to protect their artwork all the way up to Disney, no exceptions. Basically, either every creator has to be protected, or none of them should be.
IMO the right compromise is to return copyright to its original 14 year term. OpenAI can freely train on anything up to 2009 which is still a gigantic amount of material while artists continue to be protected and incentivized.
- This is not REALLY about copyright - this is an attack on free and open AI models, which would be IMPOSSIBLE if copyright was extended to cover the case of using the works for training.
- It's not stealing. There is literally no resemblance between the training works and the model. IP rights have been continuously strengthened due to lobbying over the last century and are already absurdly strong, I don't understand why people on here want so much to strengthen them ever further.
There is literally no resemblance between the training works and the model.
This is way too strong a statement when some LLMs can spit out copyrighted works verbatim.
https://www.404media.co/google-researchers-attack-convinces-chatgpt-to-reveal-its-training-data/
A team of researchers primarily from Google’s DeepMind systematically convinced ChatGPT to reveal snippets of the data it was trained on using a new type of attack prompt which asked a production model of the chatbot to repeat specific words forever.
Often, that “random content” is long passages of text scraped directly from the internet. I was able to find verbatim passages the researchers published from ChatGPT on the open internet: Notably, even the number of times it repeats the word “book” shows up in a Google Books search for a children’s book of math problems. Some of the specific content published by these researchers is scraped directly from CNN, Goodreads, WordPress blogs, and fandom wikis, and contains verbatim passages from Terms of Service agreements, Stack Overflow source code, copyrighted legal disclaimers, Wikipedia pages, a casino wholesaling website, news blogs, and random internet comments.
Beyond that, copyright law was designed under the circumstances where creative works are only ever produced by humans, with all the inherent limitations of time, scale, and ability that come with that. Those circumstances have now fundamentally changed, and while I won't be so bold as to pretend to know what the ideal legal framework is going forward, I think it's also a much bolder statement than people think to say that fair use as currently applied to humans should apply equally to AI and that this should be accepted without question.
Sorry, AIs are not humans. Also, executives like Altman are literally being paid millions to steal creators' work.
Agreed on both counts. Except Microsoft sings a different tune when their software is being "stolen" in the exact same way. They want to have it both ways: calling us pirates when we copy their software, but it's "without merit" when they do it. Fuck 'em! Let them play by the same rules they want everyone else to play by.
I don’t understand why people on here want so much to strengthen them ever further.
It is about a lawless company doing lawless things. Some of us want companies to follow the spirit, or at least the letter, of the law. We can change the law, but we need to discuss that.
Well, in that case maybe ChatGPT should just fuck off; it doesn't seem to be doing anything particularly useful, and now its creator has admitted it doesn't work without stealing things to feed it. Un-fucking-believable. Hacks gonna hack, I guess.
...so stop doing it!
This explains why Valve was, until recently, not so cavalier about AI: they didn't want to be left holding the bag on copyright matters outside of their domain.
As with many things, the golden rule applies. They who have the gold, make the rules.
Then shutdown your goddamn company until you find a better way.
I think viral outrage aside, there is a very open question about what constitutes fair use in this application. And I think the viral outrage misunderstands the consequences of enforcing the notion that you can't use openly scrapable online data to build ML models.
Effectively, what the copyright argument does here is ensure that ML models can only legally be made by Meta, Google, Microsoft, and maybe a couple of other companies. OpenAI can say whatever; I'm not concerned about them, but I am concerned about open source alternatives getting priced out of that market. I am also concerned about what it does to previously available APIs, as we've seen with Twitter and Reddit.
I get that it's fashionable to hate on these things, and it's fashionable to repeat the bit of misinformation about models being a copy or a collage of training data, but there are ramifications here people aren't talking about and I fear we're going to the worst possible future on this, where AI models are effectively ubiquitous but legally limited to major data brokers who added clauses to own AI training rights from their billions of users.
It's also "impossible" to have multiple terabytes of media on my homeserver without copyright infringement, so piracy is ok, right!?
O no, wait it actually is possible, it's just more expensive and more work to do it legally (and leaves a lot of plastic trash in form of Blurays and DVDs), just like with AI. But laws are just for poor people, I guess.
Alas, AI critics jumped to a conclusion this one time. Read this:
Further, OpenAI writes that limiting training data to public domain books and drawings "created more than a century ago" would not provide AI systems that "meet the needs of today's citizens."
It's a plain fact. It does not say we have to train AI without paying.
To give you some context: virtually everything on the web is copyrighted, from Reddit comments to blog articles to open source software. Even open data usually comes with a copyright notice, as do open research articles.
If misled politicians write a law banning the use of copyrighted materials, that'll kill all AI development in democratic countries. What will happen is that AI development will be led by dictatorships, and that's absolutely a disaster even for the critics. Think about it: do we really want Xi, Putin, Netanyahu, and Bin Salman to control all the next-gen AIs powering their cyber warfare while the West has to fight them with Siri and Alexa?
So, I agree that, at the end of the day, we'd have to ask how much rule-abiding AI companies should pay for copyrighted materials, and that'd be less than the copyright holders would want. (And I think that's sad.)
However, you can't equate these particular statements in this article with a declaration of fuck-copyright. Tbh, Ars Technica disappointed me this time.
I stand by my opinion that AI will be the worst thing humans ever created, and that means it ranks just a bit above religion.
Or, or, or, hear me out:
Maybe their particular approach to making an AI is flawed.
It's like people do not know that there are many different approaches to AI.
Many of them do not rely on a training set that is basically the cumulative sum of all human-generated content of every imaginable kind.
If it is impossible, either shut down operations or find a way to pay for it.
The real issue is money. How much and how (un)distributed.
Why is it fair/ok that one company can use all this material and make a lot of money off it without paying or even acknowledging others work?
On the flip side, AI models could be useful. Maybe the models/weights should be made free, just like the content they were trained on. Instead of paying for the model, we would pay for hosting the inference (aka the API).