this post was submitted on 13 Jul 2023
120 points (96.2% liked)
Technology
59575 readers
3078 users here now
you are viewing a single comment's thread
Scraping to build a search-engine index necessarily involves copying, and presenting search results involves redistributing portions of the copyrighted work. Search engines have used various forms of "artificial intelligence" for decades, and their models have survived countless legal challenges.
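To make the mechanism concrete, here is a minimal sketch of an inverted index in Python. All page URLs and text are made up for illustration; the point is structural: the indexer keeps a full local copy of each page, and serving a result hands back a verbatim slice of that copy.

```python
import re
from collections import defaultdict

# Hypothetical scraped pages; URLs and text are illustrative only.
PAGES = {
    "https://example.com/a": "The quick brown fox jumps over the lazy dog.",
    "https://example.com/b": "A lazy afternoon with a quick cup of coffee.",
}

def build_index(pages):
    """Indexing requires copying: the raw text of every page is stored locally."""
    index = defaultdict(set)
    copies = {}  # full local copies of the scraped pages
    for url, text in pages.items():
        copies[url] = text
        for word in re.findall(r"\w+", text.lower()):
            index[word].add(url)
    return index, copies

def search(term, index, copies, snippet_len=30):
    """Serving a result redistributes a verbatim portion of the copied text."""
    results = []
    for url in index.get(term.lower(), ()):
        text = copies[url]
        pos = text.lower().find(term.lower())
        results.append((url, text[max(0, pos - 10):pos + snippet_len]))
    return results

index, copies = build_index(PAGES)
print(search("quick", index, copies))
```

Every snippet in the output is a literal substring of the original page, which is exactly the redistribution the comment is describing.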
Training an LLM would be considered an act of consumption, not an act of copying. It is less infringing than indexing, in that an LLM is designed not to regurgitate copies of existing work but to produce novel content.
"Your answer" is wrong.
Google is a member of the public. Microsoft is a member of the public. Meta is a member of the public. Your pervy neighbor is a member of the public. I am a member of the public, as are all your social media friends. If you want to prohibit a member of the public from consuming your content, you cannot post it in a publicly accessible forum.