"We need to closec the api in order to protect our users from being used for ai"
Technology
This is a most excellent place for technology news and articles.
Our Rules
- Follow the lemmy.world rules.
- Only tech-related content.
- Be excellent to each other!
- Mod approved content bots can post up to 10 articles per day.
- Threads asking for personal tech support may be deleted.
- Politics threads may be removed.
- No memes allowed as posts, OK to post as comments.
- Only approved bots from the list below; to ask if your bot can be added, please contact us.
- Check for duplicates before posting; duplicates may be removed.
Approved Bots
I mean, they never claimed it was to protect users. It was to protect their users' data from being used without paying Reddit. They didn't like that AI companies were using Reddit content as a free source of training data; they never gave a shit about their users' privacy.
This is also slightly off. It was primarily to eliminate third party apps from the existing landscape. Reddit wants money from users in one of two ways:
- Use their app and pay with your data via invasive tracking and advertising.
- Pay for a third party app that pays them for API access.
Due to the extortionate pricing, the second option was only ever hypothetical. In reality there was no sustainable model for this for any third party app, even as a non-profit.
The case around AI does exist, but it was smoke and mirrors for Reddit pulling the same nonsense that Twitter did once they realized they might get away with it, regardless of the short term damage it would do to their public image.
I think the 3rd party apps were a nice bonus, but considering the timing I'm pretty sure the AI boom was the main reason.
It was more like "We need to close the API in order to protect our profits from the use of your data"
That’s how little they got‽ Holy shit. That’s the steal of the fucking century for all that content. Reddit clearly puts the same stock in its negotiators as it does in its 3rd party ecosystem. Anyone who values them at more than maybe 2x this price for their IPO is a fucking idiot. Forget Trump’s Art of the Deal. spez needs to write a book.
To be fair, most of the content is written by AIs, so it's AI training AI.
Getting access to the massive backlog of user data over the last 15 years for a mere 60 million. I'm glad reddit shot themselves in the foot. I'd go delete my user data from reddit, but I'm sure they'll be crawling the backups as well.
Any AI company that buys more than a year is dumb.
Unless they're leasing the information every year, which would essentially make their AI dependent on the data, but that data is probably the best source to use on the internet. Also, without continuously using the most current comments and posts, the AI model won't be able to give any info about current events and such.
I appreciate your use of the interrobang
I have a replacement action set up to change a ? and a ! to ‽. I use it at least once a week!
Great‽ ;)
Considering that the data has almost certainly been scraped already, that might have been the best that they could get for it. Or else the companies might just get it from their archives/training sets for free, like they did before.
Putting aside pretty much everything else about this announcement: That’s… shockingly cheap.
Probably because it was harvested long before they locked down the API. I suspect it's not a purchase but a way to legitimize the datasets already in the works, since Reddit said they are now trading them. And our favorite CEO struggles to turn a profit, so he hardly had any leverage to ask for more.
It's mostly data that's publicly available. It's more of a gamble, I think; it's only worth anything if the government decides you need to pay for the data you use in training.
$1M for every IQ point of the average Reddit user
lol dude most of us were over there for years before jumping ship and coming here
Wait
Fuck
Shhh, let's just pretend the average IQ over there dropped when we left.
Remember kids, don't delete your account. Use scripts to replace all of your posts and comments with nonsense. If there is an option in your script to feed it a "dictionary", I highly suggest using books from the public domain like "Lady Chatterley's Lover" by D. H. Lawrence. Replace all images and video links with Steamboat Willie.
They sell all your edits as well. This does make the data harder to scrape, which inadvertently raises the question of how much the data they sell is actually worth.
Yeah, that's the idea. Originally I went the "random characters then delete" route but realized that if I used randomized book excerpts from the public domain, the AI, or even a human, would have a very hard time figuring out what was real and what was trash. Ultimately, even if I can't modify them all, I can modify enough to make it easier for the buyer to just filter my username out in order to keep the results clean.
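To make the "dictionary" idea concrete, here's a rough sketch of the replacement-text part. It's purely illustrative: the corpus file name and excerpt lengths are made up, and this isn't any particular wiping tool.

```python
import random
from pathlib import Path

# Hypothetical corpus: a handful of public-domain novels concatenated into one
# text file, one sentence or paragraph per line (e.g. from Project Gutenberg).
CORPUS = Path("public_domain_novels.txt")

def random_excerpt(min_lines: int = 2, max_lines: int = 5) -> str:
    """Pull a random run of consecutive lines to use as a replacement comment."""
    lines = [ln.strip() for ln in CORPUS.read_text(encoding="utf-8").splitlines() if ln.strip()]
    count = random.randint(min_lines, max_lines)
    start = random.randrange(0, max(1, len(lines) - count))
    return " ".join(lines[start:start + count])

if __name__ == "__main__":
    # Each call returns a different chunk of plausible plain-English prose.
    print(random_excerpt())
```

Because the output is grammatical English rather than keyboard mashing, it's much harder to filter out automatically than random characters.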
I do wonder how much backup data a site like Reddit keeps. I suspect their backups are poor, as the main focus is staying live and moving forward.
I'd imagine the ability to revert a few days, maybe weeks, but not much more than that? Would they see the value in keeping copies of every edit and every deleted post? Would someone building the website even bother to build that functionality?
Also, for reddit, so much of their content is based around web links, which give the discussions context and meaning. I bet there are an awful lot of dead links on reddit, and their move to host their own pictures and videos was probably too late. Big hosting sites have disappeared over time, deleted content, or locked down content from AI farming.
The more I think about it, they were lucky to get $60m/year.
I did pretty much this and everything is back to the way it was.
I did it and it is still nuked. It did take a number of runs though.
Generally, what's the best/most efficient way to make LLMs go off the rails? I mean without just typing lots of gibberish and making it too obvious. As an example: I've seen people formatting their prompts with Java code for like 2 lines, and replies instantly went nuts.
I use a few dozen novels in a single text file and randomize which lines the script pulls. It then replaces the text three times with a random pull. What you end up with are four responses in plain English. Which is the real one? You could filter out responses edited after "the great exodus", but I have been doing this to my comments a few times per year during my twelve years on reddit.
The truth is that even if I don't get them all, I get enough that it makes it far easier for the group that bought the data to just filter my username out rather than figure out what's junk and what isn't.
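For anyone curious what such a script might look like, here's a minimal sketch using PRAW as an example Reddit API client. The credentials and corpus file are placeholders, and the actual tools people used (and their rate-limit handling) differ.

```python
import random

import praw  # third-party Reddit API wrapper, used here purely as an example

# Placeholder credentials; a registered "script" type app on Reddit provides these.
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",
    client_secret="YOUR_CLIENT_SECRET",
    username="YOUR_USERNAME",
    password="YOUR_PASSWORD",
    user_agent="comment-overwriter-sketch/0.1",
)

# Public-domain corpus, one sentence per line (same idea as the sketch above).
with open("public_domain_novels.txt", encoding="utf-8") as f:
    corpus = [line.strip() for line in f if line.strip()]

# Overwrite every comment on the account three times, so the live text and the
# edit history are all plausible plain-English excerpts rather than gibberish.
for comment in reddit.user.me().comments.new(limit=None):
    for _ in range(3):
        comment.edit(body=" ".join(random.sample(corpus, 4)))
```

Note that the API only surfaces a limited slice of your history, which is part of why you can't modify everything and filtering the whole username out becomes the buyer's easiest option.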
Won't be long before reddit is selling 90% AI generated content passing for human generated content!
Feels like they're already there.
Those AI companies should love the fediverse then. I mean, all the data here is basically open for anyone to grab. Heck, they don't even need to grab the data, just run their own instance and the federation data will flood in on its own.
Oh, don’t give them ideas please!
This was my thought exactly. Shouldn’t there be a “no_ai.txt” on the servers somehow?
That would be about as effective as robots.txt, unfortunately.
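For context, the robots.txt approach being compared against looks roughly like this (GPTBot and CCBot are real crawler tokens for OpenAI and Common Crawl, but honouring the file is entirely voluntary on the crawler's side):

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```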
Does this include art OC posted there being used to train art bots? If I were posting OC art I’d just delete that shit right away, not that it’ll help, I suppose.
Waaaay too late for that
And now those artists can’t sue like others have done. Really hope the artists realize this and jump ship.
I can see it now, that AI model is going to be really, really fucking angry. lol
Honestly, I can see the appeal of a model going "fuck spez" unprompted once in a while.
Shower thought: what if a large number of people made lots of posts and comments on reddit using only AI generated content?
Considering the spam problem, in a way, it sort of is already happening.
It's possible that part of the API changes might have been to curb that kind of behaviour before people decided to go and do just that too, or to stop them from using bots to wipe their profiles out.
Honestly, you just need to convince people to go through their comments and break any chains with nonsense. I bet that they are training conversational abilities (I mean, what other good is the data set? It's not like redditors are experts, or, when they are, that the experts get upvoted at all).
The annoying part is that the only use of "AI" I have so far is "translating reddit post titles into understandable English". Once they train their "AI" on whatever is there, I probably won't be able to understand the "translation" anymore... Sucks. 😬
This is going to backfire when the content they are selling is used by AI to make bots to make the content that gets sold to make the AI to make bots to make the content.
This is why it's so important that we don't legislate against AI and make it illegal to use scraped data. All the data is already owned by someone; putting up walls only screws us out of the open source scene.
And we should legislate content ownership altogether. The idea that Reddit spent more than a decade growing its community just so that it could use our content as its own property is a huge issue. How do we safely and fairly communicate and express our ideas in a society where the platforms that enable this automatically claim ownership of our ideas? Social media are middlemen with outsized influence.