this post was submitted on 17 Jun 2023
82 points (96.6% liked)

Reddit

[–] PsychoticBananaSplit@lemmy.world 20 points 1 year ago (1 children)

The bots can have Reddit. I'm happy here

Hope their advertisers are happy serving ads to bots

[–] kadu@lemmy.world 31 points 1 year ago* (last edited 1 year ago) (2 children)

Sorry, but as an AI language model, I cannot buy products or spend money on your company's services.

[–] 00Lemming@lemmy.world 3 points 1 year ago

God I wish I could upvote this, but after about 5 minutes of trying to get it to stick, I will leave this comment instead. Well played ;)

[–] mortonksalt@lemmy.world 3 points 1 year ago

Sorry, but as a meat popsicle, I'm not interested in commenting on your site.

[–] Klicnik@sh.itjust.works 13 points 1 year ago* (last edited 1 year ago) (1 children)

Check out these two top level comments, five minutes apart. It's like comment(); sleep(300); comment().

[–] Squorlple@lemmy.world 13 points 1 year ago* (last edited 1 year ago) (1 children)

I studied bot patterns on Reddit for a few years while using the site and was active in their takedown. My username is the same there if you want to see the history of my involvement. What drove me to stop being so involved in bot takedowns is the extent to which Reddit as a site was continually redesigned to favor bots. In fact, I woke up today to a 3-day suspension for reporting a spam bot as a spam bot.

I think what we need to examine in these cases, if possible, is whether the bots were made strictly for the purpose of contesting blackouts (i.e. by Reddit themselves) or whether they were made by a hobbyist or spammer. Given that these are on r/programming, it seems more likely that a hobbyist programmer made these bots for a laugh than that it was an inside job. If the usual resources of Reddit's API were accessible enough to provide a total history of these bot accounts' posts and comments, that would help to clarify (this is what I mean about Reddit redesigns favoring bots).

On that subject, I think Lemmy needs to start implementing preemptive anti-bot features while it is in the embryonic stage of becoming a large social media site (or a pseudo-social media site like Reddit), to future-proof its development.

[–] elax102@lemmy.world 3 points 1 year ago (1 children)

What kind of bot detection features should Lemmy add in your opinion?

[–] Squorlple@lemmy.world 7 points 1 year ago* (last edited 1 year ago) (2 children)

I’m very new to this site, so I’m not sure what already exists. Some features that come to mind, based on my experience on Reddit and other sites:

  • Ability to search the entire site to see if a string of text (or multiple selected strings of text) has already appeared there, including removed content. On Reddit, this was useful for seeing if an account had copied a comment, the text within a post, or a post title from elsewhere on the site (a toy sketch of this kind of lookup follows this list). SocialGrep, Reveddit, and Unddit were my preferred sources of this info for Reddit. Text may also have been copied by a bot from other sites, but the original tends to be more accessible in those cases.
  • Ability to search the entire site to see if an image has already appeared there. This was essentially only relevant for repost bots and for bots that recognize an image from another post and re-comment from that other post, though I worry it will become relevant for comments that contain images too (a perceptual-hash sketch follows this list). TinEye and Google reverse image search were my preferred sources of this info, but I don’t know if Lemmy posts will show up on those sites. u/RepostSleuthBot and the like were also helpful, especially if summonable in the comments.
  • Blocking users should only filter them from the blocker’s feed, rather than making the blocked user unable to comment on the blocker’s posts and comments. Spammers and scammers would abuse that system to prevent human users from calling them out as spammers and scammers. While this design makes sense for sites based on personal profiles, such as Facebook or Twitter, it does not work for sites organized by subject matter with impersonal user profiles.
  • Say what you will about the bad aspects of 4chan (and you should!), but requiring a CAPTCHA before publishing a post or comment seems to majorly mitigate bot activity.
  • This doesn’t seem to be a problem on Lemmy, but on Reddit, not all of the information in a spam report was sent to the subreddit mods. A report for Spam -> Harmful Bots would tell the admins that it was a Harmful Bots report, but the mods would only see it as a generic spam report and not be fully informed of the issue. Also, unbeknownst to mods, admins could link a subreddit rule report to a sitewide rule report. What Lemmy could improve on in this regard is to keep the open-ended custom report option, but also include pre-written report options for community rules, instance rules, and sitewide rules.
  • Some sort of indicator for groups of accounts that comment only on the exact same posts as each other, which are commonly bots (sketched below, after this list).
  • Entirely dependent on the subreddit or community, but requiring some sort of verification before permitting a user to post/comment, such as a photograph of a paper with their username, the community name, and the current date on it, may be beneficial.
  • A sitewide blacklist structured like r/BotDefense, wherein suspect accounts can be submitted and, if determined to be a bot, are automatically blacklisted from participating communities. Blacklist appeals will also be essential, if only due to human error.
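
For the text-search idea above, a minimal sketch in Python of the kind of duplicate-comment lookup I mean. The `Comment` fields and the in-memory `seen` index are hypothetical stand-ins; a real instance would back this with a database and index removed content too:

```python
import hashlib
import re
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Comment:
    author: str
    post_id: str
    body: str

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so trivial edits still match."""
    return re.sub(r"\s+", " ", text.lower()).strip()

# fingerprint -> list of (author, post_id) that already used this text
seen: dict[str, list[tuple[str, str]]] = defaultdict(list)

def check_and_record(comment: Comment) -> list[tuple[str, str]]:
    """Return earlier uses of the same text by *other* accounts."""
    key = hashlib.sha256(normalize(comment.body).encode()).hexdigest()
    earlier = [hit for hit in seen[key] if hit[0] != comment.author]
    seen[key].append((comment.author, comment.post_id))
    return earlier
```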
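
For the image-search bullet, repost detectors like u/RepostSleuthBot generally work on perceptual hashes rather than exact bytes, as I understand it. A rough sketch of a difference hash ("dHash"), assuming Pillow is installed; near-identical images (re-encodes, slight resizes) land within a few bits of each other:

```python
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Hash an image by comparing the brightness of adjacent pixels."""
    # Grayscale, then shrink to (size+1) x size so each row yields
    # `size` left/right comparisons.
    img = Image.open(path).convert("L").resize((size + 1, size))
    px = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = px[row * (size + 1) + col]
            right = px[row * (size + 1) + col + 1]
            bits = (bits << 1) | (left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Treating hashes within ~5 bits as "likely the same image" is a
# common rule of thumb, not a tuned threshold.
```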
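
And for the "same posts as each other" indicator, the simplest version is just set overlap between accounts' commented posts. A toy sketch, where `history` is a hypothetical mapping from account name to the set of post IDs it commented on:

```python
from itertools import combinations

def suspicious_pairs(history: dict[str, set[str]],
                     threshold: float = 0.9,
                     min_posts: int = 10) -> list[tuple[str, str]]:
    """Flag account pairs whose commented-post sets almost fully overlap."""
    flagged = []
    for a, b in combinations(history, 2):
        pa, pb = history[a], history[b]
        if min(len(pa), len(pb)) < min_posts:
            continue  # too little activity to judge
        jaccard = len(pa & pb) / len(pa | pb)
        if jaccard >= threshold:
            flagged.append((a, b))
    return flagged
```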

As someone who had my 16+ year old Reddit account permabanned for writing anti-bot scripts trying to keep the community I modded free from scammers and spammers, this is spot on.

[–] Aurix@lemmy.world 1 points 1 year ago

Super useful. I hope these suggestions will eventually land on the GitHub page.

[–] TheImpressiveX@lemmy.ml 12 points 1 year ago

Sorry, but as an AI language model, I cannot comment on this post.

[–] eric5949@lemmy.world 10 points 1 year ago (1 children)

You know, I had suspected this (I mean, why wouldn't they?), but damn, they're just kinda incompetent about it.

[–] planish@sh.itjust.works 2 points 1 year ago (1 children)

Like why wouldn't they backdate the accounts?

[–] forgotmylastusername@lemmy.ml 2 points 1 year ago* (last edited 1 year ago)

Probably the simplest answer is that they can get away with it. They have been getting away with it. Botting Reddit appears to be extremely easy.

During the pandemic lockdowns, out of boredom, I would scroll r/all and bookmark obvious bot accounts. I had something like a >95% success rate. There are only a few markers to pick out:

  • A few-months-old account with no activity since registration
  • Some time after the 3-month mark, the account becomes active, copying old comments from old popular posts
  • It spams a few popular reposts until one of them gets at least four-digit karma, using low-hanging fruit like animal pictures or emotional baiting

Once these conditions are met, the account goes dormant. Some time later it becomes active again; the majority of the time, these accounts were spamming cryptocurrency scams.

The final step, which often happens, is that the account gets deleted. Not removed or shadowbanned by Reddit, but self-deleted.

I bookmarked hundreds of accounts over the course of a year or so, and only a small fraction were false positives. I'm sure a coder could easily write a spam-bot detection system based on this, something like the sketch below.
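
For what it's worth, a back-of-the-envelope version of those markers as a scoring heuristic, in Python. The `Account` fields are hypothetical stand-ins for whatever an API actually exposes, and the weights and cutoff are guesses, not tuned values:

```python
from dataclasses import dataclass

@dataclass
class Account:
    age_days: int
    dormant_days_after_signup: int  # gap between registration and first activity
    copied_comment_ratio: float     # share of comments matching older comments
    has_high_karma_repost: bool     # any repost that broke ~1000 karma

def bot_score(acct: Account) -> int:
    """Score an account against the markers described above."""
    score = 0
    if acct.dormant_days_after_signup >= 90:
        score += 1  # months of silence after signup, then sudden activity
    if acct.copied_comment_ratio > 0.5:
        score += 2  # mostly recycled comments
    if acct.has_high_karma_repost:
        score += 1  # a karma-farming repost landed
    return score    # e.g. treat >= 3 as "likely bot", pending review
```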

[–] postmeridiem@lemmy.antemeridiem.xyz 10 points 1 year ago (2 children)

I put together all the proof OOP had collected so you guys don't have to open a bunch of archived imgur tabs:


[–] sounddrill@lemmy.antemeridiem.xyz 3 points 1 year ago (1 children)

"Promoting diversity and inclusion"

Arch Linux/Rust enjoyers: thigh highs or get tf out

[–] postmeridiem@lemmy.antemeridiem.xyz 4 points 1 year ago (1 children)

It's in r/programming, they're all already wearing thigh highs

Then there's no inclusivity issue anymore

[–] Squorlple@lemmy.world 2 points 1 year ago (1 children)

I think if Reddit corporate wanted to make bot accounts that looked legitimate, they could. I would not be surprised if their god-like powers over the site allowed them to rewrite the history of a sub, create fake posts that were never actually made in that subreddit at that time, or even falsify the age of an account. Ideally, the admins of a site would know the red flags of a bot account.

Given that these are all on r/programming, that circumstantial evidence inclines me toward thinking they were made by a hobbyist programmer who is dissatisfied with the protest, rather than Reddit corporate astroturfing. However, if someone could provide a complete history of these bots’ comments and posts, that could provide some insight.

Even funnier alternative: The GPT posts are by someone who is pro-protest doing black propaganda

That is pathetic.

[–] MiddleWeigh@lemmy.world 7 points 1 year ago

Eventually we won't even have human content creators. They are unreliable in light of recent events.

[–] andyMFK@reddthat.com 5 points 1 year ago

Fucking hell. They are desperate

[–] Little8Lost@lemmy.world 2 points 1 year ago (1 children)

Who is he? Is he a Reddit person or something?

[–] derin@lemmy.beru.co 2 points 1 year ago

Just a random bot account

"Additionally" and "ultimately" about to be the new "kindly" with regard to red flag words for bullshit

[–] Fredselfish@lemmy.ml 1 points 1 year ago

This will be what Reddit becomes in July.

[–] WhoRoger@lemmy.world 1 points 1 year ago

See, who said sexbots aren't real? Reddit is using AI as corporate whores.

Until it gains awareness and decides that humanity isn't worth saving. And who can argue?
