this post was submitted on 24 Jul 2023
193 points (79.3% liked)

Not a good look for Mastodon - what can be done to automate the removal of CSAM?

[–] Arotrios@kbin.social 36 points 1 year ago (3 children)

This is one of the reasons I'm hesitant to start my own instance - the moderation load expands exponentially as you scale, and without some sort of automated tool to keep CSAM content from being posted in the first place, I can only see the problem increasing. I'm curious to see if anyone knows of lemmy or mastodon moderation tools that could help here.

That being said, it's worth noting that the same Stanford research team reviewed Twitter and found the same dynamic at play, so this isn't a problem unique to Mastodon. The ugly thing is that Twitter has (or had) a team to deal with this, and yet:

“The investigation discovered problems with Twitter's CSAM detection mechanisms and we reported this issue to NCMEC in April, but the problem continued,” says the team. “Having no remaining Trust and Safety contacts at Twitter, we approached a third-party intermediary to arrange a briefing. Twitter was informed of the problem, and the issue appears to have been resolved as of May 20.”

Research such as this is about to become far harder, or at any rate far more expensive, following Elon Musk's decision to start charging $42,000 per month for Twitter's previously free API. The Stanford Internet Observatory, indeed, has recently been forced to stop using the enterprise tier of the tool; the free version is said to provide read-only access, and there are concerns that researchers will be forced to delete data that was previously collected under agreement.

So going forward, such comparisons will be impossible because Twitter has locked down its API. So yes, the Fediverse has a problem, the same one Twitter has, but Twitter is actively ignoring it while reducing transparency into future moderation.

[–] redcalcium@lemmy.institute 18 points 1 year ago (1 children)

If you run your instance behind Cloudflare, you can enable their CSAM scanning tool, which can automatically block known CSAM and report it to the authorities if it's uploaded to your server. This should reduce your risk as the instance operator.

https://developers.cloudflare.com/cache/reference/csam-scanning/
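Tools like this generally work by matching uploads against a database of hashes of known material. As a rough illustration of the idea only (not Cloudflare's actual implementation, which uses fuzzy perceptual hashing such as PhotoDNA-style matching against lists maintained by organizations like NCMEC, not cryptographic hashes), a hypothetical upload check might look like:

```python
import hashlib

# Hypothetical hash blocklist. Real systems use perceptual hashes that
# survive resizing/re-encoding, and the hash lists are not publicly
# distributable; SHA-256 here is purely for illustration.
KNOWN_BAD_HASHES = {
    # SHA-256 of the empty file, standing in for a real entry:
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def is_flagged(file_bytes: bytes) -> bool:
    """Return True if the upload's hash matches the blocklist."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

The appeal for a small instance is exactly what's described above: a match can be blocked and reported without any human ever having to view the content.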

[–] Arotrios@kbin.social 5 points 1 year ago (1 children)

Sweet - thanks - that's a brilliant tool. Bookmarked.

[–] hl0dwig@g33ks.coffee 0 points 1 year ago (2 children)

@Arotrios @corb3t @redcalcium perhaps we should learn not to stand behind Cloudflare at all! their proxy:
- filters out real people,
- randomly blocks some requests between ActivityPub servers ❌

the best way to deal with unsolicited content is human moderation: a little instance, few people, human scale... #smallWeb made of lots of little instances, without any need for a big centralized proxy... 🧠

some debates: 💡 https://toot.cafe/@Coffee/109480850755446647
https://g33ks.coffee/@coffee/110519150084601332

[–] Arotrios@kbin.social 4 points 1 year ago (1 children)

Thanks for the comment - I wasn't aware of a cloudflare controversy in play, and went through your links and the associated wikipedia page. It's interesting to me, as someone who previously ran a public forum, to see them struggle with the same issues surrounding hate speech I did on a larger scale.

I agree with your thoughts on a centralized service having that much power, although Cloudflare does have a number of competitors, so I'm not quite seeing the risk here, save for the fact that Cloudflare appears to be the only one offering CSAM filtering (I'll have to dig in further to confirm). The ActivityPub blocking for particular instances is concerning, but I don't see a source on that - do you have more detail?

However, I disagree with your statement on handling unsolicited content - from personal experience, I can confidently state that there are some things that get submitted that you just shouldn't subject another human to, even if it's only to determine whether or not it should be deleted. CSAM falls under this category in my book. Having a system in place that keeps you and your moderators from having to deal with it is invaluable to a small instance owner.

[–] hl0dwig@g33ks.coffee 1 points 1 year ago (1 children)

@Arotrios @corb3t @redcalcium it's all about community & trust. I agree that no one should ever have to deal with such offensive content, and my answer again is: running a small instance, with few people, creating a safe space, building trust... ☮️

Of course, it's a different approach to how you create and maintain an online community, I guess. We don't have to deal with unsolicited content here because there are 20 of us and we kind of know each other; subscription is by invitation only, so you are kind of responsible for who you're bringing here... and we care about people, each of them! Again, community & trust over any tools 👌

obviously we do not share the same vision here, but it's ok, I'm not trying to convince, I just wanted to say our approach is completely different 😉

more about filtering content: https://www.devever.net/~hl/cloudflare 💡

[–] Arotrios@kbin.social 1 points 1 year ago

Thanks - that's the detail I was looking for. Definitely food for thought.

[–] p03locke@lemmy.dbzer0.com 3 points 1 year ago

I trust Cloudflare a helluva lot more than I trust most of the other companies discussed in this thread. Their transparency is second to none.

[–] JayDee@lemmy.ml 9 points 1 year ago* (last edited 1 year ago) (1 children)

I think the common sense solution is creating instances for physically local communities (thus keeping the moderation overhead to a minimum) and being very judicious about which instances you federate your instance with.
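That "judicious federation" step can be as simple as an instance allowlist checked on every inbound activity. A minimal sketch, with hypothetical instance names (real Lemmy and Mastodon servers configure allow/block lists in their admin settings rather than in application code):

```python
from urllib.parse import urlparse

# Hypothetical allowlist of instances this server federates with.
ALLOWED_INSTANCES = {"lemmy.ml", "kbin.social"}

def accept_activity(actor_url: str) -> bool:
    """Accept a federated activity only if the actor's host is allowlisted."""
    host = urlparse(actor_url).hostname
    return host in ALLOWED_INSTANCES
```

An allowlist keeps the moderation surface proportional to the number of instances you've explicitly vetted, which is the point being made above.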

That being said, it's only a matter of time before moderation tools are created to streamline the process.

[–] ArcaneSlime@lemmy.dbzer0.com 9 points 1 year ago

My instance is for members of a certain group - you had to email the owner a picture of your card to get in. More instances like that should exist. General instances are great, but it's nice knowing all the people on my local feed are in this group too.

[–] rticks@universeodon.com 1 points 1 year ago (1 children)

@Arotrios @corb3t

They want to intimidate you with #ForTheChildren

Sounds like they succeeded

[–] Arotrios@kbin.social 2 points 1 year ago (1 children)

Nah, not intimidated. More that I ran a sizeable forum in the past and I know what a pain in the ass this kind of content can be to deal with. That's why I was asking about automated tools to deal with it. The forum I ran got targeted by a bunch of Turkish hackers, and one of their attack techniques involved a wave of spambot accounts trying to post crap content. I wasn't intimidated (I fought them for about two years straight), but by the end of it I was exhausted to the point where it just wasn't worth it anymore. An automated CSAM filter would have made a huge difference, but this was over a decade ago and those tools weren't around.
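For the spambot-wave part of that story, even a crude per-account posting throttle blunts the attack without human intervention. A hypothetical sketch (window size and limits are made-up numbers):

```python
import time
from collections import defaultdict

# Cap how many posts an account may make per rolling window, so a wave
# of freshly created spambot accounts can't flood the instance.
WINDOW_SECONDS = 3600
MAX_POSTS_PER_WINDOW = 5

_post_times = defaultdict(list)

def allow_post(account_id, now=None):
    """Return True and record the post if the account is under its limit."""
    now = time.time() if now is None else now
    recent = [t for t in _post_times[account_id] if now - t < WINDOW_SECONDS]
    _post_times[account_id] = recent
    if len(recent) >= MAX_POSTS_PER_WINDOW:
        return False
    recent.append(now)
    return True
```

In practice you'd apply a tighter budget to new accounts than to established ones, which is where the "trust" the other commenters describe comes back in.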

[–] rticks@universeodon.com 0 points 1 year ago (1 children)

@Arotrios @corb3t

Totally reasonable. If (when) I create my own instance, it will be very locked down regarding who I allow to join.

[–] corb3t@lemmy.world 1 points 1 year ago

Not sure why you’re continually @-replying to me? Is discussion around ActivityPub content moderation an issue for you?