this post was submitted on 14 Jan 2025

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask in a DM before posting product reviews or ads; otherwise, all such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived version URLs as sources, NOT screenshots. Help the blind users.

5: personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: no advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: crypto related posts, unless essential, are disallowed

founded 5 years ago

Since Meta announced it would stop moderating posts, much of the mainstream discussion around social media has centered on whether a platform bears responsibility for the content posted on its service. I think that is a fair discussion, though I favor the side of less moderation in almost every instance.

But as I think about it, the problem is not moderation at all: we had very little moderation in the early days of the internet and social media, and yet people didn't believe the nonsense they saw online, unlike nowadays, where even official news platforms have reported on outright bullshit made up on social media. To me the problem is the goddamn algorithm that pushes people into bubbles that reinforce their views, correct or incorrect; and I think anyone with two brain cells and an iota of understanding of how engagement algorithms work can see this. So why is the discussion about moderation and not about banning algorithms?

[–] schnurrito@discuss.tchncs.de 2 points 22 hours ago (2 children)

I participated in a discussion similar to this recently here on the German-language community: https://discuss.tchncs.de/post/28281369/15510510

Topics that were raised there by various people, some by me (read the full discussion if you can read German):

  • an "algorithm" is really just a way of manipulating data; it's meaningless to say you are banning "algorithms" when all software is based on them, and even reverse-chronological sorting of things you're subscribed to is an algorithm
  • these algorithms are mainly intended to keep people on the platform for as long as possible (though I raised the point that I actually found old web forums more engaging than today's Facebook)
  • how do you define "an algorithm" legally? I suggested a definition based on transparency and objectivity; others objected that misinformation could then be easily manipulated to show at the top, and that if you merely require "transparency", platforms will just disclose how their algorithms work instead of abolishing them
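The first bullet can be made concrete with a toy sketch (the `Post` fields and the engagement score are hypothetical, not any platform's actual ranking model): both feeds below are "algorithms"; they differ only in the key they sort by.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int               # seconds since epoch
    predicted_engagement: float  # hypothetical model score in [0, 1]

# "No algorithm" is not really an option: even a plain subscribed
# feed sorted newest-first is an algorithm.
def chronological_feed(posts):
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

# The kind of ranking the thread objects to: order posts by a score
# predicting how long each one will keep the user on the platform.
def engagement_feed(posts):
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

posts = [
    Post("alice", 100, 0.10),
    Post("bob",   200, 0.90),
    Post("carol", 300, 0.40),
]

print([p.author for p in chronological_feed(posts)])  # ['carol', 'bob', 'alice']
print([p.author for p in engagement_feed(posts)])     # ['bob', 'carol', 'alice']
```

Any legal definition of a banned "algorithm" would have to separate these two cases, which is exactly why drafting one is hard.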

One important aspect that nobody raised in that discussion is that moderation is different from censorship.

[–] Plebcouncilman@sh.itjust.works 2 points 22 hours ago* (last edited 22 hours ago)

I think the point of that article is closer to my own argument than I myself would have thought. I do still think that the problem is the design of the algorithm: a simple algorithm that just sorts content is not a problem. One that decides what to omit and what to push based on what it thinks will make me spend more time on the platform is problematic, and that is the kind of algorithm we should ban. So maybe the premise is: algorithms designed to make people spend more time on social media should be banned.

Engaging with another idea in there: I absolutely think that people should be able to say that Joe Biden is a lizard person and have that come up on everyone's feed, because ridiculous claims like that are easily shut down when everyone can see them and comment on how fucking dumb they are. But when the message only makes the rounds in communities primed to believe that Joe Biden is a lizard person, it gains credibility for them the more it is suppressed. We used to bring Ku Klux Klan members on TV to embarrass themselves in front of all of America, and it worked very, very well; it was a social sanity check. We no longer have this, and now we have bubbles in every part of the political spectrum believing all kinds of oversimplifications, lies, and propaganda.

[–] whydudothatdrcrane@lemmy.ml 1 points 22 hours ago

If you model and infer some aspect of the user that is considered personal (e.g. de-anonymization) or sensitive (e.g. inferring sexuality) by means of an inference system, then you are in GDPR territory. Further use of that inferred data down the pipeline can be construed as unethical. If platforms want to be transparent about it, they would have to open-source their user-modeling and decision-making systems.