[–] Yerbouti@sh.itjust.works 64 points 1 month ago* (last edited 1 month ago) (2 children)

I asked it about human rights in China in the browser version. It actually wrote a fully detailed answer, explaining that it is reasonable to conclude that China violates human rights, and then the reply disappeared right in front of me while I was reading. I managed to reproduce it and record my screen. The interesting thing is that this won't happen if you run it locally; I just tried, and the answer wasn't censored.

[–] drivepiler@lemmy.world 21 points 1 month ago (1 children)

I asked it about "CCP controversies" in the app and it did the exact same thing twice: a fully detailed answer, removed about a second after it finished.

[–] sushibowl@feddit.nl 15 points 1 month ago (1 children)

Most likely there is a separate censor LLM watching the model's output. When it detects something that needs to be censored, it zaps the output and stops further processing. That's why you can see the answer at first: the censor model is still "thinking."

When you download the model and run it locally, it has no such censorship.
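
A minimal sketch of that mechanism, assuming a streaming UI with a moderation pass running a beat behind it. Everything here is made up for illustration: `flagged()` stands in for the hypothetical censor model, and the regex is a placeholder for whatever it actually screens.

```python
import re

# Hypothetical demo of a censor "zapping" a streamed answer.
# The token list, the regex, and flagged() are all stand-ins; a real
# deployment would presumably call a second moderation model here.

SENSITIVE = re.compile(r"human rights|violates", re.IGNORECASE)

def flagged(text: str) -> bool:
    """Stand-in for the separate censor model scoring partial output."""
    return bool(SENSITIVE.search(text))

def stream_with_censor(tokens):
    shown = ""
    for tok in tokens:
        shown += tok
        print(tok, end="", flush=True)   # user watches the answer appear
        if flagged(shown):               # the censor lags a moment behind...
            print("\r\033[K", end="")    # ...then wipes the visible line (ANSI escape)
            print("Sorry, that's beyond my current scope.")
            return
    print()

stream_with_censor(["It is reasonable ", "to conclude that ",
                    "China violates ", "human rights ..."])
```

This would also fit the local-run observation: a censor like this lives in the serving stack, not in the model weights you download.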

[–] Swedneck@discuss.tchncs.de 2 points 1 month ago

What I don't understand is why they won't just delay showing the answer for a while to prevent this. Sure, that's a bit annoying for the user, but it's slightly more jarring to see an answer getting deleted, like the LLM is being shot in the head for saying the wrong thing.
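
For what it's worth, the delayed approach would just mean buffering the whole answer and running the censor once before display. A sketch, reusing the hypothetical `flagged()` from above:

```python
def answer_with_buffering(tokens):
    # Hold the full answer back until the censor has ruled on it.
    # No visible "zap", at the cost of the streaming UX and slower
    # perceived responses -- which may be why they don't do it.
    full = "".join(tokens)
    if flagged(full):
        print("Sorry, that's beyond my current scope.")
    else:
        print(full)
```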