this post was submitted on 11 Feb 2025
99 points (97.1% liked)
Casual Conversation
2261 readers
315 users here now
Share a story, ask a question, or start a conversation about (almost) anything you desire. Maybe you'll make some friends in the process.
RULES (updated 01/22/25)
- Be respectful: no harassment, hate speech, bigotry, or trolling. In short, disrespect is defined by escalation.
- Encourage conversation in your OP. This means choosing subject matter that invites responses when you can, and engaging in your own thread when possible. You won't be punished for trying.
- Avoid controversial topics (politics and societal debates come to mind, though that doesn't mean you can't touch on anything resembling them). There's a guide in the protocol book, offered as a moderation model, that can be used for this; it reads as vague until you realize it was written for rules like this one. At least four purple answers must apply to a "controversial" message for it to be allowed.
- Keep it clean and SFW: no illegal content or anything gross or inappropriate. A rule of thumb: if a recording of the conversation, posted to another platform, would earn someone a COPPA violation response, avoid that exchange here when possible.
- No solicitation, such as ads, promotional content, spam, surveys, etc. The chart linked above applies to spam as well, which is one reason its wording is vague; it covers several things. Again, a "spammy" message must match four purple answers before it's allowed.
- Respect privacy as well as truth: don't ask for or share personal information, and don't slander anyone. A rule of thumb: if the information "would be a copyright violation if the info was art," as another group put it, or if it alone can narrow someone down to 150 physical humans (Dunbar's number) or fewer, it's considered an excessive breach of privacy. Slander is defined as intentional, utilitarian misguidance at the expense (positive or negative) of a sentient entity. This often links back to or mixes with rule one, which implies, for example, that even something true can still accomplish what slander aims at, and that will be looked down upon.
Casual conversation communities:
Related discussion-focused communities
- !actual_discussion@lemmy.ca
- !askmenover30@lemm.ee
- !dads@feddit.uk
- !letstalkaboutgames@feddit.uk
- !movies@lemm.ee
founded 2 years ago
you are viewing a single comment's thread
view the rest of the comments
This (Lemmy) is one of the least bot-populated places I've been on the internet in the last ten years.
Look, critical thinking is tough, and part of the reason things like this are done is explicitly to make you question reality.
It's literally a symptom of why the Trump nuts are so unhinged. Like us, they can tell something is wrong; they know they can't fully trust traditional media, for example. But the problem is they stop believing it entirely, and then they don't know what to believe, so they start believing almost anything.
Please be careful to not fall down that hole of thinking. Use critical thinking and consider where you're at, what the sources are, and whether it's even worth your time to care about. Don't throw the baby out with the bathwater and stop believing in anything.
It takes effort, and it's not nice. But it's necessary. Just put on your skepticism hat while on the internet and try not to let it get to you.
Final point: technically, Lemmy isn't really experiencing growth. We're not big enough to be on the radar of the people pushing this AI bullshit.

It's kind of like how private torrent trackers stay under the radar by keeping their user numbers low. It takes a critical mass of piracy for anti-piracy measures to be taken, and private trackers just aren't big enough these days for authorities to bother with. (Pirate streaming sites are huge, on the other hand, and that's where enforcement has been cracking down lately.)

It's similar with the groups pushing AI. AI isn't free; it's costly and requires a lot of compute power. They aren't wasting it on no-name sites like Lemmy with a small but stable userbase. It's too costly, and it's easier to just ignore us. That doesn't mean they aren't here at all (looking right tf at you, realbitcoin.cash); there are definitely bots and astroturfers, but they're genuinely in the minority compared to real users.
https://lemmy.fediverse.observer/stats
To preface, I don't know a whole lot about AI bots. But we already see posts about the limitations of what AI can do or will allow, like bots refusing to repeat a given phrase. What about actual critical thinking, though? If most bots are trained off human behavior, and most people don't run on logical arguments, doesn't that create a gap?
Not that it's impossible to program such a bot, and again, my knowledge on this is limited, but it doesn't seem like the aim of current LLMs is to apply critical thought to arguments. They can repeat what others have said, or mix words around to recreate something similar to what others have said, but are there any bots actively questioning anything?
If there are bots that question societal narratives, they risk being unpopular amongst both the ruling class and the masses that interact with them. As long as those that design and push for AI do so with an aim of gaining popular traction, they will probably act like most humans do and "not rock the boat."
If the AI we interact with were instead to push critical thinking, without the biases that keep people from applying it consistently, that'd be awesome. I'd love to see logic bots that take part in arguments on the side of reason - it's something a bot could do all day, but a human can only do for so long.
Which is why when I see a comment that argues a cogent point against a popular narrative, I am more likely to believe they are human. For now.