this post was submitted on 11 Feb 2025
Casual Conversation
It was all fun and games two years ago when most AI videos were obvious (6 fingers, 7 fingers, etc.).

But things are getting out of hand. I'm at the point where I question whether Lemmy, Reddit, YouTube comments, etc. are even real. I wouldn't even be surprised if I was playing Overwatch 5v5 with 9 AIs, with three of them programmed to act like kids, four to be non-toxic, and so on.

This whole place could just be an illusion.

I can't prove it. It's just less fun now.

The upside is I go to the gym more frequently and just hang out with people I know are 100% real. Nothing is worse than having a conversation with an AI person. She was just an average 7/10, and I'm an average 5/10, so I thought it could be a real thing, but it turned out I was chatting with an AI. A 7/10 AI. The creator made the person less perfect-looking to seem more realistic.

Nice. What is the point of the internet when everything is fake, or can only be identified as fake with deep research?

I'm 32, and I know many young people who also hate it. To be fair, I only know people who hate on AI nowadays. This has to end.

[–] SnotFlickerman@lemmy.blahaj.zone 47 points 22 hours ago* (last edited 21 hours ago) (1 children)

This (Lemmy) is one of the least bot-populated places I've been on the internet in the last ten years.

Look, critical thinking is tough, and part of the reason things like this are done is explicitly to make you question reality.

It's literally a symptom of why the Trump nuts are so unhinged. Like us, they can tell something is wrong; they know they can't fully trust traditional media, for example. But the problem is they stop believing it entirely, and then, not knowing what to believe, they start believing almost anything.

Please be careful not to fall down that hole of thinking. Use critical thinking and consider where you are, what the sources are, and whether it's even worth your time to care. Don't throw the baby out with the bathwater and stop believing in anything.

"We'll know our disinformation program is complete when everything the American public believes is false." - William J. Casey, CIA Director (1981)

It takes effort, and it's not nice. But it's necessary. Just put on your skepticism hat while on the internet and try not to let it get to you.

Final point: technically, Lemmy isn't really experiencing growth. We're not big enough to be on the radar of the people pushing this AI bullshit.

It's kind of like how private torrent trackers stay under the radar by keeping their user numbers low. It takes a critical mass of piracy for anti-piracy measures to be taken, and private trackers just aren't big enough these days for authorities to bother with. (Pirate streaming sites are huge, on the other hand, and that's where enforcement has been cracking down lately.)

It's similar with the groups pushing AI. AI isn't free; it's costly and requires a lot of compute power. They aren't wasting it on no-name sites like Lemmy with a small but stable userbase. It's too costly, and easier to just ignore us. That doesn't mean they aren't here at all (looking right tf at you, realbitcoin.cash); there are definitely bots and astroturfers, but they're genuinely in the minority compared to real users.

https://lemmy.fediverse.observer/stats

[–] Whats_your_reasoning@lemmy.world 2 points 7 hours ago* (last edited 7 hours ago)

critical thinking is tough

To preface, I don't know a whole lot about AI bots. But we already see posts about the limitations of what AI can do or will allow, like bots refusing to repeat a given phrase. But what about actual critical thinking? If most bots are trained on human behavior, and most people don't run on logical arguments, doesn't that create a gap?

Not that it's impossible to program such a bot (and again, my knowledge here is limited), but it doesn't seem like the aim of current LLMs is to apply critical thought to arguments. They can repeat what others have said, or remix words into something similar to what others have said, but are there any bots actively questioning anything?

If there are bots that question societal narratives, they risk being unpopular amongst both the ruling class and the masses that interact with them. As long as those that design and push for AI do so with an aim of gaining popular traction, they will probably act like most humans do and "not rock the boat."

If the AI we interact with were instead to push critical thinking, without the biases that keep people from applying it perfectly, that'd be awesome. I'd love to see logic bots that take part in arguments on the side of reason - it's something a bot could do all day, but a human can only do for so long.

Which is why when I see a comment that argues a cogent point against a popular narrative, I am more likely to believe they are human. For now.