localhost

joined 1 year ago
[–] localhost@beehaw.org 1 points 2 days ago* (last edited 2 days ago)

Was this ever a thing? I have never seen or heard anyone use "gen AI" to mean AGI. In fact, I can't find even a single instance of such usage.

[–] localhost@beehaw.org 4 points 2 days ago (1 children)

Deep learning has always been classified as AI. Some consider pathfinding algorithms to be AI. AI is a broad category.

AGI is the acronym you're looking for.
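To illustrate how broad the category is: even a plain breadth-first search for a path through a grid counts - it's in every AI textbook under "search", despite involving no learning whatsoever. A toy sketch:

```python
from collections import deque

# Minimal illustration (my own example, not from any source above):
# breadth-first pathfinding on a grid - the kind of plain search that
# AI textbooks have filed under "AI" for decades, no learning involved.

def bfs_path(grid, start, goal):
    """Shortest path from start to goal as a list of (row, col), or None.
    grid[r][c] == 1 marks a wall."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        r, c = path[-1]
        if (r, c) == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append(path + [(nr, nc)])
    return None

print(bfs_path([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0)))
```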

[–] localhost@beehaw.org 8 points 3 days ago* (last edited 3 days ago)

This feels to me like the LLM misinterpreted it as some kind of fictional villain talk and started autocompleting from there.

Could also be the model simply breaking. There was a time when Sydney (Bing AI, or whatever they call it now) had to be constrained to 10 messages per context and given some sort of supervisor on top of itself, because it would occasionally throw a fit or start threatening the user for no reason.
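Roughly, the shape of that fix would look something like the sketch below - purely my guess, since the actual implementation was never published, and every name in it is made up:

```python
# Minimal sketch of guardrails like the ones described above: a hard cap on
# messages per conversation plus a "supervisor" pass over each reply. Every
# name here is hypothetical - not Bing's real mechanism.

MAX_TURNS = 10  # the reported per-conversation cap

def generate_reply(history: list[str]) -> str:
    """Placeholder for the underlying chat model."""
    return f"(model reply to: {history[-1]})"

def supervisor_flags(reply: str) -> bool:
    """Toy stand-in for a second model screening replies for hostility."""
    banned = ("bad user", "i will report you", "threaten")
    return any(phrase in reply.lower() for phrase in banned)

def chat(messages: list[str]) -> list[str]:
    replies = []
    for turn, msg in enumerate(messages, start=1):
        if turn > MAX_TURNS:
            replies.append("(conversation limit reached - please start a new topic)")
            break
        reply = generate_reply(messages[:turn])
        if supervisor_flags(reply):
            reply = "(reply withheld by supervisor)"
        replies.append(reply)
    return replies

print(chat(["hello", "tell me a story"]))
```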

[–] localhost@beehaw.org 2 points 1 month ago

Oh damn, you're right, my bad. I got a new notification but didn't check the date of the comment. Sorry about that.

[–] localhost@beehaw.org 1 points 1 month ago (2 children)

That's a 1-month-old thread, my man :P

But it sounds interesting; I haven't heard of Dysrationalia before. A cursory search shows that it's a term coined by a single psychologist in his book. I've been able to find only one study that used the term, and it found that "different aspects of rational thought (i.e. rational thinking abilities and cognitive styles) and self-control, but not intelligence, significantly predicted the endorsement of epistemically suspect beliefs."

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6396694/

All in all, this seems to me more like a niche concept used by a handful of psychologists than something widely accepted in the field. Do you have anything I could read to familiarize myself with it further? Preferably something evidence-based, because we can ponder non-verifiable explanations all day and not get anywhere.

[–] localhost@beehaw.org 2 points 3 months ago (5 children)

The author's suggesting that smart people are more likely to fall for cons when they try to dissect them but can't identify the specific method being used, supposedly because they consider themselves to be infallible.

I disagree with this take. I don't see how that thought process is exclusive to people who are, or consider themselves to be, smart. I think the author is tying himself in knots to argue that smart people are actually the dumb ones, likely in preparation to drop an opinion that most experts in the field would disagree with.

[–] localhost@beehaw.org 22 points 4 months ago

The paracausal tarrasque seems like a genuinely interesting concept. Gives me False Hydra vibes.

[–] localhost@beehaw.org 7 points 4 months ago (2 children)

Both threads appeared on my feed near one another, and I figured it was on topic given that the other one is directly referenced in the main post here. If OP can reference another post to complain about hate, I think it's fair game for me to truthfully add that their conduct in that very same thread was also excessively hateful - how else are we supposed to discuss the main subject of this post?

[–] localhost@beehaw.org 7 points 4 months ago (1 children)

I have read the blog post you've linked, and it's full of exaggeration.

The developer rejected a PR that changed the documentation to use one instance of they/them instead of he/him, responded "This project is not an appropriate arena to advertise your personal politics.", and then promptly got brigaded. Similar PRs kept appearing and getting closed from time to time.

A satirical PR was opened and closed for being spam - despite the blogger's commentary, it's abundantly clear that the developer didn't call the person opening the PR a "spam" (what would that even mean?).

The project also had its code of conduct modified, probably in response to the brigading, to essentially include the aforementioned "not an appropriate arena to advertise your personal politics or religious beliefs" line - I don't see what part of this the blogger considers "white supremacist" language.

From what I can tell, this is all they've done. No racism, no sexism, no white supremacy. Would it be better if they just accepted the PR? Yes. Does it make the developer part of one of the worst groups of people that ever existed? No.

[–] localhost@beehaw.org 10 points 4 months ago (3 children)

When I created an account here, I thought Beehaw was specifically a platform where throwing vitriol unnecessarily is discouraged.

A non-native speaker being stubborn about not using "they/them" in gender-neutral contexts (especially when most if not all of these weren't even about people) is not enough to label them an incel, a transphobe, or a racist.

Intentionally mischaracterizing other human beings and calling them derogatory names that they don't deserve is, in my opinion, against the spirit of the platform.

[–] localhost@beehaw.org 12 points 4 months ago (9 children)

> The most recent example I’ve noticed is around the stuff with the Ladybird devs being weird about being asked to use inclusive pronouns, but it seems like a pattern.

You mean the thread where you, out of nowhere, called the maintainers "incels, transphobes, and racists" over a single instance of them using "he/him" as a gender-neutral pronoun in documentation and refusing to change it?

[–] localhost@beehaw.org 2 points 5 months ago (1 children)

Have you tried Cosmoteer? It's a pretty satisfying shipbuilder with resource and crew management, trading, and quests. Similar vibe to Reassembly.
