ZDL

joined 1 year ago
[–] ZDL@ttrpg.network 2 points 18 hours ago

I've been listening to Tanya Tagaq's Retribution album this morning. She's by far the best Inuk-punk performer in the world, no exceptions. (I say this with confidence because she's the only Inuk-punk performer in the world. 🤭)

[–] ZDL@ttrpg.network 2 points 19 hours ago

Simon Whistler is a presenter and it often shows. He's pretty entertaining, and he has the look of a scholar, which gives him some gravitas and credibility when he talks, but he isn't particularly knowledgeable about anything (including topics he's already covered on one channel when presenting the same topic on another).

So of course he thinks ChatGPT is smart.

[–] ZDL@ttrpg.network 1 points 19 hours ago (1 children)

China? You mean that place that blocks Xhitter?

[–] ZDL@ttrpg.network 1 points 19 hours ago (1 children)

I'm not a Xhitter user so I have no idea what "Subscriptions" even means. How is it different from "Followers"?

[–] ZDL@ttrpg.network 1 points 19 hours ago

Hmmm... Maybe some of those ultra-sticky PVC stickers then. Those can go on fast and are a real bitch to remove.

 

So when they return to port they can just Scandinavian.

explanation if needed: "scan the navy in"

 

Apparently he doesn't understand cyberpunk either, which explains so much about him.

[–] ZDL@ttrpg.network 4 points 21 hours ago

In general there is no "neutral" source of information. At all. Yes, including Wikipedia with its "NPOV" policy. (Its own policies even say there's no such thing, so I'm not exactly saying anything new here.) Most of the sources you cite as "neutral" will actually be sources that agree, broadly, with your own cultural assumptions, assumptions you are likely not even aware of, let alone actively questioning.

That being said, even though there is no such thing as a neutral source of information, you can still have good sources of information. Wikipedia is one such. Is it perfect? No, because nothing is. But it is good enough for most general knowledge. It gets a bit dicey as a source when you leave the realm of western assumptions, or when you enter the realm of contentious politics. But for most things it's just fine as a quick resource for getting information. It's a decent encyclopedia whose ease of access isn't matched by anybody else.

Reddit, however, is not. Reddit has no disciplined approach to gathering and sharing information. Wikipedia is an encyclopedia (with all the strengths and flaws that form entails). Reddit is a lot of people talking loudly at a gigantic garden party from Hell. Over by the roses you have a bunch of people loudly expounding on the virtues of the Nazi party. Over by the fountain you've got another group loudly expounding on how vile and gross the Nazis were, casting glares in the direction of the roses. In the hedge maze you've got a bunch of people meandering around and laughing while they babble inanities. Out in the driveway you've got a bunch of Morris dancers practising their craft. It may be fun if you like that kind of thing, but it is absolutely not a source of reliable information unless you do so much fact-checking that you might as well skip the reddit step and go straight to the places you're using to check the facts.

ChatGPT, to continue using strained analogies, is that weird uncle in your family. He's personable, bright, cheerful, and seems to know a lot of stuff. But he's a bit off and off-putting somehow, and that's because behind the scenes, when nobody's looking, he's taking a lot of hallucinogens. He does know a lot. A whole lot. But he also makes shit up from the weird distortions the drugs in his system impose on his perceptions. As a result you never know when he's telling the truth or when he's made up a whole fantasy world to answer your question.

My personal experience with ChatGPT came from asking it about a singer I admire. She's not a really big name and not a lot of people write about her. I wanted to find more of her work and thought ChatGPT could at least give me a list of albums featuring her. And it did! It gave me a dozen albums to look for. Only … none of them existed. Not a single one. ChatGPT made up a whole discography for this singer instead of saying "sorry, I don't know". And when I went looking for them and found they didn't exist, I told it this and it did its "sorry, I made a mistake, here's the right list" thing ... and that list contained half of the old list that I'd already pointed out didn't exist, and half new entries that, you guessed it, also didn't exist.

And the problem is that ChatGPT is just as certain when hallucinating as it is when telling things that are true. It is PARTICULARLY unsuited to be a source of information.

[–] ZDL@ttrpg.network 7 points 21 hours ago

you literally can cross-check the sources if you think it is making a wrong claim

When the source is readily available, sure. But a lot of stuff is not online, and books go out of print and can be hard to track down. There's a sizable set of bad actors on Wikipedia who rely on this, quoting passages from out-of-print books out of context to support their stance.

That being said, this is a minor problem and Wikipedia is an acceptable source of general knowledge. Claiming it's a bad source of information would apply equally to any other lay-level source, including the Encyclopedia Britannica.

[–] ZDL@ttrpg.network 6 points 1 day ago (1 children)

That's just raw numbers.

If one in 100,000 people is a total shitheel, then in an environment with a million users (and I don't think FidoNet was ever anywhere NEAR that size!) you've got ten total shitheels.

Today there are 5.5 billion people on the Internet. At that same rate, that's 55,000 total shitheels who can interact with you.

[–] ZDL@ttrpg.network 6 points 1 day ago (3 children)

I think you have a few rose-tinted lenses between you and your memories. There's a reason why FidoNet, say, picked up nicknames like "Fight-o-Net" back then. The things people argued about weren't all that different from now.

[–] ZDL@ttrpg.network 2 points 2 days ago

Weird thing is, neither do I. Sadly, I have people who keep sending things to me asking whether this is real or not. (I guess I'm the only person in my social circle with about a third of the Confucian canon on my bookshelf.)

[–] ZDL@ttrpg.network 3 points 2 days ago (2 children)

Another thing to watch for is a quote that seems just a bit too "on the nose" for some modern concern of the poster. Like one I vaguely recall from a few months back that equated banks with tyranny and attributed it to Confucius. Confucius lived in the 6th to 5th century BCE. The first modern bank that could have done what the fake quote described started in the 18th century CE. But people were sending this around, breathlessly claiming that even the ancient Chinese knew that banks were evil.

🙄

 

If only this were instead his membership in Society in general being revoked.

[–] ZDL@ttrpg.network 1 points 4 days ago

I'm pretty obviously Artificial Idiocy, not Artificial Intelligence.

 

 

The noted anti-trans Apartheid Manchild wants to have babies?

 

From the time a full subway car leaves a Beijing metro station to the time the next one takes its place is 51 seconds.

Here in Wuhan it ranges from 2 minutes to 5 minutes depending on the line and time of day. In Beijing it's 51 seconds.

Wow.

NGL, I'm kinda jealous.

 

Recalling that LLMs have no notion of reality and thus no way to map what they're saying to things that are real, you can actually put an LLM to use in destroying itself.

The line of attack this one helped me with is a "Tlön/Uqbar"-style attack: with the LLM's help, make up information that is clearly labelled as bullshit (a label the bot won't understand), spread it around to others who use the same LLM to rewrite, summarize, etc. the information (keeping the warning that everything past this point is bullshit), and wait for the LLM's training data to get updated with the new information. All the while, ask questions about the bullshit data to raise its priority in the front-end so there's a greater chance of that bullshit being hallucinated into answers.

If enough people worked on the same set, we could poison a given LLM's training data (and likely many more LLMs besides, since they all suck at the same social teat for their data).
