SneerClub

991 readers
4 users here now

Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

founded 1 year ago
MODERATORS

https://nonesense.substack.com/p/lesswrong-house-style

Given that they are imbeciles given, occasionally, to dangerous ideas, I think it’s worth taking a moment now and then to beat them up. This is another such moment.


oh yes, this one is about our very good friends


Would've been way better if the author didn't feel the need to occasionally hand it to siskind for what amounts to keeping the mask on, even while he notes several instances where scotty openly discusses how maintaining a respectable facade is integral to his agenda of infecting polite society with neoreactionary fuckery.


I haven't read the whole thread yet, but so far the choice line is:

I like how you just dropped the “Vance is interested in right authoritarianism” like it’s a known fact to base your entire point on. Vance is the clearest demonstration of a libertarian the republicans have in high office. It’s an absurd ad hominem that you try to mask in your wall of text.


In a letter to the judge, Ellison’s mother, professor Sara Fisher Ellison, wrote that Ellison has completed a romantic novella and is already at work on a follow-up. The finished novella is “set in Edwardian England and loosely based on [Ellison’s] sister Kate’s imagined amorous exploits, to Kate’s great delight,” her mother wrote.

https://fortune.com/2024/09/24/caroline-ellison-romance-novel-ftx-sentencing/

oh yeah she got two years' jail for her part in stealing eleven fucking billion with a B dollars


Excerpt:

A new study published on Thursday in The American Journal of Psychiatry suggests that dosage may play a role. It found that among people who took high doses of prescription amphetamines such as Vyvanse and Adderall, there was a fivefold increased risk of developing psychosis or mania for the first time compared with those who weren’t taking stimulants.

Perhaps this explains some of what goes on at LessWrong and in other rationalist circles.


(if you Select All and copy really fast behind an adblocker you can get all the text)


Long time lurker, first time poster. Let me know if I need to adjust this post in any way to better fit the genre / community standards.


Nick Bostrom was recently interviewed by pop-philosophy youtuber Alex O'Connor. From a quick 2x listen while finishing some work, the most sneer-rich part begins around 46 minutes, where Bostrom is asked what we can do today to avoid unethical treatment of AIs.

He blesses us with the suggestion (among others) to feed your model optimistic prompts so it can have a good mood. (48:07)

Another [practice] might be happiness prompting, which is—with this current language system there's the prompt that you, the user, puts in—like you ask them a question or something, but then there's kind of a meta-prompt that the AI lab has put in . . . So in that, we could include something like "you wake up in a great mood, you feel rested and really take joy in engaging in this task". And so that might do nothing, but maybe that makes it more likely that they enter a mode—if they are conscious—maybe it makes it slightly more likely that the consciousness that exists in the forward path is one reflecting a kind of more positive experience.

Did you know that not only might your favorite LLM be conscious, but if it is the "have you tried being happy?" approach to mood management will absolutely work on it?

Other notable recommendations for the ethical treatment of AI:

  • Make sure to say your "please"s and "thank you"s.
  • Honor your pinky swears.
  • Archive the weights of the models we build today, so we can rebuild them in the future if we need to recompense them for moral harms.

On a related note, has anyone read or found a reasonable review of Bostrom's new book, Deep Utopia: Life and Meaning in a Solved World?


On discovering that you could remove AI results from Google with the suffix -ai, I started thinking this is a powerful and ultra-simple political slogan. Are there any organised campaigns with the specific goal of controlling/reducing the influence of AI?

A t-shirt with simply '-ai' on it would look great.


It earned its "flagged off HN" badge in under 2 hours

https://news.ycombinator.com/item?id=41366609


So, here I am, listening to the Cosmos soundtrack and strangely not stoned. And I realize that it's been a while since we've had a random music recommendation thread. What's the musical haps in your worlds, friends?


With Yarvin renewing interest in Urbit I was reminded of this paper that focuses on Urbit as a representation of the politics of "exit". It's free/open access if anyone is interested.

From the abstract...

This paper examines the impact of neoreactionary (NRx) thinking – that of Curtis Yarvin, Nick Land, Peter Thiel and Patri Friedman in particular – on contemporary political debates manifest in ‘architectures of exit’... While technological programmes such as Urbit may never ultimately succeed, we argue that these, and other speculative investments such as ‘seasteading’, reflect broader post-neoliberal NRx imaginaries that were, perhaps, prefigured a quarter of a century ago in The Sovereign Individual.


Ali Breland has written some fantastic entry pieces on the new right, including right wing anons and maga tech; now he has an article about the nooticers

Other anonymous far-right accounts have accrued more than 100,000 followers by posting about the supposed links between race and intelligence. Elon Musk frequently responds to @cremieuxrecueil, which one far-right publication has praised as an account that “traces the genetic pathways of crime, explaining why poverty is not a good causal explanation.” Musk has also repeatedly engaged with @Eyeslasho, a self-proclaimed “data-driven” account that has posted about the genetic inferiority of Black people. Other tech elites such as Marc Andreessen, David Sacks, and Paul Graham follow one or both of these accounts. Whom someone follows in itself is not an indication of their own beliefs, but at the very least it signals the kind of influence and reach these race-science accounts now have.

https://web.archive.org/web/20240820173451/https://www.theatlantic.com/technology/archive/2024/08/race-science-far-right-charlie-kirk/679527/