SneerClub


Hurling ordure at the TREACLES, especially those closely related to LessWrong.

AI-Industrial-Complex grift is fine as long as it sufficiently relates to the AI doom from the TREACLES. (Though TechTakes may be more suitable.)

This is sneer club, not debate club. Unless it's amusing debate.

[Especially don't debate the race scientists, if any sneak in - we ban and delete them as unsuitable for the server.]

176
177

In today's episode, Yud tries to predict the future of computer science.

178
179
180

The Future of Sovereign AI

We still don’t know just how important and disruptive artificial intelligence will be, but one thing seems clear: the power of AI should not remain cordoned off by centralized companies. Our panelists—Cody Wilson of Defense Distributed, Native Planet’s ~mopfel-winrux, and Tlon’s Lukas Buhler, along with @mogmachine from Bittensor and David Capone from Harmless AI—are the perfect team to explore the possibilities unlocked by more sovereign, decentralized, and open AI.

[A bitcoiner, an ancap, a 3-D gun printer, an alt-righter, the founder of Hatreon, and a convicted kiddie fucker walk into a bar. The barman picks up a baseball bat and says "get the fuck out of my bar, Cody."]

Cancelling the Culture Industry

In a world of moral totalitarianism, sometimes freedom looks like a short story about sex tourism in the Philippines. In this panel, author Sam Frank hosts MRB editor-in-chief Noah Kumin, romance writer Delicious Tacos, sex detective Magdalene Taylor, and frog champion Lomez of Passage Press. Join them for a freewheeling discussion of saying whatever they want while evading the digital hall monitors.

[not being able to live within five hundred feet of a school is a small price to pay for true freedom]

Securing Urbit

How do we make Urbit secure? And what does a secure Urbit look like? The great promise of Urbit has always been that it can provide a sovereign computing platform for the individual—a means by which to do everything you would want to do on a computer without giving up your data. For that dream to be fulfilled, Urbit should be as secure as your crypto hardware wallet—perhaps more so. Moderated by Rikard Hjort, Urbit experts Logan Allen and Joe Bryan discuss with Urbit fan and cybersecurity expert Ryan Lackey.

[as secure as a crypto hardware wallet, you say]

Rebooting the Arts

The culture war is over—Culture lost. Now it’s a race to build a new one. Media whisperer Ryan Lambert leads a conversation with Play Nice founder/impresario Hadrian Belove, trend forecaster Sean Monahan, and controversial art-doc collective Kirac. They discuss how to win the culture race and create a new arts ecosystem out of the rubble.

[the answer is to get Peter Thiel to try to magic up Dimes Square out of nothing, isn't it?]

How to Fund a New World

Cosimo de Medici persuaded Benvenuto Cellini, the Florentine sculptor, to enter his service by writing him a letter which concluded, 'Come, I will choke you with gold.' Join UF Director of Markets Andrew Kim as he discusses how to get more gold onto Urbit with Jake Brukhman of Coinfund, Jae Yang of Tacen, @BacktheBunny from RabbitX and Evan Fisher of Portal VC.

[the answer's still Thiel, isn't it?]

181

According to Wikipedia, Runaway received "mixed reviews".

182

Some light sneerclub content in these dark times.

Eliezer compliments Musk on the creation of Community Notes, a project which predates Musk's takeover of Twitter by a couple of years (see the join date: https://twitter.com/CommunityNotes).

In reaction, Musk admits he never read HPMOR and suggests a watered-down Turing test involving HPMOR.

Eliezer invents HPMOR wireheads in reaction to this.

183

I'm seeing a SBF-like trajectory for Altman here. He's building the foundation of his public persona and business on a house of cards that will come tumbling down at some point.

The only things that are sneer-worthy are the comments from LWers, like Roko, who jump to immediate dismissal of what seems like pretty compelling testimony and evidence of fucked-up things Altman did and continues to do to his own family.

To stick with the theme of this group, here's a sneer coming from inside the house, in response to Roko's dismissal, which was based primarily on his own feels:

Bayes can judge you now: your analysis is half-arsed, which is not a good look when discussing a matter as serious as this. All you’ve done is provide one misleading statistic.

I didn't post this to sneer, however. I think it's pretty important information that should be known.

edit: I should also mention that Annie has a Twitter account on which she's posted some good takes, sneers, and zingers. I think she's worth a follow and could use some support, and she has some projects that could also use support.

184

Rationalist check-list:

  1. Incorrect use of analogy? Check.
  2. Pseudoscientific nonsense used to make your point seem more profound? Check.
  3. Tortured use of probability estimates? Check.
  4. Over-long description of a point that could just as easily have been made in one sentence? Check.

This email by SBF is basically one big malapropism.

185

The original is here, but you aren't missing any context; that's the twit.

I could go on and on about the failings of Shakespear... but really I shouldn't need to: the Bayesian priors are pretty damning. About half the people born since 1600 have been born in the past 100 years, but it gets much worse than that. When Shakespear wrote almost all Europeans were busy farming, and very few people attended university; few people were even literate -- probably as low as ten million people. By contrast there are now upwards of a billion literate people in the Western sphere. What are the odds that the greatest writer would have been born in 1564? The Bayesian priors aren't very favorable.
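[For anyone who wants to watch the "damning prior" actually get computed: below is a minimal sketch of the base-rate arithmetic the quote gestures at. It assumes, as the tweet implicitly does, a uniform prior in which every literate person is equally likely to be "the greatest writer"; the population figures are the tweet's own rough numbers, and the helper function is mine, not SBF's.]

```python
# A minimal sketch of the naive base-rate arithmetic in the quote above.
# Assumption (the tweet's, made explicit): a uniform prior, i.e. every
# literate person is equally likely to be "the greatest writer".
# The population figures are the tweet's own rough numbers.

def uniform_prior(candidates: int, total: int) -> float:
    """P(the greatest writer comes from this pool), under a uniform prior."""
    return candidates / total

literate_circa_1600 = 10_000_000    # "probably as low as ten million people"
literate_now = 1_000_000_000        # "upwards of a billion literate people"
total = literate_circa_1600 + literate_now  # ignoring everyone in between

print(f"P(born around 1564): {uniform_prior(literate_circa_1600, total):.2%}")
print(f"P(born recently):    {uniform_prior(literate_now, total):.2%}")
# ~0.99% vs ~99.01% -- the whole "damning prior" is just population counting.
```

[Which is to say: the "Bayesian" argument is head-counting under a lottery-ticket model of literary greatness, and it is only as damning as that assumption.]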

edited to add: this seems to be an excerpt from the fawning book the Big Short/Moneyball guy (Michael Lewis) wrote about him, which was recently released.

186

Interview on the Australian anti-fascist radio show Yeah Nah Pasaran! with Dan McQuillan, computing lecturer and author of "Resisting AI: An Anti-fascist Approach to Artificial Intelligence". Interesting comments on AI as a tool of/for austerity politics, and an argument that AI is inherently anti-worker. Probably nothing new to people in here, but eloquently stated and put into a wider political context.

187

Nitter link

With interspersed sneerious rephrasing:

In the close vicinity of sorta-maybe-human-level general-ish AI, there may not be any sharp border between levels of increasing generality, or any objectively correct place to call it AGI. Any process is continuous if you zoom in close enough.

The profound mysteries of reality-carving mean I get to move the goalposts as much as I want. Besides, I need to re-iterate that the foompocalypse is imminent!

Unless, empirically, somewhere along the line there's a cascade of related abilities snowballing. In which case we will then say, post facto, that there's a jump to hyperspace which happens at that point; and we'll probably call that "the threshold of AGI", after the fact.

I can't prove this, but it's the central tenet of my faith: we will recognize the face of god when we see it. I regret that our hindsight-is-20/20 event is so ~~conveniently~~ inconveniently placed in the future, the bad one no less.

Theory doesn't predict-with-certainty that any such jump happens for AIs short of superhuman.

See how much authority I have: it is not "My Theory", it is "The Theory". I have stared into the abyss, and it peered back and marked me as its prophet.

If you zoom out on an evolutionary scale, that sort of capability jump empirically happened with humans--suddenly popping out writing and shortly after spaceships, in a tiny fragment of evolutionary time, without much further scaling of their brains.

The forward arrow of Progress™ is inevitable! S-curves don't exist! The y-axis is practically infinite! We should extrapolate only from the past (eugenically scaled, certainly) century! Almost 10,000 years of written history, and millions of years of unwritten history for the human family, count for nothing!

I don't know a theoretically inevitable reason to predict certainly that some sharp jump like that happens with LLM scaling at a point before the world ends. There obviously could be a cascade like that for all I currently know; and there could also be a theoretical insight which would make that prediction obviously necessary. It's just that I don't have any such knowledge myself.

I know the AI god is a NeCeSSarY outcome; I'm just not sure where to plant the goalposts for LLMs and still be taken seriously. See how humble I am for admitting fallibility on this specific topic.

Absent that sort of human-style sudden capability jump, we may instead see an increasingly complicated debate about "how general is the latest AI exactly" and then "is this AI as general as a human yet", which--if all hell doesn't break loose at some earlier point--softly shifts over to "is this AI smarter and more general than the average human". The world didn't end when John von Neumann came along--albeit only one of him, running at a human speed.

Let me vaguely echo some of my beliefs:

  • History is driven by great men (of which I must be one, though I cannot say so openly), see our dearest elevated and canonized von Neumann.
  • JvN was so much above the average plebeian man (IQ and eugenics good?), and the AI god will be greater.
  • The greatest single entity/man will be the epitome of Intelligence™, breaking the wheel of history.

There isn't any objective fact about whether or not GPT-4 is a dumber-than-human "Artificial General Intelligence"; just a question of where you draw an arbitrary line about using the word "AGI". Albeit that itself is a drastically different state of affairs than in 2018, when there was no reasonable doubt that no publicly known program on the planet was worthy of being called an Artificial General Intelligence.

No no no, General (or Super) Intelligence is not a completely un-scoped metric. Again, it is merely a fuzzy boundary where I will be able to arbitrarily move the goalposts while claiming it's my opponents who are moving them!

We're now in the era where whether or not you call the current best stuff "AGI" is a question of definitions and taste. The world may or may not end abruptly before we reach a phase where only the evidence-oblivious are refusing to call publicly-demonstrated models "AGI".

Purity-testing ahoy: you will be instructed to say shibboleth three times and present your Asherah poles for inspection. Do these mean unbelievers not see these N-rays as I do? What do you mean what we have (or almost have, I don't want to be too easily dismissed) is not evidence of sparks of intelligence?

All of this is to say that you should probably ignore attempts to say (or deniably hint) "We achieved AGI!" about the next round of capability gains.

Wasn't Sam the Altman so recently cheeky? He'll ruin my grift!

I model that this is partially trying to grab hype, and mostly trying to pull a false fire alarm in hopes of replacing hostile legislation with confusion. After all, if current tech is already "AGI", future tech couldn't be any worse or more dangerous than that, right? Why, there doesn't even exist any coherent concern you could talk about, once the word "AGI" only refers to things that you're already doing!

Again, I reserve the right to remain arbitrarily alarmist, to maintain my doom cult.

Pulling the AGI alarm could be appropriate if a research group saw a sudden cascade of sharply increased capabilities feeding into each other, whose result was unmistakeably human-general to anyone with eyes.

Observing intelligence is famously something eyes are SufFicIent for! No, this is not my implied racist judge-someone-by-the-color-of-their-skin values seeping through.

If that hasn't happened, though, deniably crying "AGI!" should be most obviously interpreted as enemy action to promote confusion; under the cover of selfishly grabbing for hype; as carried out based on carefully blind political instincts that wordlessly notice the benefit to themselves of their 'jokes' or 'choice of terminology' without there being allowed to be a conscious plan about that.

See, unbelievers! I can also detect the currents of misleading hype; I am no buffoon. Only, these hypesters are not undermining your concerns, they are undermining mine: namely, damaging our ability to appear serious and recruit new cult members.

188
189

“We have unusually strong marketing connections; Vitalik approves of us; Aella is a marketing advisor on this project; SlateStarCodex is well aware of us. We are quite networked in the Effective Altruism space. We could plausibly get an Elon tweet.”

From the short investor-spiel document. Also, they want to just bypass the FDA?

190
191

I don’t think I posted this before, but if I did lemme know.

https://archive.ph/bVUba

192

Caught the bit on LessWrong and figured you guys might like it.

193

source nitter link

@EY
This advice won't be for everyone, but: anytime you're tempted to say "I was traumatized by X", try reframing this in your internal dialogue as "After X, my brain incorrectly learned that Y".

I have to admit, for a brief moment I thought he was correctly expressing displeasure at Twitter.

@EY
This is of course a dangerous sort of tweet, but I predict that including variables into it will keep out the worst of the online riff-raff - the would-be bullies will correctly predict that their audiences' eyes would glaze over on reading a QT with variables.

Fool! This bully (is it weird to speak in the third person?) thinks using variables here makes it MORE sneer-worthy, especially since this appears to be general advice, but I would struggle to think of a single instance in my life where it's been applicable.

194

(whatever the poster looks like and wherever they live, their personality is a scrawny nerd in a basement)

195
  • original post detailing mistreatment of employees
  • meta post about how a good rationalist should correctly epistemically assess the fairness of the post cataloguing and confirming the bad behaviour

tl;dr these fucking guys

196

Choice quote:

Putting “ACAB” on my Tinder profile was an effective signaling move that dramatically improved my chances of matching with the tattooed and pierced cuties I was chasing.

197
198

This is a slightly emotional response off the back of a recent discussion with a heavily TESCREAList family member, which concluded with his belief that there is a very small number of humans with incredible information-processing abilities who know the real truth about humanity's future. He knows I hate Yudkowsky; I know he considers him one of the most important voices of our time. It's not fun listening to someone I love and value heading into borderline Scientology territory. I kind of feel like, just as with Peterson a few years ago, this is the next post-truth battle on our hands.

199

This, btw, is why we now see some of the TPOT rationalists microdosing street meth as a substitute. Also because they're idiots, of course.

somehow this man still has a medical license
