titotal

joined 1 year ago
[–] titotal@awful.systems 25 points 7 months ago (7 children)

Oxford instituted a fundraising freeze. They knew the org could have gotten oodles of funding from any number of strange tech people; they disliked it so much they didn't care.

[–] titotal@awful.systems 17 points 7 months ago (3 children)

Fun revelations that SBF was going to try to invest in Elon's Twitter purchase because he thought it would make money (lol), and was seriously proposing "put Twitter on the blockchain" as his pitch. One of the dumbest ideas I've ever heard, right behind every other "X on the blockchain" proposal.

[–] titotal@awful.systems 14 points 7 months ago (1 children)

I'm sure they could have found someone in the EA ecosystem to throw them money if it weren't for the fundraising freeze. This seems like a case of Oxford killing the institute deliberately. The 2020 freeze predates the Bostrom email, and this guy who was consulted by Oxford said the relationship had been dysfunctional for many years.

It's not like Oxford is hurting for money; they probably just decided FHI was too much of a pain to work with and was hurting the Oxford brand.

[–] titotal@awful.systems 7 points 7 months ago (1 children)

I feel this makes it an unlikely great filter, though. Surely some aliens would be less stupid than humanity?

Or they could be on a planet with far smaller fossil fuel reserves, so they'd never get the opportunity to kill themselves.

[–] titotal@awful.systems 22 points 7 months ago (2 children)

I feel really bad for the person behind the "notkilleveryonism" account. They've been completely taken in by AI doomerism and are clearly terrified by it. They'll either be terrified for their entire life even as the predicted doom fails to appear, or realise at some point that they wasted an entire portion of their life and their entire system of belief is a lie.

False doomerism is really harming people, and that sucks.

[–] titotal@awful.systems 8 points 7 months ago

Yeah, the Fermi paradox really doesn't work here: an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.

[–] titotal@awful.systems 20 points 7 months ago* (last edited 7 months ago) (2 children)

The Future of Humanity Institute is the EA longtermist organisation at Oxford run by Swedish philosopher Nick Bostrom, who got in trouble for an old racist email and a subsequent bad apology. It is the one rumoured to be shutting down.

The Future of Life Institute is the EA longtermist organisation run by Swedish physicist Max Tegmark, who got in trouble for offering to fund a neo-nazi newspaper (he didn't actually go through with it and claimed ignorance). It is the one that got the half-billion-dollar windfall.

I can't imagine how you managed to conflate these two highly different institutions.

[–] titotal@awful.systems 13 points 7 months ago (5 children)

The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

The real problem with the Rationalists is that they *think they can do better*: that knowing a few cognitive fallacies and logical tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.

We need to explain that yes, science has its flaws, but it still shits all over pseudobayesianism.

[–] titotal@awful.systems 16 points 7 months ago (1 children)

This definitely reads like the tedious "April Fools" posts where you can tell they are actually 90% serious but want the cover of a joke.

[–] titotal@awful.systems 14 points 8 months ago (3 children)

To be honest, I'm just kinda annoyed that he ended on the story about his mate Aaron, who went on surfing trips to Indonesia and gave money to his new poor village friends. The author says Aaron is "accountable" to the village, but that's not true, because Aaron is a comparatively rich first-world academic who can go home at any time. Is Aaron "shifting power" to the village? No, because if they don't treat him well, he'll stop coming to the village and stop funding their water supply upgrades. And he personally benefits, with praise and friendship, from his purchases.

I'm sure Aaron is a fine guy, and I'm not saying he shouldn't give money to his village mates, but this is not a good model for philanthropy! A software developer who just donates a bunch of money unconditionally to the village (via GiveDirectly or something) is arguably more noble than Aaron here: donating without any personal benefit or feel-good surfer energy.

[–] titotal@awful.systems 13 points 8 months ago (12 children)

I enjoyed the takedowns (wow, this guy really hates MacAskill), but the overall conclusions of the article seem a bit lost. If malaria nets are like a medicine with side effects, then the solution is not to throw away the medicine. (Giving away free nets to people probably does not have a significant death toll!) At the end they seem to suggest, like, voluntourism as the preferred alternative? I don't think Africa needs to be flooded with dorky software engineers personally going to villages to "help out".

[–] titotal@awful.systems 10 points 8 months ago (5 children)

Apparently there's a new coding AI that is supposedly pretty good. Zvi does the writeup, and logically extrapolates what will happen for future versions, which will obviously self improve and... solve cold fusion?

James: You can just 'feel' the future. Imagine once this starts being applied to advanced research. If we get a GPT5 or GPT6 with a 130-150 IQ equivalent, combined with an agent. You're literally going to ask it to 'solve cold fusion' and walk away for 6 months.

...

Um. I. Uh. I do not think you have thought about the implications of ‘solve cold fusion’ being a thing that one can do at a computer terminal?

Yep. The recursive self improving AI will solve cold fucking fusion from a computer terminal.


Brain genius Beff Jezos manages to butcher both philosophy and physics at the same time!
