this post was submitted on 24 Feb 2025
24 points (100.0% liked)

TechTakes

1651 readers
225 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 2 years ago

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

top 50 comments
[–] mirrorwitch@awful.systems 6 points 16 hours ago* (last edited 16 hours ago)

small domino: Paul Graham's "Hackers and Painters" (2003)

....

big domino: "AI" "art" "realism"

[–] froztbyte@awful.systems 8 points 16 hours ago

looks like they felt that chatgpt pro wasn't losing money fast enough, you can now get sora on the pro sub

[–] blakestacey@awful.systems 12 points 23 hours ago* (last edited 23 hours ago) (1 children)

Whilst flipping through LessWrong for things to point and laugh at, I discovered that Sabine Hossenfelder is apparently talking about "AI" now.

Sabine Hossenfelder is a theoretical physicist and science communicator who provides analysis and commentary on a variety of science and technology topics.

She also provides transphobia using false balance rhetoric.

x.AI released its most recent model, Grok 3, a week ago. Grok 3 outperformed on most benchmarks

And truly, no fucks were given.

Grok 3 still features the same problems of previous LLM models, including hallucinations

The fundamental problem remains fundamental? You don't say.

[–] mii@awful.systems 2 points 5 hours ago

Oh god, Sabine "capitalism is when people buy things and academia is basically communism" Hossenfelder has opinions about AI now.

[–] blakestacey@awful.systems 11 points 1 day ago (1 children)

Be sure to pick up your copy of The War on Science, edited by ... Lawrence Krauss, featuring ... Richard Dawkins and ... Jordan Peterson.

Buchman on Bluesky wonders,

How did they not get a weinstein?

Man, I'm so glad I checked out on that whole environment and always so so sad when anything from that group escapes containment. It's such a reductive and myopic view of what science is and what people are capable of.

[–] BlueMonday1984@awful.systems 10 points 1 day ago (1 children)

New opinion piece from the Guardian: AI is ‘beating’ humans at empathy and creativity. But these games are rigged

The piece is one lengthy sneer aimed at tests trying to prove humanlike qualities in AI, with a passage at the end publicly skewering techno-optimism:

Techno-optimism is more accurately described as “human pessimism” when it assumes that the quality of our character is easily reducible to code. We can acknowledge AI as a technical achievement without mistaking its narrow abilities for the richer qualities we treasure in each other.

I feel like there's a value judgement underlying the way these studies are designed, one that leads to yet another example of AI experiments spitting out the exact result they were told to. This was most obvious in the second experiment described in the article, about generating ideas for research. The fact that both AI and human respondents had to fit a format to hide stylistic tells suggests an assumption that those tells don't matter. Similarly, these experiments are designed around the assumption that reddit posts are a meaningful illustration of empathy and that there's no value in actually sharing space and attention with another person. While I'm sure they would phrase it as trying to control for extraneous factors (i.e. to make sure that the only difference perceivable is in the level of empathy), this presupposes that style, affect, mode of communication, etc. don't actually have any value in showing empathy, creativity, or whatever, which is blatantly absurd to anyone who has actually interacted with a human person.

[–] BlueMonday1984@awful.systems 10 points 1 day ago (1 children)

New piece from Baldur Bjarnason: AI and Esoteric Fascism, which focuses heavily on our very good friends and their link to AI as a whole. Ending quote's pretty solid, so I'm dropping it here:

I believe that the current “AI” bubble is an outright Neo-Nazi project that cannot be separated from the thugs and fascists that seem to be taking over the US and indivisible from the 21st century iteration of Esoteric Neo-Nazi mysticism that is the TESCREAL bundle of ideologies.

If that is true, then there is simply no scope for fair or ethical use of these systems.

Anyways, here's my personal sidenote:

As I've mentioned a bajillion times before, I've predicted this AI bubble would kill AI as a concept, as its myriad harms and failures indelibly associate AI with glue pizzas, artists getting screwed, and other such awful things. After reading through this, it's clear I've failed to take into account the political elements of this bubble, and how they'd affect things.

My main prediction hasn't changed - I still expect AI as a concept to die once this bubble bursts - but I suspect that AI as a concept will be treated as an inherently fascist concept, and any attempts to revive it will face active ridicule, if not outright hostility.

[–] corbin@awful.systems 9 points 1 day ago (1 children)

Well, how do you feel about robotics?

On one hand, I fully agree with you. AI is a rebranding of cybernetics, and both fields are fundamentally inseparable from robotics. The goal of robotics is to create artificial slaves who will labor without wages or solidarity. We're all ethically obliged to question the way that robots affect our lives.

On the other hand, machine learning (ML) isn't going anywhere. In my oversimplification of history, ML was originally developed by Markov and Shannon to make chatbots and predict the weather; we still want to predict the weather, so even a complete death of the chatbot industry won't kill ML. Similarly, some robotics and cybernetics research is still useful even when not applied to replacing humans; robotics is where we learned to apply kinematics, and cybernetics gave us the concept of a massive system that we only partially see and interact with, leading to systems theory.
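The Markov lineage mentioned above is easy to make concrete: a first-order word-level Markov chain, the ancestor of today's text generators, fits in a few lines. This is a toy sketch assuming nothing beyond the Python standard library; `build_chain` and `generate` are names I made up for illustration:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words observed immediately after it."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length=10, seed=0):
    """Walk the chain from `start`, sampling a successor at each step."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: the last word was never followed by anything
            break
        out.append(rng.choice(successors))
    return " ".join(out)
```

Feed it a big enough corpus and you get plausible-sounding nonsense; the same statistical idea, scaled up enormously, is what the chatbot industry is selling.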

Here's the kicker: at the end of the day, most people will straight-up refuse to grok that robotics is about slavery. They'll usually refuse to even examine the etymology, let alone the history of dozens of sci-fi authors exploring how robots are slaves or the reality today of robots serving humans in a variety of scenarios. They fundamentally don't see that humans are aggressively chauvinist and exceptionalist in their conception of work and labor. It's a painful and slow conversation just to get them to see the word robota.

[–] bitofhope@awful.systems 15 points 1 day ago (1 children)

Good food for thought, but a lot of that rubs me the wrong way. Slaves are people, machines are not. Slaves are capable of suffering, machines are not. Slaves are robbed of agency they would have if not enslaved, machines would not have agency either way. In a science fiction world with humanlike artificial intelligence the distinction would be more muddled, but back in this reality equivocating between robotics and slavery while ignoring these very important distinctions is just sophistry. Call it chauvinism and exceptionalism all you want, but I think the rights of a farmhand are more important than the rights of a tractor.

It's not that robotics is morally uncomplicated. Luddites had a point. Many people choose to work even in dangerous, painful, degrading or otherwise harmful jobs, because the alternative is poverty. To mechanize such work would reduce immediate harm from the nature of the work itself, but cause indirect harm if the workers are left without income. Overconsumption goes hand in hand with overproduction, and automation can increase the production of things that are ultimately harmful. Mechanization has frequently led to centralization of wealth by giving one party an insurmountable competitive advantage over its competition.

One could take the position that the desire to have work performed for the lowest cost possible is in itself immoral, but that would need some elaboration as well. It's true that automation benefits capital by removing workers' needs from the equation, but it's bad reductionism to call that its only purpose. Is the goal of PPE just to make workers complain less about injuries? I bought a dishwasher recently. Did I do it in order to not pay myself wages or have solidarity for myself when washing dishes by hand?

The etymology part is not convincing either. Would it really make a material difference if more people called them "automata" or something? Čapek chose to name the artificial humanoid workers in his play after an archaic Czech word for serfdom and it caught on. It's interesting trivia, but it's not particularly telling, specifically because most people don't know the etymology of the term. The point would be a lot stronger if we called it "slavetronics" or "indenture engineering" instead of robotics. You say cybernetics is inseparable from robotics, but I don't see how steering a ship is related to the feudal mode of agricultural production.

I think the central challenge of robotics from an ethical perspective is similar to AI, in that the mundane reality is less actively wrong than the idealistic fantasy. Robotics, even more than most forms of automation, is explicitly about replacing human labor with a machine, and the advantages that machine has over people are largely due to it not having moral weight. Like, you could pay a human worker the same amount of money that electricity to run a robot would cost; it would just be evil to do that. You could work your human workforce as close to 24/7 as possible outside of designated breaks for maintenance, but it would be evil to treat a person that way.

At the same time, the fantasy of "hard AI" is explicitly about creating a machine that, within relevant parameters, is indistinguishable from a human being, and as the relevant parameters expand, the question of whether that machine ought to be treated as a person, with the same ethical weight as a human being, becomes harder. If we create Data from TNG he should probably have rights, but the main reason why anyone would be willing to invest in building Data is to have someone with all the capabilities of a person but without the moral (or legal) weight. This creates a paradox of the heap: clearly there is some point at which a reproduction of human cognition deserves moral consideration, and it hasn't (to my knowledge) been conclusively proven impossible to reach. But the current state of the field obviously doesn't have enough of an internal sense of self to merit that consideration, and I don't know exactly where that line should be drawn. If the AGI crowd took their ideas seriously this would be a point of great concern, but of course they're a derivative neofascist collection of dunces, so the moral weight of a human being is basically null to begin with, neatly sidestepping this problem.

But I also think you're right that this problem is largely a result of applying ever-improved automation technologies to a dysfunctional and unjust economic system where any improvement in efficiency effectively creates a massive surplus in the labor market. This drives down the price (i.e. how well workers are treated) and contributes to the immiseration of the larger part of humanity rather than liberating them from the demands for time and energy placed on us by the need to eat food and stuff. If we can deal with the constructed system of economic and political power that surrounds this labor it could and should be liberatory.

[–] swlabr@awful.systems 9 points 1 day ago* (last edited 1 day ago) (2 children)

A few years ago, maybe a few months after moving to the bay area, a guy from my high school messaged me on linkedin. He was also in the bay, and was wanting to network, I guess? I ghosted him, because I didn’t know him at all, and when I asked my high school friends about him, he got some bad reviews. Anyway today linkedin suggests/shoves a post down my throat where he is proudly talking about working at anthropic. Glad I ghosted!

PS/E: Anthro Pic is definitely a furry term. Is that anything?

[–] BigMuffin69@awful.systems 9 points 1 day ago* (last edited 1 day ago) (1 children)

was just in a chat room with an anthropic employee and she said, "if you have a solution for x, we are hiring" and before I could even say, "why would I want to work for a cult?" she literally started saying "some people underestimate the super exponential of progress"

To which I replied, "the only super exponential I'm seeing rn is Anthropic's negative revenue." She didn't block me, so she's a good sport, but yeah, they are all kool-aid drinkers for sure.

[–] YourNetworkIsHaunted@awful.systems 8 points 1 day ago (1 children)

Super exponential progress is one thing, but what can it do to my OT levels? Is it run by one of the Enlightened Masters? Is it responsive to Auditing Tech?

[–] o7___o7@awful.systems 3 points 8 hours ago

Different company, but GPT-5 is basically OT9, innit?

[–] bitofhope@awful.systems 8 points 1 day ago

I thought about the "anthro pic" too, but it feels like a low hanging fruit since the etymological relation of anthropic and anthropomorphic (from ancient Greek ἄνθρωπος) is so obvious.

[–] self@awful.systems 12 points 2 days ago (16 children)

so Firefox now has terms of use with this text in them:

When you upload or input information through Firefox, you hereby grant us a nonexclusive, royalty-free, worldwide license to use that information to help you navigate, experience, and interact with online content as you indicate with your use of Firefox.

this is bad. it feels like the driving force behind this are the legal requirements behind Mozilla’s AI features that nobody asked for, but functionally these terms give Mozilla the rights to everything you do in Firefox effectively without limitation (because legally, the justification they give could apply to anything you do in your browser)

I haven’t taken the rebranded forks of Firefox very seriously before, but they might be worth taking a close look at now, since apparently these terms of use only apply to the use of mainline Firefox itself and not any of the rebrands

[–] mii@awful.systems 12 points 1 day ago* (last edited 1 day ago) (1 children)

The corporate dickriding over at Reddit about this is exhausting.

When you use Firefox or really any browser, you're giving it information like website addresses, form data, or uploaded files. The browser uses this information to make it easier to interact with websites and online services. That's all it is saying.

How on Earth did I use Firefox to interact with websites and services in the last 20+ years then without that permission?

Luckily the majority opinion even over there seems to be that this sucks bad, which might be in no small part due to a lot of Firefox's remaining userbase being privacy-conscious nerds like me. So, hey, they're pissing on the boots of even more of their users and hoping no one will care. And the worst part? It will probably work, because anything Chromium-based is completely fucking useless now that they've gutted uBlock Origin (and even the projects that retain Manifest v2 support don't work as well as Firefox, especially when it comes to blocking YouTube ads), and most WebKit-based projects have either switched to Chromium or disappeared (RIP Midori).

[–] self@awful.systems 11 points 1 day ago (1 children)

tech apologists love to tell you the legal terms attached to the software you’re using don’t matter, then the instant the obvious happens, they immediately switch to telling you it’s your fault for not reading the legal terms they said weren’t a big deal. this post and its follow-up from the same poster are a particularly good take on this.

also:

When you use Firefox or really any browser, you’re giving it information

nobody gives a fuck about that, we’re all technically gifted enough to realize the browser receives input on interaction. the problem is Mozilla receiving my website addresses, form data, and uploaded files (and much more), and in fact getting a no-restriction license for them and their partners to do what they please with that data. that’s new, that’s what the terms of use cover, and that’s the line they crossed. don’t let anybody get that shit twisted — including the people behind one of the supposedly privacy-focused Firefox forks

[–] bitofhope@awful.systems 10 points 1 day ago

Hello, I am the technology understander and I'm here to tell you there is no difference whatsoever between giving your information to Mozilla Firefox (a program running on your computer) and Mozilla Corporation (a for-profit company best known for its contributions to Firefox and other Mozilla projects, possibly including a number of good and desirable contributions).

When you use Staples QuickStrip EasyClose Self Seal Security Tinted #10 Business Envelopes or really any envelope, you're giving it information like recipient addresses, letter contents, or included documents. The envelope uses this information to make it easier for the postal service to deliver the mail to its recipient. That's all it is saying (and by it, I mean the envelope's terms of service, which include giving Staples Inc. a carte blanche to do whatever they want with the contents of the envelopes bought from them).

[–] froztbyte@awful.systems 7 points 1 day ago

did some digging and apparently it's this person (the moz poster). check the patents.

mega groan

[–] flizzo@awful.systems 9 points 1 day ago

related, but tonight I will pour one out for Conkeror

[–] flizzo@awful.systems 9 points 1 day ago

NGL I always wanted to use IceWeasel just to say I did, but now it might be because it's the last bastion!

[–] nightsky@awful.systems 9 points 1 day ago

Sigh. Not long ago I switched from Vivaldi back to Firefox because it has better privacy-related add-ons. For a while now, on one machine as a test, I've been using LibreWolf, after I went down the rabbit hole of "how do I configure Firefox for privacy, including making sure it doesn't send stuff to Mozilla" and was appalled at how difficult that is. Now with this latest bullshit from Mozilla... guess I'll switch everything over to LibreWolf now, or go back to Vivaldi...
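For what it's worth, part of that rabbit hole can be sketched as a `user.js` fragment dropped into the Firefox profile directory. These are real telemetry and data-reporting prefs as far as I know, but pref names shift between releases, so treat this as a starting point rather than a complete or current list:

```js
// user.js — applied to the profile at startup.
// Turns off Firefox's own telemetry/data reporting (NOT site-level tracking).
user_pref("toolkit.telemetry.enabled", false);
user_pref("toolkit.telemetry.unified", false);
user_pref("datareporting.healthreport.uploadEnabled", false);
user_pref("datareporting.policy.dataSubmissionEnabled", false);
user_pref("app.shield.optoutstudies.enabled", false); // Shield studies
user_pref("app.normandy.enabled", false);             // remote pref experiments
```

LibreWolf's whole pitch is essentially shipping a much longer version of this file by default.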

Really hope they'll leave Thunderbird alone with such crap...

I often wish I could just give up on web browsers entirely, but unfortunately that's not practical.

[–] fasterandworse@awful.systems 10 points 2 days ago

I hate that firefox has ended up at this point: the best, by a smaller and smaller margin, of a fucking shit bunch
