this post was submitted on 31 Dec 2024
1809 points (98.0% liked)

Fuck AI

1604 readers
254 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 10 months ago
MODERATORS
 
(page 2) 50 comments
sorted by: hot top controversial new old
[–] Zacryon@feddit.org -2 points 6 days ago

Not a good argument. Applying a specific technology in a specific setting does not invalidate its use or power in other application settings. It also doesn't tell how good or bad an entire branch of technology is.

It's like saying "fuck tools", because someone tried to loosen a screw with a hammer.

[–] Nuke_the_whales@lemmy.world -2 points 6 days ago (7 children)

Tbh if I told half the doctors and top scientists in the world to take my burger order, or flip the patty, they'd fall apart and fuck it up. It's apples and oranges

load more comments (7 replies)
[–] Jimmycakes@lemmy.world 0 points 6 days ago (2 children)

This is BBC UK no ai is gonna be able to understand drunk uk mumbles. This shit works prefect in the US

[–] papertowels@mander.xyz 1 points 6 days ago

I was going to say, I went through a drive through, had the clearest exchange ever when placing my order, and it was only after pulling away and hearing the same voice at the same time that it was an LLM.

I'd personally take that over trying playing telephone on shitty mics and speakers any day. "Sorry, could you repeat that?" Etc etc.

load more comments (1 replies)
[–] kibiz0r@midwest.social 164 points 1 week ago (9 children)

In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.

But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.”

Cory Doctorow: What Kind of Bubble is AI?

[–] dance_ninja@lemmy.world 42 points 1 week ago (1 children)

AI tools like this should really be viewed as a calculator. Helpful for speeding up analysis, but you still require an expert to sign off.

[–] Frozengyro@lemmy.world 34 points 1 week ago (2 children)

Honestly anything they are used for should be validated by someone with a brain.

[–] MutilationWave@lemmy.world 2 points 6 days ago

But that's exactly what's being said. Hire one person to sign off on radiology AI doing the work of ten doctors, badly.

load more comments (1 replies)
load more comments (8 replies)
[–] Rooskie91@discuss.online 63 points 1 week ago (10 children)
[–] NaibofTabr@infosec.pub 25 points 1 week ago (2 children)

I mean... duh? The purpose of an LLM is to map words to meanings... to derive what a human intends from what they say. That's it. That's all.

It's not a logic tool or a fact regurgitator. It's a context interpretation engine.

The real flaw is that people expect that because it can sometimes (more than past attempts) understand what you mean, it is capable of reasoning.

[–] vithigar@lemmy.ca 20 points 1 week ago (3 children)

Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.

Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven't told them that and they have no idea what a lamp post is. They will just produce results like the shapes you've shown them, which generally end up looking like lamp posts.

Except the "shape" in this case is a sentence or poem or self insert erotic fan fiction, none of which an LLM "understands", it just matches the shape of what's been written so far with previous patterns and extrapolates.

load more comments (3 replies)
load more comments (1 replies)
load more comments (9 replies)
[–] finitebanjo@lemmy.world 39 points 1 week ago (5 children)

You know, OpenAI published a paper in 2020 modelling how far they were from human language error rate and it correctly predicted the accuracy of GPT 4. Deepmind also published a study in 2023 with the same metrics and discovered that even with infinite training and power it would still never break 1.69% error rate.

These companies knew that their basic model was failing and that overfitying trashed their models.

Sam Altman and all these other fuckers knew, they've always known, that their LLMs would never function perfectly. They're convincing all the idiots on earth that they're selling an AGI prototype while they already know that it's a dead-end.

load more comments (5 replies)
[–] Imgonnatrythis@sh.itjust.works 37 points 1 week ago (1 children)

Does it rat out CEO hunters though?

[–] MiDaBa@lemmy.ml 31 points 1 week ago (3 children)

That's probably it's primary function. That and maximizing profits through charging flex pricing based on who's the biggest sucker.

load more comments (3 replies)
[–] activ8r@sh.itjust.works 36 points 1 week ago (6 children)

If I've said it once, I've said it a thousand times. LLMs are not AI. It is a natural language tool that would allow an AI to communicate with us using natural language...

What it is being used for now is just completely inappropriate. At best this makes a neat (if sometimes inaccurate) home assistant.

To be clear: LLMs are incredibly cool, powerful and useful. But they are not intelligent, which is a pretty fundamental requirement of artificial intelligence.
I think we are pretty close to AI (in a very simple sense), but marketing has just seen the fun part (natural communication with a computer) and gone "oh yeah, that's good enough. People will buy that because it looks cool". Nevermind that it's not even close to what the term "AI" implies to the average person and it's not even technically AI either so...

I don't remember where I was going with this, but capitalism has once again fucked a massive technical breakthrough by marketing it as something that it's not.

Probably preaching to the choir here though...

load more comments (6 replies)
[–] Bluefalcon@discuss.tchncs.de 35 points 1 week ago* (last edited 1 week ago) (3 children)

Bitch just takes orders and you want to make movies with it? No AI wants to work hard anymore. Always looking for a handout.

load more comments (3 replies)
[–] ch00f@lemmy.world 29 points 1 week ago (1 children)

What blows my mind about all this AI shit is that these bots are “programmed” by just telling them what to do. “You are an employee working at McDonald’s” and they take it from there.

Insanity.

[–] BradleyUffner@lemmy.world 24 points 1 week ago

Yeah, all the control systems are in-band, making them impossible to control. Users can just modify them as part of the normal conversation. It's like they didn't learn anything from phone phreaking.

[–] uberdroog@lemmy.world 19 points 1 week ago (1 children)

The automated response when you pull up to multiple placed gives me the heebie jeebies. It's nonsense no one asked for.

[–] TORFdot0@lemmy.world 1 points 6 days ago

Cheery woman’s voice- “Hi will you be using your mobile app to check in today”

Me- “no thank you”

Voice of chain smoking grizzled dude who is tired of this- “Go ahead and order”

load more comments
view more: ‹ prev next ›