this post was submitted on 15 May 2024
657 points (100.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] EatATaco@lemm.ee 3 points 6 months ago (3 children)

Why is that a criticism? This is how it works for humans too: we study, we learn the stuff, and then try to recall it during tests. We've been trained on the data too; neither a human nor an AI would be able to do well on the test without learning the material first.

This is part of what makes AI so "scary": it can basically know so much.

[–] Soyweiser@awful.systems 23 points 6 months ago (2 children)

Don't anthropomorphise. There is quite a difference between a human and an advanced lookup table.

[–] mawhrin@awful.systems 18 points 6 months ago (3 children)

LLMs know nothing. literally. they cannot.

[–] Amoeba_Girl@awful.systems 15 points 6 months ago (1 children)

Yeah, but neither did Socrates

[–] dgerard@awful.systems 17 points 6 months ago

but he at least was smug about it

[–] exanime@lemmy.today 11 points 6 months ago (2 children)

Because a machine that "forgets" stuff it reads seems rather useless... considering it was a multiple-choice exam and, as a machine, ChatGPT had the book entirely memorized, it should have scored perfectly almost every time.

[–] EatATaco@lemm.ee -1 points 6 months ago (1 children)

ChatGPT had the book entirely memorized

I feel like this exposes a fundamental misunderstanding of how LLMs are trained.

[–] TachyonTele@lemm.ee 12 points 6 months ago

They're autocomplete machines. All they fundamentally do is match words together. If it was trained on the answers and still couldn't reproduce the correct word matches, it failed.
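The "autocomplete machine" framing above can be illustrated with a toy sketch. This is not how an LLM actually works (real models learn probability distributions over tokens with neural networks, not raw word counts), but it shows the predict-the-next-word idea the commenter is gesturing at: count which word follows which in a corpus, then always emit the most common successor. The corpus and function names here are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count, for each word, which words follow it and how often.
successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def complete(word, length=4):
    """Greedily 'autocomplete': repeatedly append the most common successor."""
    out = [word]
    for _ in range(length):
        if word not in successors:
            break  # dead end: this word never appeared with a successor
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(complete("the"))
```

Even this crude version shows why "it memorized the book" is an oversimplification: the model stores statistical associations between words, not a verbatim copy it can look up, which is why training on a text does not guarantee reproducing it.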