this post was submitted on 06 Aug 2023
266 points (92.4% liked)

Technology


Yes! This is a brilliant explanation of why language use is not the same as intelligence, and why LLMs like ChatGPT are not intelligent. At all.

[–] Spzi@lemm.ee 4 points 1 year ago (1 children)

I can mostly follow; I just want to exclude the last paragraph, which contains assumptions about a black box.

That being said, how is the human brain different from what you describe?

[–] fidodo@lemm.ee -1 points 1 year ago (1 children)

Do you think by processing probabilistic associations between word sequences? Humans think through world models: we have imagination, a physical and metaphysical simulation of the world around us. Absolutely none of that is involved in how LLMs work. There's a lot to be said for the utility of the associations of knowledge embedded in symbols, and having a magic book that can retrieve pre-existing information in context is incredibly useful; I think it will have an impact on the level of the printing press and the internet. But just because it's incredibly useful at retrieving knowledge doesn't mean it works anything like a human brain does.

[–] Spzi@lemm.ee 4 points 1 year ago (1 children)

Sorry, I could have been more clear. I did not mean to equate current LLMs with human brains. The question was rather:

Can't we describe the workings of (other) human brains in a fashion very similar to your description above? Or where exactly is the difference that sets us apart?

world models, we have imagination, a physical and metaphysical simulation of the world around us

AIs which can and need to interact with the physical world have those, too. Naturally, an AI which is restricted to language has much less necessity and opportunity to develop these, much like our brain area for smell is probably not so good at estimating velocities and catching a ball.

I think your approach of demystifying technology is valid and worthwhile. I'm just not sure it does what you may think it does: highlight the difference from our intelligence.

[–] fidodo@lemm.ee 1 points 1 year ago

We know the math and the mechanisms of how LLMs work. The only thing we don't understand is the significance and capabilities of the probabilistic associations they ascribe to symbol sequences.

While we don't know how a human brain works in detail, we do know how a human brain tackles problem solving because we're sentient beings and we can be introspective about how we think through a problem.

We can look at how vectors flow through a neural network (remember, LLMs don't even have a concept of words; they transform tokens into vectors and then build mathematical associations between them, so it's all numbers), and we can see from the data that there's nothing resembling a world simulation in how they actually work.
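To make that token-to-vector step concrete, here's a minimal sketch. The vocabulary, embedding size, and numbers are all invented for illustration; real models use learned embedding tables with tens of thousands of tokens, but the principle is the same: the network only ever sees vectors of numbers, never words.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny made-up vocabulary mapping tokens to integer IDs.
vocab = {"the": 0, "cat": 1, "sat": 2}

# In a real model this table is learned during training;
# here it's just random 4-dimensional vectors.
embedding_table = rng.normal(size=(len(vocab), 4))

def embed(tokens):
    """Look up the vector for each token ID; from here on, it's all math."""
    ids = [vocab[t] for t in tokens]
    return embedding_table[ids]

vectors = embed(["the", "cat", "sat"])
print(vectors.shape)  # (3, 4): three tokens, each a 4-dimensional vector
```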

Also keep in mind that the LLMs you interact with don't even learn from your interactions; the data is all baked in at training time. If you turn the temperature of the LLM's output generation down to zero, it will produce the same highest-probability answer every time. The more you learn about how they work under the hood, the clearer it becomes that there is no there there when it comes to sentience.

I will say that I do think the capabilities and significance of symbol association and pattern matching have been wildly underestimated. Word sequences need to follow a pattern to make sense, and if you stumble upon the right sequence of words, it could be incredibly impactful; it doesn't really matter how you come up with it. If you were to pull words out of a hat at random, there's an infinitesimally small chance you'd get a sequence of words that happens to expose the secrets of the universe. LLMs improve on that immensely in that they use probability to reduce the sequence space to the set of word sequences that make sense, and within that reduced space are generative sequences that may produce real value. We can keep making that space more and more relevant and useful.
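As a rough sketch of that "reducing the sequence space" idea, compare pulling words out of a hat with a hypothetical bigram table that only allows plausible continuations. The vocabulary and bigram table are invented for illustration; real LLMs condition on the whole context, not just the previous word, but the contrast is the same.

```python
import random

# Hypothetical bigram table: which words plausibly follow which.
bigrams = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "sat": ["down", "still"],
}
vocab = ["the", "cat", "sat", "down", "ran", "dog", "still", "banana"]

random.seed(42)

def random_sequence(n):
    """Words pulled out of a hat: almost never coherent."""
    return [random.choice(vocab) for _ in range(n)]

def bigram_sequence(start, n):
    """Restrict each choice to words that plausibly follow the last one,
    collapsing the search space onto sequences that make sense."""
    seq = [start]
    for _ in range(n - 1):
        options = bigrams.get(seq[-1])
        if not options:
            break
        seq.append(random.choice(options))
    return seq

print(random_sequence(4))
print(bigram_sequence("the", 4))  # e.g. ['the', 'cat', 'sat', 'down']
```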