[–] a_wild_mimic_appears@lemmy.dbzer0.com 4 points 7 months ago* (last edited 7 months ago) (1 children)

I agree on the "part of AGI" thing - but it might be quite an important part. The sense of self is pretty interwoven with speech, and an LLM could give an AGI an "inner monologue" - or maybe something more like a "default mode network"?

If I think about how much stupid, inane stuff my inner voice produces at times... even a hallucinating or glitching LLM sounds more sophisticated than that.

[–] cynar@lemmy.world 1 points 7 months ago

Interestingly, an inner monologue isn't required for conscious thought. E.g. I've got several "inner thought streams", and only one of them uses language. It just happens that a lot of our early learning is language-based, which trains the brain to go from language to knowledge. Hijacking that circuit for self-learning is a useful method, and our inner monologue could be a side effect of that.

Also, a looping LLM is more akin to an epileptic fit than an inane inner monologue. It effectively talks gibberish at itself.
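(For the curious, a rough sketch of what "looping" means here - just feeding the model's output straight back in as its next prompt. This uses the Hugging Face `transformers` pipeline with `gpt2` as an arbitrary stand-in model; loop length and sampling settings are also arbitrary picks. The output tends to degenerate fast:)

```python
# Rough sketch of a "looping LLM": each generation is fed straight
# back in as the next prompt. Model and settings are arbitrary picks.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

text = "I was just thinking that"
for _ in range(5):
    out = generator(text, max_new_tokens=20, do_sample=True)
    text = out[0]["generated_text"]  # the output becomes the new input
    print(text)
```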

Conversely, Google's DeepDream does produce dream-like images, and it does so in a way that (we think) resembles how human dreaming works. Stable Diffusion takes this to its (current) limit.
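(Again for the curious: the core of DeepDream is just gradient ascent on the input image - whatever features the network already "sees", amplify them. A bare-bones PyTorch sketch; the VGG16 layer slice, step size, and iteration count are all arbitrary choices for illustration:)

```python
# Bare-bones DeepDream: nudge the image to maximize the activations
# of an early slice of VGG16. All hyperparameters are arbitrary picks.
import torch
from torchvision import models

layers = models.vgg16(weights="IMAGENET1K_V1").features[:20].eval()

img = torch.rand(1, 3, 224, 224, requires_grad=True)
for _ in range(20):
    loss = layers(img).norm()  # "see something? see more of it"
    loss.backward()
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
```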

Basically, an AI won't need an inner monologue to think. And any inner monologue it did have would be the product of interactions between the LLM and other subsystems, not something happening purely within the LLM itself.