Gain insights into navigating the complexities for a more responsible and balanced AI-driven future.

[–] wagesj45@kbin.social 1 points 1 year ago (1 children)

Current LLMs are tools, and you have to understand how to operate a tool to use it effectively. Swinging a hammer at a screw doesn't work; we'll learn how to use these tools eventually.

We also currently don't give these LLMs much structure to work within. What you call "making shit up" I'd call imagination. Paired with other specialized systems, it will form an important part of the whole. Your brain makes shit up all the time; it's just that you have other specialized structures that take that imagined thought, process it, and put it to constructive ends. Every time you do a Google search to fact-check something, part of your mind has to imagine what you might be looking for so that you can then go find it. AI systems will eventually be able to do the same thing.
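As a rough illustration of that pattern, here is a minimal sketch: a generator "imagines" a search query, a retrieval stage fetches evidence, and a verifier decides whether the claim holds. Every function below is a hypothetical stand-in, stubbed so the example runs; none of this is a real model or search API.

```python
# Hypothetical sketch: an imaginative generator paired with specialized
# retrieval and verification stages. All names are illustrative stand-ins.

def imagine_query(claim: str) -> str:
    """Stand-in for an LLM that guesses what to search for.
    A real system would prompt a language model here."""
    return " ".join(claim.lower().split()[:6])

def retrieve(query: str) -> list[str]:
    """Stand-in for a search or vector-store backend.
    Returns canned text so the sketch runs end to end."""
    return ["water boils at 100 degrees celsius at sea level"]

def verify(claim: str, documents: list[str]) -> bool:
    """Stand-in for a verifier: crude keyword overlap here,
    another specialized model call in a real system."""
    words = set(claim.lower().split())
    return any(len(words & set(doc.split())) >= 3 for doc in documents)

def fact_check(claim: str) -> bool:
    # Imagination proposes; the structure around it disposes.
    query = imagine_query(claim)
    return verify(claim, retrieve(query))

print(fact_check("water boils at 100 degrees celsius"))
```

The point of the sketch is the division of labour: the imaginative step is allowed to guess freely, and the specialized stages around it do the constraining.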

[–] JoBo@feddit.uk 0 points 1 year ago (1 children)

Oh stop it.

It's a high-tech magic 8-ball. It can only regurgitate plausible-sounding text based on what has been said before. It cannot create anything new. It doesn't understand anything. It's just a parlour trick.

[–] wagesj45@kbin.social 1 points 1 year ago (1 children)

I doubt you'd consider any particular clump of specialized neurons in your head a sentient being either, on its own. And yet when structured in particular ways, impressive things come out of the collective. I'm not talking about a single LLM achieving consciousness; that should be obvious from my comment. But if you want to be a contrarian just for the sake of being a contrarian, I can't stop you.

[–] JoBo@feddit.uk 0 points 1 year ago (1 children)
[–] wagesj45@kbin.social 1 points 1 year ago (1 children)

I'm trying to make the point that these AI models are tools with specific purposes and functions, and that when combined and structured in novel ways, their "cognition" will improve and come to emulate our own. You are stuck on calling LLMs, as they exist today, parlor tricks.
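If it helps, here is the shape of what I mean as a toy sketch: an unconstrained generator wrapped in specialized validators that accept or reject its drafts. The component names are made up for illustration, not a real framework.

```python
# Toy sketch of structuring components: a free-wheeling generator whose
# drafts must pass specialized validators before being accepted.
# Everything here is a hypothetical stand-in, not a real framework.

from dataclasses import dataclass
from typing import Callable

@dataclass
class StructuredSystem:
    generate: Callable[[str], str]           # imaginative, unconstrained
    validators: list[Callable[[str], bool]]  # specialized, constraining

    def run(self, prompt: str, attempts: int = 3) -> str | None:
        for _ in range(attempts):
            draft = self.generate(prompt)
            if all(check(draft) for check in self.validators):
                return draft   # every specialized module signed off
        return None            # no draft survived the surrounding structure

# Stub components so the sketch runs end to end.
system = StructuredSystem(
    generate=lambda p: f"a plausible answer to: {p}",
    validators=[lambda d: len(d) < 200, lambda d: "answer" in d],
)
print(system.run("why do bridges stay up?"))
```

Individually, neither the generator nor any single validator is impressive; the behaviour of interest comes from the arrangement.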

[–] JoBo@feddit.uk 0 points 1 year ago (1 children)

There are excellent uses of 'AI'. These systems can be very good at doing a vast quantity of repetitive, deterministic tasks very fast. But they can't apply judgement, deal with nuance, or understand context. They're just never going to be able to do what you want them to do. The idea that they can is an illusion. An accidental illusion, for sure. But an illusion all the same.

[–] wagesj45@kbin.social 1 points 1 year ago

If, when properly structured and interconnected, they're still an illusion, then so is human intelligence. There's no magic sauce or ghost in the shell in human cognition, sorry.

And again, you ignore my argument and put another in my mouth. I'm talking about a sum of networks, not individual ones. You're arguing in bad faith, which is tiresome.