this post was submitted on 09 Aug 2023
197 points (94.6% liked)

Technology

[–] FunkyStuff@hexbear.net 4 points 1 year ago (2 children)

You have a pretty interesting idea that I hadn't heard elsewhere. Do you know if there's been any research into making an AI model learn that way?

In my own time messing around with some ML stuff, I've heard of approaches where you try to get the model to accomplish progressively more complex tasks within the same domain. For example, if you wanted to train a model to control an agent in a physics simulation so it walks like a humanoid, you'd have it learn to crawl first, like a real human. I guess for an AGI it makes sense that you'd have it learn a model of the world across different domains like vision or sound. Heck, since you can plug any kind of input into it, you could have it process radio, infrared, whatever else. That way it could build a very complete model of the world.
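To make that concrete, here's a rough sketch of the kind of curriculum loop I mean. Everything in it is a toy stand-in: the "environment" and "training" step are fake, and the task names and promotion threshold are just made up for illustration, not taken from any real setup.

```python
# Toy curriculum loop: train on progressively harder tasks in the same
# domain, promoting to the next task once performance clears a threshold.
# The "environment" and "training" below are fake stand-ins, not a real
# physics simulator or RL algorithm.

import random

CURRICULUM = ["crawl", "stand", "walk"]   # hypothetical task stages
PROMOTION_SCORE = 0.8                     # advance once average reward clears this

def train_one_episode(policy, task):
    """Pretend to run one episode and return a reward in [0, 1].

    A real version would roll the policy out in the simulator and
    return the episode's return.
    """
    policy[task] = policy.get(task, 0.0) + random.uniform(0.0, 0.05)
    return min(policy[task], 1.0)

def train_with_curriculum(episodes_per_eval=20):
    policy = {}                            # stand-in for network weights
    for task in CURRICULUM:
        avg = 0.0
        while avg < PROMOTION_SCORE:
            rewards = [train_one_episode(policy, task) for _ in range(episodes_per_eval)]
            avg = sum(rewards) / len(rewards)
        print(f"promoted past '{task}' with average reward {avg:.2f}")
    return policy

if __name__ == "__main__":
    train_with_curriculum()
```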

[–] GBU_28@lemm.ee 2 points 1 year ago

People are working on this: model-vets-model, recursive action chain type stuff, yeah.
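Very roughly, that loop might look something like this. Both models below are random stubs, and the task string and threshold are made up; in practice each call would go out to an actual trained model.

```python
# Toy "model vets model" loop: one model proposes an action, a second model
# reviews it, and the proposal is retried until the reviewer accepts it or a
# budget runs out. Both models here are random stubs; in practice each call
# would go to a trained model.

import random

def proposer(task):
    # Stand-in for a generator model producing a candidate action.
    return f"{task}: plan v{random.randint(1, 100)}"

def reviewer(candidate):
    # Stand-in for a critic model scoring the candidate in [0, 1].
    return random.random()

def vetted_action(task, threshold=0.8, max_rounds=10):
    for attempt in range(1, max_rounds + 1):
        candidate = proposer(task)
        score = reviewer(candidate)
        if score >= threshold:
            break
    return candidate, score, attempt

if __name__ == "__main__":
    action, score, rounds = vetted_action("navigate to the kitchen")
    print(f"settled on '{action}' (score {score:.2f}) after {rounds} round(s)")
```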

[–] yogthos@lemmy.ml -2 points 1 year ago

I've seen variations of this idea discussed in a few places, and there's a bunch of research happening around embodied reinforcement learning. A few prominent examples here

What you're describing is very similar to stuff people have done with genetic algorithms where the agents evolve within the environment; the last link above focuses on that approach. I definitely think this is a really promising approach, because that way we don't have to figure out the algorithm that produces intelligence ourselves, we can just wait for one to evolve instead.
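A bare-bones sketch of that evolutionary loop could look like the following. The fitness function is a toy stand-in for how well an agent actually does in its environment, and the population size, mutation scale, and target vector are arbitrary choices for illustration.

```python
# Toy evolutionary loop: a population of agents (plain parameter vectors here)
# is scored, the fittest are kept, and mutated copies of them fill the next
# generation. The fitness function is a stand-in for how well an agent would
# actually do in its environment.

import random

TARGET = [0.2, -0.5, 0.9, 0.1]   # hypothetical "ideal" behaviour parameters

def fitness(genome):
    # Higher is better: negative squared distance to the toy target.
    return -sum((g - t) ** 2 for g, t in zip(genome, TARGET))

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0.0, sigma) for g in genome]

def evolve(pop_size=50, generations=100, elite_frac=0.2):
    population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(pop_size)]
    n_elite = max(1, int(pop_size * elite_frac))
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        elites = population[:n_elite]
        # Refill the rest of the population with mutated copies of the elites.
        population = elites + [mutate(random.choice(elites)) for _ in range(pop_size - n_elite)]
    population.sort(key=fitness, reverse=True)
    return population[0]

if __name__ == "__main__":
    best = evolve()
    print("best genome:", [round(g, 2) for g in best], "fitness:", round(fitness(best), 4))
```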

And yeah, it's going to be really exciting to plug AI models into different kinds of sensory data, and it doesn't even have to be physical-world data. You could plug a model into any stream of data that has temporal patterns in it, for example weather data, economic activity, or whatever, and it will end up building a predictive model of that stream. I really think this is going to be the way forward for making actual AGI.
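Just to illustrate the idea of learning a predictive model of an arbitrary temporal stream, here's a toy sketch: a sine-plus-noise series stands in for weather readings, economic indicators, or any other time-indexed signal, and a tiny autoregressive model is fit to predict the next value. The window size and learning rate are arbitrary, and a real system would obviously use something far more capable than a linear model.

```python
# Toy next-step predictor for an arbitrary temporal stream: a tiny linear
# autoregressive model is fit by online gradient descent on a synthetic
# sine-plus-noise series. The series is a stand-in for weather readings,
# economic indicators, or any other time-indexed signal.

import math
import random

def make_stream(n=2000):
    return [math.sin(0.1 * t) + random.gauss(0.0, 0.05) for t in range(n)]

def train_predictor(stream, window=8, lr=0.01, epochs=5):
    weights = [0.0] * window
    bias = 0.0
    for _ in range(epochs):
        for t in range(window, len(stream)):
            context = stream[t - window:t]
            pred = bias + sum(w * x for w, x in zip(weights, context))
            err = pred - stream[t]
            # Gradient step on the squared prediction error.
            bias -= lr * err
            weights = [w - lr * err * x for w, x in zip(weights, context)]
    return weights, bias

def predict_next(weights, bias, recent):
    return bias + sum(w * x for w, x in zip(weights, recent))

if __name__ == "__main__":
    stream = make_stream()
    w, b = train_predictor(stream)
    window = len(w)
    print("predicted next value:", round(predict_next(w, b, stream[-window:]), 3))
    print("noiseless continuation:", round(math.sin(0.1 * len(stream)), 3))
```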