FunkyStuff

joined 3 years ago
[–] FunkyStuff@hexbear.net 19 points 1 year ago

You realize that even in the wildest fantasies of Xi Jinping's biggest fans, every single Western capitalist country is generations away from revolution? We can do left unity because 0.000% of communism or anarchism has been built, and the early stages of building either involve the same goal: getting workers to stand in solidarity with each other instead of fighting over petty, trivial nonsense like "tankies" online. So-called "authoritarians" pose literally zero harm to other leftists. In fact, the same leftists who clutch their pearls over "tankies" tend to be the ones defending imperialism and the imposition of Western economic hegemony on the imperial periphery (as long as they get healthcare!). What is more authoritarian and useful-idiot-esque than that?

[–] FunkyStuff@hexbear.net 52 points 1 year ago (2 children)

communism made lemmy so where does that leave us

[–] FunkyStuff@hexbear.net 39 points 1 year ago (2 children)

The main thing I learned about the 20th century that made it all make sense is that the leadership of the Western Allied countries all believed their countries fought on the wrong side of WW2. You don't need to look any further than their actions in Latin America to support that claim.

[–] FunkyStuff@hexbear.net 3 points 1 year ago (1 children)

Agreed. I don't expect it to break absolutely everything, but I expect that software development is going to get very hairy when you have to use whatever bloated mess AI is creating.

[–] FunkyStuff@hexbear.net 6 points 1 year ago (5 children)

It won't be long (maybe 3 years max) before industry adopts some technique for automatically prompting an LLM to generate code to fulfill a certain requirement, then iteratively improving it against test data until it passes all test cases. And I'm pretty sure there are already ways to get LLMs to generate test cases. So this could go nightmarishly wrong very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to "spam" everywhere, so to speak. These things are way dumber than we give them credit for.
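The loop I'm imagining is basically this: generate, run the tests, feed the failure back in, repeat. Here's a toy sketch of it; `call_llm` is a made-up stand-in (stubbed so the control flow runs standalone), not any real model API:

```python
# Toy sketch of a generate-and-test loop for LLM code generation.
# call_llm is a stub standing in for a real model API.

def call_llm(prompt):
    # Stub: pretend the model gets a trivial "add two numbers" task
    # wrong on the first try and right once told it failed.
    if "failed" in prompt:
        return "def solution(a, b):\n    return a + b"
    return "def solution(a, b):\n    return a - b"  # first attempt: buggy

def passes(code, tests):
    # Execute the generated code and check it against the test cases.
    ns = {}
    exec(code, ns)
    return all(ns["solution"](*args) == want for args, want in tests)

def generate_until_passing(spec, tests, max_iters=5):
    # Re-prompt with failure feedback until the code passes or we give up.
    prompt = spec
    for _ in range(max_iters):
        code = call_llm(prompt)
        if passes(code, tests):
            return code
        prompt = spec + " (previous attempt failed tests)"
    return None

tests = [((1, 2), 3), ((0, 5), 5)]
result = generate_until_passing("write solution(a, b) returning a+b", tests)
```

Note that nothing in the loop penalizes bloat: any code that passes the tests is accepted, which is exactly how you'd end up shipping whatever mess happens to work.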

[–] FunkyStuff@hexbear.net 4 points 1 year ago (2 children)

That's a pretty interesting idea that I hadn't heard elsewhere. Do you know if there's been any research into making an AI model learn that way?

In my own time messing around with some ML stuff, I've heard of approaches where you try to get the model to accomplish progressively more complex tasks in the same domain. For example, if you wanted to train a model to control an agent in a physics simulation to walk like a humanoid, you'd have it learn to crawl first, like a real human. I guess for an AGI it makes sense that you would have it try to learn a model of the world across different domains like vision or sound. Heck, since you can plug any kind of input into it, you could have it process radio, infrared, whatever else. That way it could have a very complete model of the world.
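The "crawl before you walk" idea is basically curriculum learning: run the same training loop over stages of increasing difficulty, carrying the learned parameters forward from one stage to the next. A toy sketch of just the scheduling part (the "training" here is a fake scalar update, not a real RL loop):

```python
# Toy sketch of curriculum learning: train through progressively
# harder stages of one domain, reusing parameters between stages.
# train_stage is a fake stand-in for a real training loop.

def train_stage(params, difficulty):
    # Fake training: nudge a single "skill" scalar up until it
    # matches the stage's difficulty, starting from where the
    # previous stage left off.
    skill = params["skill"]
    while skill < difficulty:
        skill += 0.5  # pretend each step of training improves the policy
    return {"skill": skill}

def curriculum(stages):
    params = {"skill": 0.0}
    for difficulty in stages:  # e.g. crawl -> stand -> walk
        params = train_stage(params, difficulty)
    return params

final = curriculum([1.0, 2.0, 3.0])  # each stage builds on the last
```

The key design point is that each stage starts from the previous stage's parameters instead of from scratch, which is what makes the easy tasks a stepping stone rather than wasted compute.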