[–] kibiz0r@midwest.social 52 points 1 day ago (2 children)

Tim Harford mentioned this in his 2016 book “Messy”.

They just wanna call it AI and make it sound like some mysterious intelligence we can’t comprehend.

[–] frezik@midwest.social 3 points 1 hour ago* (last edited 1 hour ago)

It sorta is.

A key way that human intelligence works is to break a problem down into smaller components that can be solved individually. This is in part due to the limited computational ability of the human brain; there's not enough there to tackle the complete problem.

However, there's no particular reason AI would need to be limited that way, and it often isn't. Expert Go players see this in AI built for that game. The AI tends to make all sorts of moves early on that don't seem to follow the usual logic, because it has laid out the complete game in its "head" and is going directly for the goal. At this point, Go is basically impossible for humans to win against the best AIs.

This is a different kind of intelligence than we're used to, but there's no reason to discount it as invalid.

See the paper Understanding Human Intelligence through Human Limitations

[–] rottingleaf@lemmy.world 2 points 9 hours ago (1 children)

Except we can't build something we can't comprehend and still have it work.

The problem here is that the people with the power to direct funds are, more often than not, utterly ignorant of how to build anything.

I think where all this is generally headed is a society, like in Asimov's Foundation or Plato's Republic (with an additional step), where the people competent at building things are reduced to a small caste, most of them with local rather than professional competencies, like priests, and with a techno-religion centered on that "AI". This is a hierarchical structure very vulnerable to, well, that kind of powerful person.

The majority will work non-essential jobs (like in Heinlein's Door Into Summer) which give them no power of any kind, the soldier caste will run the military, the builder caste will run the technology, and the philosopher caste will be those powerful people. The difference from Plato is in having that first group, which does not fit into any main caste. Under Plato they would all be builder (worker) caste, but that would create a problem for the attempt to make it a religion and a hierarchical, monopolized structure. The builder caste has to stay small.

You might see a whole lot of problems with that idea (which still seems to be what's being attempted); that's because the people it comes from don't understand how civilization works, and that instruments change the rules constantly, not just up to the point they can understand.

Recommended reading: Jodorowsky’s Technopriests

[–] clucose@lemmy.ml 84 points 1 day ago (1 children)

It is possible for AI to hallucinate elements that don't work, at least for now. This requires some level of human oversight.

So, the same as LLMs, and they got lucky.

[–] ATDA@lemmy.world 4 points 11 hours ago (1 children)

It's like putting a million monkeys in a writers' room, but supercharged on meth and consuming insane resources.

[–] john89@lemmy.ca -3 points 4 hours ago (2 children)

That monkey analogy is so far removed from reality that I think less of anyone who perpetuates it.

A room full of monkeys banging on keyboards will always generate gibberish, because they're fucking monkeys.

[–] surewhynotlem@lemmy.world 3 points 3 hours ago

It would work if it were apes though.

Source: it did. Shakespeare existed.

[–] ubergeek@lemmy.today 2 points 3 hours ago* (last edited 3 hours ago) (2 children)

It's a probability thing, and it's actually true, due to pure chance. It's the same reason it's nigh impossible for our planet to be the only one in the cosmos with intelligent life. It's also the reason any finite digit pattern can eventually be found in the digits of pi (assuming pi is normal, which is widely believed but still unproven).

[–] john89@lemmy.ca -2 points 1 hour ago (1 children)

It's absolutely not a probability thing, nor is it true, nor is it the same reason for any of those other things.

You need to learn to stop letting others do your thinking for you. Open your eyes.

[–] ubergeek@lemmy.today 1 points 1 hour ago (1 children)

It very much is a probability thing... A pool of monkeys smashing on keyboards for an infinite amount of time will eventually generate every possible combination of text. The longer the process goes on, the closer the probability gets to 1.
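
For what it's worth, the complement rule makes that concrete. A minimal Python sketch, where the single-attempt probability `p` is a made-up stand-in:

```python
# If a single random attempt produces the target text with probability
# p > 0, the chance of at least one hit in n attempts is 1 - (1 - p)**n,
# which approaches 1 as n grows.
p = (1 / 26) ** 6  # e.g. typing "hamlet" in six random lowercase keystrokes
for n in (10**6, 10**9, 10**12):
    print(f"{n:>13} attempts: {1 - (1 - p) ** n:.6f}")
```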

You need to learn to stop letting others do your thinking for you. Open your eyes.

Ibid.

[–] intensely_human@lemm.ee 1 points 3 hours ago (1 children)

AI does not operate on pure chance. It’s a bad analogy.

[–] ubergeek@lemmy.today 2 points 1 hour ago

A lot of it is, in fact, deterministic probability...
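
As a sketch of what that means in practice (the distribution below is made up, not from any particular model): the model outputs probabilities, and the decoding step decides whether the behavior is deterministic.

```python
import random

# Toy next-token distribution; the numbers are illustrative.
probs = {"wire": 0.6, "clock": 0.3, "cat": 0.1}

def greedy(dist):
    # Deterministic: same input, same output, every time.
    return max(dist, key=dist.get)

def sample(dist):
    # Stochastic: draws according to the probabilities.
    return random.choices(list(dist), weights=dist.values())[0]

print(greedy(probs))   # always "wire"
print(sample(probs))   # varies from run to run
```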

[–] RedWeasel@lemmy.world 42 points 1 day ago (11 children)

This isn't exactly new. I heard a few years ago about a case where the AI put wires on the chip that shouldn't have done anything, since they didn't go anywhere, but if you removed them the chip stopped working correctly.

[–] intensely_human@lemm.ee 3 points 3 hours ago

So the wires did something

[–] rezifon@lemmy.world 7 points 13 hours ago* (last edited 13 hours ago) (1 children)

It may interest you to know that the switch still exists. https://github.com/PDP-10/its/issues/1232

[–] drosophila@lemmy.blahaj.zone 51 points 1 day ago (1 children)

That was a different technique, using simulated evolution in an FPGA.

An algorithm would create a series of random circuit designs, program the FPGA with them, then evaluate how well each one accomplished a task. It would then take the best design, create a series of random variations on it, and select the best one. Rinse and repeat until the circuit is really good at performing the task.
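
In code, that loop is roughly a (1+λ) evolutionary strategy. A minimal Python sketch, with a toy fitness function standing in for actually programming the FPGA and measuring the circuit:

```python
import random

GENOME_BITS = 64        # stand-in for the FPGA configuration bits
POP_SIZE = 20           # random variations tried per generation
MUTATION_RATE = 0.02    # chance of flipping each bit
GENERATIONS = 200

def random_design():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def mutate(design):
    # Create a random variation on a design by flipping bits.
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in design]

def fitness(design):
    # Toy stand-in for "evaluate how well the circuit does the task";
    # the real experiment measured the programmed FPGA's behavior.
    return sum(design)

best = max((random_design() for _ in range(POP_SIZE)), key=fitness)
for _ in range(GENERATIONS):
    variants = [mutate(best) for _ in range(POP_SIZE)]
    best = max(variants + [best], key=fitness)  # keep the fittest design

print("best fitness:", fitness(best))
```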

[–] RedWeasel@lemmy.world 7 points 1 day ago (1 children)

I think that's what I was thinking of. Kind of a predecessor of modern machine learning.

[–] CommanderCloon@lemmy.ml 13 points 1 day ago (1 children)

It is a form of machine learning

[–] barsoap@lemm.ee 3 points 4 hours ago

Which is just stochastic optimisation.

Which, yes, is exactly what evolution does, big picture. Small picture, the genome evolves a bit more intelligently: not pure random generation and filtering, but an algorithm that employs randomness to generate, followed by the usual survival filter, because doing it that way is, well, fitter. It's also what you can see under a microscope.

[–] CandleTiger@programming.dev 25 points 1 day ago (2 children)

I don’t know about AI involvement but this story in general is very very old.

http://www.catb.org/jargon/html/magic-story.html

[–] massive_bereavement@fedia.io 10 points 1 day ago (2 children)

I thought of this as well. In fact, as a bit of fun I added a switch to a rack at our lab in a similar way, with the same labels. This one, though, does nothing. But people did push the "turbo" button on old PC boxes, despite how often those buttons weren't connected.

[–] ReallyActuallyFrankenstein@lemmynsfw.com 2 points 10 hours ago (1 children)

Some weren't connected? For most PCs that had it, it was a real thing, though counterintuitive marketing-speak: enabling "turbo" just meant normal speed, and disabling it ran the machine in a slower mode for compatibility.

https://en.wikipedia.org/wiki/Turbo_button

[–] massive_bereavement@fedia.io 2 points 7 hours ago

After the 486, there were Pentiums built at shops that still used 486 cases. In my experience, the button wasn't plugged in.

[–] Gormadt@lemmy.blahaj.zone 9 points 1 day ago

My turbo button was connected to an LED but that was it

[–] fl42v@lemmy.ml 8 points 1 day ago

Yeah, I probably stumbled upon that one a while back too. Was it also the one where the initial designs refused to work outside room temperature 'til the AI was asked to take temperature into account?

[–] db2@lemmy.world 10 points 1 day ago (2 children)

Sounds like RF reflection used like a data capacitor or something.

[–] piecat@lemmy.world 2 points 52 minutes ago

Yeah, that probably sounds so unintuitive and weird to anyone who has never worked with RF.

[–] GreyEyedGhost@lemmy.ca 11 points 1 day ago

The particular example was getting clock-like behavior without a clock. It had an incomplete circuit that used RF reflection or something very similar to simulate a clock. Of course, removing this dead-end circuit broke the design.

[–] Lettuceeatlettuce@lemmy.ml 38 points 1 day ago (7 children)

"We are coming up with structures that are complex and look randomly shaped, and when connected with circuits, they create previously unachievable performance. Humans cannot really understand them, but they can work better."

Great, so we will eventually have black box chips running black box algorithms for corporations where every aspect of the tech is proprietary and hidden from view with zero significant oversight by actual people...

The true cyber-dystopia.

[–] KeenFlame@feddit.nu -1 points 5 hours ago (1 children)

Man, so you have personally vetted all the code your devices execute? It's already true.

[–] Lettuceeatlettuce@lemmy.ml 1 points 4 hours ago (1 children)

The point is that it actually can be vetted.

[–] KeenFlame@feddit.nu 0 points 3 hours ago (1 children)

But... it already can't be. That's not possible for you. Is that actually something you chose to downvote and ignore instead of responding to?

[–] Lettuceeatlettuce@lemmy.ml 1 points 3 hours ago

You must be a bot, you don't understand the semantics. Ironic, and blocked.

[–] Doorbook@lemmy.world 5 points 23 hours ago

This has been going on in chess for a while as well. Computers can detect patterns that humans cannot because they have a better memory and knowledge base.

[–] Flaqueman@sh.itjust.works 13 points 1 day ago (6 children)

See? I want this kind of AI. Not a word-dreaming algorithm that spews misinformation.

[–] KeenFlame@feddit.nu 1 points 5 hours ago

They're all of the same breed, and it's an ongoing field of study. The megacorps have soiled the use of them, but they're still extremely strong support tools for some things, like detecting cancer on X-rays and stuff.

[–] brlemworld@lemmy.world 4 points 21 hours ago

I want AI that takes a foreign-language movie, adjusts the actors' faces and mouths so it looks like they're speaking my language, and also changes their voices (not a voice-over) to be in my language.

[–] FourPacketsOfPeanuts@lemmy.world 16 points 1 day ago (3 children)

Read the article: it's still "dreaming" and spewing garbage, it's just that in some iterations it got lucky. "Human oversight needed," they say. The AI has no idea what it's doing.

[–] Flaqueman@sh.itjust.works 16 points 1 day ago

Yeah I got that. But I still prefer "AI doing science under a scientist's supervision" over "average Joe can now make a deepfake and publish it for millions to see and believe"

[–] Dkarma@lemmy.world 10 points 1 day ago (3 children)

This is what most AI is. GPT models are a tiny subset.

[–] fl42v@lemmy.ml 4 points 1 day ago

Idk, kinda the same, but instead of misinformation we get ICs that release a cloud of smoke in the shape of a cat when presented with a specific pattern of inputs (or smth equally batshit crazy).
