this post was submitted on 07 Oct 2024
119 points (100.0% liked)
Technology
Possible or not, I don't think we'll get to the point of AGI. I'm pretty sure at some point someone will do something monumentally stupid with AI that will wipe out humanity.
Like wrecking the biosphere in its pursuit.
Maybe. But I have a feeling it'll be a dumb single mistake that'll make someone say "ah, shit" just before we're wiped out.
When the Soviets trained anti-tank dogs in WW2 they did so on tanks that weren't running to save fuel: "Their deployment revealed some serious problems... In the field, the dogs refused to dive under moving tanks." https://en.m.wikipedia.org/wiki/Anti-tank_dog
History is littered with these kinds of mistakes. It would only take one military AI with access to autonomous weapons to have a similar issue in its training data to potentially kill us all.
Why in God's name would we put weapons that pose a legitimate threat to the whole of humanity under the control of an AI? I just don't think this one sounds plausible.