this post was submitted on 02 Oct 2024
238 points (98.8% liked)

News

[–] ArbitraryValue@sh.itjust.works -4 points 1 month ago* (last edited 1 month ago) (4 children)

a Ghost Robotics Vision 60 Quadrupedal-Unmanned Ground Vehicle, or Q-UGV, armed with what appears to be an AR-15/M16-pattern rifle on a rotating turret undergoing "rehearsals" at the Red Sands Integrated Experimentation Center in Saudi Arabia

They're not being used in combat.

With that aside, I appear to be the only one here who thinks this is a great idea. AI can make mistakes, but the goal isn't perfection. It's just to make fewer mistakes than a human soldier does. (Or at least fewer mistakes than a bomb does, which is really easy.)

Plus, automation can address the problem Western countries have with unconventional warfare, which is that Western armies are much less willing to have soldiers die than their opponents are. Sufficiently determined guerrillas who can tolerate high losses can inflict slow but steady losses on Western armies until the Western will to fight is exhausted. If robots can take the place of human infantry, the advantage shifts back from guerrillas to countries with high-tech manufacturing capability.

[–] dank@lemmy.today 2 points 1 month ago

Imperialism go brrrrr

[–] smooth_tea@lemmy.world 2 points 1 month ago

Fewer mistakes might be a side effect, but the real reason the military and our dear leaders will welcome this is that they won't have to stir up the public emotionally so that we give up our sons and daughters. It will further reduce our opposition to war because "the only people dying are the bad ones". I can't wait to read how the next model will reduce the false-positive rate by another percentage point. Of course, I think it requires little imagination or intellect to figure out what the net result will be when the most noteworthy information we get from a war is the changelog for its soldiers, who have zero emotional response to taking a life.

Just like tasers were introduced to reduce gun incidents and are now often used as a form of cattle prod, they will function creep the shit out of this, and our adaptation to the idea of robots doing the killing will be over before we've perfected the technology.

It was unavoidable, though; someone always has to have the biggest gun. It's not our technological advancement that has to adapt to our mentality; we have to adapt to technological advancement. Perhaps the nuclear bomb was simply not frightening enough to change our ways.

[–] njm1314@lemmy.world 2 points 1 month ago (1 children)

Of course, totally not used in combat. That's why they strapped the AR-15 to it. AR-15 famously has no use in combat.

[–] ArbitraryValue@sh.itjust.works 1 points 1 month ago

I'm not saying they aren't intended to be used in combat. Of course they (or more sophisticated future robots for which they are the prototypes) are. I'm saying that they're not being used in combat right now.

[–] blackstampede@sh.itjust.works 2 points 1 month ago* (last edited 1 month ago) (1 children)

I attended a federal contracting conference a few months ago, and they had one of these things (or a variant) walking around the lobby.

From talking to the guy who was babysitting it, they can operate autonomously in units or be controlled in a general way (think higher level unit deployment and firing policies rather than individual remote control) given a satellite connection. In a panel at the same conference, they were discussing AI safety, and I asked:

Given that AI seems to be developing from less complex tasks like chess (still complicated, obviously, but a constrained problem) to more complex and ill-defined tasks like image generation, it seems inevitable that we will develop AI capable of producing strategic or tactical plans, if we haven't already. If two otherwise equally matched military units are fighting, it seems reasonable to believe that the one using an AI to make decisions within seconds would beat the one with human leadership, simply because it would react more quickly to changing battlefield conditions. This would place an enormous incentive on the US military to adopt AI-assisted strategic control, which would likely lead to units of autonomous weapons that are also controlled by an autonomous system. Do any of you have concerns about this, and if so, do you have any ideas about how we can mitigate the problem?

(Paraphrasing, obviously, but this is close)

The panel members looked at each other, looked at me, smiled, shrugged, and didn't say anything. The moderator asked them explicitly if they would like to respond, and they all declined.

I think we're at the point where an AI could be used to create strategies, and I would be very surprised if no one were trying to do this. We already have autonomous weapons, and it's only a matter of time before someone starts putting them together. Yeah, they will generally act reasonably, because they'll be trained on human tactics in a variety of scenarios, but that will be cold comfort to dead civilians who happened to get in the way of a hallucinating strategic model.

EDIT: I know I'm not actually addressing anything you said, but you seem to have thought about this a bit, and I was curious about what you thought of this scenario.
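EDIT 2: for anyone curious what "higher level control" means concretely, here's a toy sketch of the idea. Every name in it is invented for illustration; it has nothing to do with Ghost Robotics' actual interface. The point is just the contrast: the operator issues one order (a route plus a firing policy) and the unit executes autonomously, instead of someone joysticking each robot.

```python
# Hypothetical sketch of "unit deployment and firing policies" tasking,
# as opposed to individual remote control. All names are invented.
from dataclasses import dataclass
from enum import Enum


class FiringPolicy(Enum):
    HOLD_FIRE = "hold_fire"        # never engage
    RETURN_FIRE = "return_fire"    # engage only if fired upon
    WEAPONS_FREE = "weapons_free"  # engage identified targets


@dataclass
class UnitOrder:
    """One high-level tasking for a unit, sent once over a satellite link."""
    unit_id: str
    waypoints: list[tuple[float, float]]  # (lat, lon) patrol route
    policy: FiringPolicy


def task_unit(unit_id: str,
              route: list[tuple[float, float]],
              policy: FiringPolicy) -> UnitOrder:
    """Build a single order; the unit is autonomous from here on."""
    if not route:
        raise ValueError("a patrol route needs at least one waypoint")
    return UnitOrder(unit_id=unit_id, waypoints=route, policy=policy)


order = task_unit("alpha-1", [(24.71, 46.67), (24.72, 46.68)],
                  FiringPolicy.RETURN_FIRE)
print(order.policy.value)  # prints "return_fire"
```

Notice how little bandwidth this needs compared to teleoperation, which is exactly why a flaky satellite connection is enough for it.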

[–] ArbitraryValue@sh.itjust.works 1 points 1 month ago* (last edited 1 month ago)

My guess is that they didn't answer your question because they had strict instructions not to stray from the script on this topic. Saying the wrong thing could lead to a big PR problem, so I don't expect that people working in this field would be willing to have a candid public discussion even about topics they have given a lot of thought to. And I do expect they have thought hard about the ability of AI to obey orders accurately, for practical reasons if not ethical ones.

I mean, I am currently willing to say "the AIs will almost definitely kill civilians but we should build them anyway" because I don't work in defense. However, even I'm a little nervous saying that because one day I might want to. My friends who do work in defense have told me that the people who gave them clearance did investigate their online presence. (My background is in computational biochemistry but I look at what's going on in AI and I feel like nothing else is important in comparison.)

As for cold comfort: I think autonomous weapons are inevitable in the same way that the atom bomb was inevitable. Even if no one wants to see it used, everyone wants to have it because enemies will. However, I don't see a present need for strategic (as opposed to tactical) automation. A computer would have an advantage in battlefield control but strategy takes hours or days or years and so a human's more reliable ability to reason would be more important in that domain.

Once a computer can reason better than a human can, that's the end of the world as we know it. It's also inevitable like the atom bomb.