
617 points | jbegley | 1 comments
A4ET8a8uTh0_v2 | No.42940660
I want to be upset over this in an exasperated, oddly naive "why can't we all get along?" frame of mind. I want to, because I know what I would like the world to look like, but as a species we, myself included, continually fail to disappoint when it comes to nearly guaranteed self-destruction.

I want to get upset over it, but I sadly recognize the reality of why this is not surprising to anyone. We actually have competitors in that space who will do that and more. We have already seen some of the more horrifying developments in that area, and, when you think about it, those are only the things that were allowed to be shown publicly. All the fun stuff is happening behind closed doors, away from social media.

replies(9): >>42940696 #>>42941054 #>>42941060 #>>42941115 #>>42941183 #>>42941453 #>>42941855 #>>42941871 #>>42941899 #
mkolodny | No.42941899
A vague “stuff is happening behind closed doors” isn’t enough of a reason to build AI weapons. If you shared a specific weapon that could only be countered with AI weapons, that might make me feel differently. But right now I can’t imagine a reason we’d need or want robots to decide who to kill.

When people talk about AI being dangerous, or possibly bringing about the end of the world, I usually disagree. But AI weapons are obviously dangerous, and could easily get out of control. Their whole point is that they are out of human control.

The issue isn’t that AI weapons are “evil”. It’s that value alignment isn’t a solved problem, and AI weapons could kill people we wouldn’t want them to kill.

replies(4): >>42941949 #>>42942109 #>>42942160 #>>42942495 #
computerthings | No.42942495
> AI weapons are obviously dangerous, and could easily get out of control.

The real danger is when they can't. When they, without hesitation or remorse, kill one person or millions with maximum efficiency, or "just" exist with that capability, to threaten people with such a fate. And unlike nuclear weapons, in the case of a stalemate between superpowers they can also be turned inward.

Using AI for defensive weapons is one thing, and maybe some of those would have to shoot explosives at other things to defend; but just going with "eh, we need ALL possible offensive capability to defend against ANY possible offensive capability" is not credible to me.

The threat scenario is supposed to be masses of enemy automated weapons, not huddled masses; so why isn't the objective to develop weapons that are really good at fighting automated weapons, but literally can't/won't kill humans, because that would remain something only human soldiers do? Quite the elephant on the couch, IMO.