617 points jbegley | 10 comments
1. blackeyeblitzar ◴[] No.42938399[source]
Other countries will use AI for weapons - shouldn’t the EU and US also do that to remain competitive?
replies(4): >>42938513 #>>42938767 #>>42938830 #>>42956491 #
2. jsheard ◴[] No.42938513[source]
It's not exactly unheard of for certain weapons to be declared off-limits by most countries even if the "bad guys" are using them - think chemical and biological agents, landmines, cluster munitions, blinding weapons and so on. I doubt there will ever be treaties completely banning any use of AI in warfare but there might be bans on specific applications, particularly using it to make fully autonomous weapons which select and dispatch targets with no human in the loop, for similar reasons to why landmines are mostly banned.
replies(3): >>42938698 #>>42938716 #>>42939158 #
3. geodel ◴[] No.42938698[source]
But it hasn't been decided that AI belongs on that list.
4. nradov ◴[] No.42938716[source]
Landmines and cluster munitions have been among Ukraine's most effective weapons for resisting the Russian invasion. Without those, Ukraine would likely have already lost the war. It's so bizarre how some people who face no real risks themselves think that those weapons should be declared off-limits.
replies(1): >>42938756 #
5. jsheard ◴[] No.42938756{3}[source]
Nobody said they're not effective during a war, the problem is they remain effective against any random civilians who happen to stumble across them for a long time after the war is over. Potentially decades, as seen in Cambodia.

It would be a bit of a Pyrrhic victory to repel an attempted takeover of your land, only for that land to end up contaminated with literally millions of landmines because you didn't have a mutual agreement against using them.

replies(1): >>42939600 #
6. AvAn12 ◴[] No.42938767[source]
The analogy is not apt. If other countries are trying to pry into our data and systems, then the right move for Google or any other tech company is to advance our defenses and make cybersecurity stronger, more available, and easier for companies and people to use. If someone is trying to hack me, it's much smarter for me to defend myself than to try to hack the other guy back.
7. smileson2 ◴[] No.42938830[source]
Personally I don’t care if ML is used for weapons development assuming there are standards

It's the companies that hoard everyone's personal information, that eroded the concept of privacy while mediating our lives with false promises of trust, and that are now turning into state intelligence agencies, that bother me.

The incentives and results become fucked up, and the safeguards become less likely to work. I get that not a lot of people care, but it's dangerous.

8. murderfs ◴[] No.42939158[source]
They're declared off-limits because the military doesn't want them. Biological and chemical weapons aren't useful to modern militaries. Landmines and cluster munitions are, so none of the countries that actually matter have banned them!

This is an excellent overview of why: https://acoup.blog/2020/03/20/collections-why-dont-we-use-ch...

9. nradov ◴[] No.42939600{4}[source]
People who are defending against an existential threat today don't have the luxury of worrying about contamination tomorrow. I think at this point Ukraine will take a Pyrrhic victory if the alternative is their end as a fully sovereign nation state. And let's be clear about the current situation: if Ukraine and Russia had a mutual agreement against using those weapons then Ukraine would probably have already lost. Landmines in particular are extremely effective as a force multiplier for outnumbered defenders.
10. impossiblefork ◴[] No.42956491[source]
Yes, but there should probably be some kind of separation between the AI weapons and surveillance parts of the business and the parts that provide communications and search services.

It's not really appropriate for an AI weapons firm to be an integrated part of something which has access to information from which sensitive details such as political beliefs can be easily extracted.

It's a problem if someone is looking at sensitive user data one day and at how to categorize people so they can be put on kill lists the next.