
324 points rntn | 1 comment
cakealert ◴[] No.44612557[source]
EU regulations are sometimes able to bully the world into compliance (e.g. cookies).

Usually minorities are able to impose "wins" on a majority when the price of compliance is lower than the price of defiance.

This is not the case with AI. The stakes are enormous. AI is full steam ahead and no one is getting in the way short of nuclear war.

replies(2): >>44612773 #>>44613635 #
oaiey ◴[] No.44612773[source]
But AI also carries tremendous risks, from something as simple as automating warfare to something like an evil AGI.

In Germany we still have traumas from the automatic machine guns set up on the wall between East and West Germany. Ukraine is fighting a drone war in the trenches, with a psychological effect on soldiers comparable to WWI.

The stakes are enormous, and not only toward the good. There is enough science fiction written about it. Regulation and laws are necessary!

replies(4): >>44613225 #>>44614062 #>>44614492 #>>44614965 #
zettabomb ◴[] No.44613225[source]
I don't disagree that we need regulation, but I also think citing literal fiction isn't a good argument. We're also very, very far away from anything approaching AGI, so the idea of it becoming evil seems a bit far-fetched.
replies(4): >>44613315 #>>44613507 #>>44614507 #>>44614851 #
ben_w ◴[] No.44613507[source]
I agree fiction is a bad argument.

On the other hand: firstly, every single person disagrees about what the phrase AGI means, with definitions varying from "we've had it for years already" to "the ability to do provably impossible things like solve the halting problem". Secondly, we have a very bad track record for predicting how long anything in the field of AI will take to invent, with failures in both directions: constantly thinking that self-driving cars are just around the corner, versus people saying an AI that could play Go well was "decades" away a mere few months before it beat the world champion.