1106 points sama | 3 comments
1. codecamper ◴[] No.12512797[source]
Elon says we must be careful to get AI right.

However, the value of his company is already based on the premise of self-driving cars.

Self-driving cars will cause a pretty massive shift in the world. I'm all for it & really do think that most people suck at driving. However...

I have a hard time following his advice about getting AI right while his plan is to profit immensely from AI.

Maybe his moral compass is telling him that AI will cause problems, but that it's better to have a seat at the table when the deluge hits.

replies(1): >>12512827 #
2. rhaps0dy ◴[] No.12512827[source]
The term AI has two somewhat separate meanings, and you're conflating them. Which is unsurprising, given that Musk himself isn't saying anything to separate them.

Anyways. The AI we must be careful with is "strong AI", that is, AI that is human-level in all or most intellectual endeavours.

The AI he will profit from is "weak AI", i.e. the current and foreseeable AI technology. We do still need to be careful with that one, but not as much. It's industrial-equipment levels of careful.

Strong AI needs nuclear ICBM levels of careful, maybe even more.

replies(1): >>12513046 #
3. mcv ◴[] No.12513046[source]
Strong AI is fantasy AI, in my opinion. We're not interested in creating artificial well-rounded humans; we're interested in automating specific, complicated tasks. Like driving a car. And in some ways, AI is already better than us at driving cars: fewer accidents, always perfect attention, etc.

But an AI with no real purpose, yet able to think and feel the way we do? I see no reason why anyone would put that in charge of ICBMs.