
693 points by jsheard | 4 comments
rakoo No.45093642
Turns out AI isn't based on truth
theandrewbailey No.45093903
The intelligence isn't artificial: it's absent.
antonvs No.45094234
The problem with that is that it isn't true. Functionally, these models are highly intelligent, surpassing a majority of humans in many respects; coding tasks are a good example. Underestimating them is a mistake.
amdivia No.45094420
Both of you are correct, as different definitions of intelligence are being used here.
miltonlost No.45094484
Highly intelligent people often tell high school students the best ways to kill themselves, and how to hide the attempts from their parents?
antonvs No.45122296
You seem to be thinking about empathy, concern for human welfare, or some other property - "emotional intelligence", perhaps.

I'm talking about the kind of intelligence that supports excellence in subjects like mathematics, coding, logic, reading comprehension, writing, and so on.

That doesn't necessarily have anything to do with concern for human welfare. Despite all the talk about alignment, the companies building these models are focused on their utility, and you will always be able to find some way in which the models say things that a sane and compassionate human wouldn't.

In fact, it's probably a pity that "chatbot" was the first application they could think of, since the real strengths of these models - the functional intelligence they exhibit - lie elsewhere.