
265 points ctoth | 3 comments
sejje ◴[] No.43744995[source]
In the last example (the riddle), I generally assume the AI isn't misreading; rather, it assumes you couldn't have given it the riddle correctly, because it has seen it before.

I would do the same thing, I think. It's too well-known.

The variation doesn't read like a riddle at all, so it's confusing even to me as a human. I can't find the riddle part. Maybe the AI is confused, too. I think it makes an okay assumption.

I guess it would be nice if the AI asked a follow-up question like "are you sure you wrote down the riddle correctly?", and I think it could if instructed to, but right now they don't generally do that on their own.
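Something like this, for example (a rough sketch with the OpenAI Python client; the model name and prompt wording here are my own and purely illustrative):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Explicitly instruct the model to question near-miss riddles
    # instead of pattern-matching them to the famous version.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative choice of model
        messages=[
            {
                "role": "system",
                "content": (
                    "If the user's question closely resembles a well-known "
                    "riddle but differs from it in some detail, do not "
                    "answer the famous version. First ask whether they "
                    "wrote the riddle down correctly."
                ),
            },
            {"role": "user", "content": "<the modified riddle goes here>"},
        ],
    )
    print(response.choices[0].message.content)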

replies(5): >>43745113 #>>43746264 #>>43747336 #>>43747621 #>>43751793 #
Jensson ◴[] No.43745113[source]
> generally assume the AI isn't misreading; rather, it assumes you couldn't have given it the riddle correctly, because it has seen it before.

An LLM doesn't assume; it's a text completer. It sees something that looks almost like a well-known problem, and it completes it as that well-known problem. That's a failure mode specific to being a text completer, and it's hard to get around.
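You can see the raw behavior with a base model that has no instruction tuning (a minimal sketch using the Hugging Face transformers library; GPT-2 chosen only because it's small, and the point is the tendency, not this exact output):

    from transformers import pipeline

    # A base language model only continues text; there is no
    # instruction-following layer on top.
    generator = pipeline("text-generation", model="gpt2")

    # A near-miss on a famous riddle. A pure text completer tends to
    # drift back toward the well-known version from its training data
    # rather than engage with the variant actually written here.
    prompt = ("The surgeon, who is the boy's father, says: "
              "'I can't operate on this boy, he's my son!' "
              "Who is the surgeon?")
    out = generator(prompt, max_new_tokens=40, do_sample=False)
    print(out[0]["generated_text"])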

replies(6): >>43745166 #>>43745289 #>>43745300 #>>43745301 #>>43745340 #>>43754148 #
1. sejje ◴[] No.43745301[source]
> it's a text completer

Yes, and it can express its assumptions in text.

Ask it to make some assumptions, like about a stack for a programming task, and it will.

Whether or not the mechanism behind it feels like real thinking to you, it can definitely do this.

replies(1): >>43746266 #
2. wobfan ◴[] No.43746266[source]
If you call putting together text that reads like an assumption "expressing assumptions", then yes. But it cannot express an assumption, because it is not assuming anything. It is completing text, like OP said.
replies(1): >>43746472 #
3. ToValueFunfetti ◴[] No.43746472[source]
It's trained to complete text, but it does so by constructing internal circuitry during training. We don't have enough transparency into that circuitry or the human brain's to positively assert that it doesn't assume.

But I'd wager it's there; assuming is not a particularly impressive or computationally intensive operation. There's a tendency to bundle all of human consciousness into the definitions of our cognitive components, but I would argue that, e.g., a branch predictor meets the bar for any sane definition of 'assume'.
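For a concrete sense of how cheap the operation is, here is a two-bit saturating counter, the textbook branch-predictor design (a sketch; real hardware predictors are more elaborate):

    # A 2-bit saturating counter, the classic branch predictor.
    # States 0-1 predict "not taken"; states 2-3 predict "taken".
    class TwoBitPredictor:
        def __init__(self):
            self.state = 2  # start out weakly assuming "taken"

        def predict(self) -> bool:
            # The "assumption": commit to an outcome before it is known.
            return self.state >= 2

        def update(self, taken: bool) -> None:
            # Revise the assumption from what actually happened,
            # saturating so a single surprise doesn't flip it.
            self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

    p = TwoBitPredictor()
    for outcome in [True, True, False, True]:  # observed branch history
        print(f"assumed {p.predict()}, actual {outcome}")
        p.update(outcome)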