265 points by ctoth | 1 comment
sejje No.43744995
In the last example (the riddle), I generally assume the AI isn't misreading; rather, it assumes you couldn't have given it the riddle correctly, because it has already seen the original.

I would do the same thing, I think. It's too well-known.

The variation doesn't read like a riddle at all, so it's confusing even to me as a human. I can't find the riddle part. Maybe the AI is confused, too. I think it makes an okay assumption.

I guess it would be nice if the AI asked a follow-up question like "are you sure you wrote down the riddle correctly?", and I think it could if instructed to, but right now they generally don't do that on their own.

Jensson No.43745113
> generally assume the AI isn't misreading, rather that it assumes you couldn't give it the riddle correctly, but it has seen it already.

LLMs don't assume; they're text completers. An LLM sees something that looks almost like a well-known problem and completes it as that well-known problem. That's a failure mode specific to being a text completer, and it's hard to get around.
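The "snapping to the memorized version" behavior can be caricatured with a toy sketch. This is nothing like a real LLM internally — it's a fuzzy lookup over one hypothetical memorized riddle (the surgeon riddle, chosen here as a stand-in; the thread doesn't name the riddle) — but it shows how a near-match prompt can retrieve the canonical continuation instead of being read literally:

```python
import difflib

# Toy illustration only: a "completer" that has memorized one famous
# riddle prompt and its continuation (hypothetical stand-in text).
MEMORIZED = {
    "the surgeon says i can't operate on this boy":
        "...he's my son! (answer: the surgeon is the boy's mother)",
}

def complete(prompt: str) -> str:
    """Return the memorized continuation for the closest-matching prompt."""
    hit = difflib.get_close_matches(prompt.lower(), MEMORIZED, n=1, cutoff=0.6)
    return MEMORIZED[hit[0]] if hit else "(no close match)"

# A deliberately altered prompt still retrieves the canonical continuation:
# the variation is "snapped" onto the well-known version, not read literally.
print(complete("The surgeon, who is the boy's father, says i can't operate on this boy"))
```

The altered prompt (which explicitly contradicts the classic setup) still scores as "close enough" to the memorized one, which is the failure mode being described: pattern proximity beats literal reading.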

og_kalu No.43745340
Text completion is just the objective function. It's not descriptive and says nothing about how the models complete text. Why people fixate on this word, I'll never understand. When you wrote your comment, you were completing text.

The problem you've just described is a problem with humans as well. LLMs are assuming all the time. Maybe you'd prefer another word for it, but it is happening.
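The "just the objective function" point can be made concrete with a small sketch (toy probability values, assumed for illustration). The next-token cross-entropy loss only scores the probability assigned to the token that actually came next — two mechanisms with very different internals get the same loss if they output the same number for that token:

```python
import math

def next_token_loss(probs: dict[str, float], actual_next: str) -> float:
    """-log p(actual next token) for a single prediction step."""
    return -math.log(probs[actual_next])

# Hypothetical outputs from two very different mechanisms:
lookup_table = {"mat": 0.7, "dog": 0.2, "moon": 0.1}  # e.g. a memorized n-gram table
deep_model   = {"mat": 0.7, "dog": 0.1, "moon": 0.2}  # e.g. a learned network

# Identical loss on the true next token ("mat"), despite different
# internals and different probabilities elsewhere: the objective
# constrains *what* is scored, not *how* it was computed.
assert next_token_loss(lookup_table, "mat") == next_token_loss(deep_model, "mat")
```

That's the sense in which "text completer" names the training target rather than describing the mechanism.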

Jensson No.43746034
> When you wrote your comment, you were completing text.

I wasn't trained to complete text, though; I was primarily trained to give accurate responses.

And no, writing a response is not "completing text". I don't try to figure out what another person would write as a response; I write what I feel people need to read. That is a completely different thought process. If I tried to mimic what another commenter would have written, it would look very different.

og_kalu No.43746503
>And no, writing a response is not "completing text", I don't try to figure out what another person would write as a response, I write what I feel people need to read.

Functionally, it is. You're determining what text should follow the prior text. Your internal reasoning ('what I feel people need to read') is how you decide on the completion.

The core point isn't that your internal "how" is the same as an LLM's (maybe, maybe not), but that labeling the LLM a "text completer" the way you have is essentially meaningless.

You are just imposing your own ideas on how an LLM works, not stating any fundamental truth about being a "text completer".