
1246 points adrianh | 6 comments
1. thih9 ◴[] No.44493538[source]
What made ChatGPT think that this feature is supported? And a follow-up question: is that the direction SEO is going to take?
replies(3): >>44493563 #>>44495785 #>>44497834 #
2. swalsh ◴[] No.44493563[source]
I'd guess the answer is that GPT-4o is an outdated model that's not as anchored in reality as newer models. It's pretty rare for me to see Sonnet or even o3 just outright tell me plausible but wrong things.
replies(1): >>44495733 #
3. antonvs ◴[] No.44495733[source]
Hallucinations still occur regularly in all models. It’s certainly not a solved problem. If you’re not seeing them, either the kinds of queries you’re doing don’t tend to elicit hallucinations, or you’re incorrectly accepting them as real.

The example in the OP is a common one: ask a model how to do something with a tool, and if there's no easy way to perform that operation, it'll commonly make up a plausible answer.

4. antonvs ◴[] No.44495785[source]
> What made ChatGPT think that this feature is supported?

It was a plausible answer, and the core of what these models do is generate plausible responses to (or continuations of) the prompt they’re given. They’re not databases or oracles.

With errors like this, if you ask a follow-up question it'll typically agree that the feature isn't supported, because the text of that question combined with its training essentially prompts it to reach that conclusion.

Re the follow-up question, it’s almost certainly the direction that advertising in general is going to take.
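
To make "generate plausible continuations" concrete, here is a minimal sketch assuming the Hugging Face transformers library and the small gpt2 checkpoint (illustrative stand-ins, not what ChatGPT runs); the prompt string is a made-up example. The sampled text is fluent and confident-sounding, but nothing in the loop checks it against any tool's actual feature set.

    # Sketch: sampling "plausible" continuations from a small causal LM.
    # Assumes the transformers library and the gpt2 checkpoint; the prompt
    # is a hypothetical question about a tool feature.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "To export your project as plain text in this tool, go to"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Sample three continuations; each reads like an answer, whether or
    # not the described feature exists.
    outputs = model.generate(
        **inputs,
        do_sample=True,
        top_p=0.9,
        max_new_tokens=30,
        num_return_sequences=3,
        pad_token_id=tokenizer.eos_token_id,
    )
    for seq in outputs:
        print(tokenizer.decode(seq, skip_special_tokens=True))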

5. poulpy123 ◴[] No.44497834[source]
Nothing. An LLM doesn't think; it just assigns probabilities to words.
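
As a concrete illustration of "assigns probabilities to words", here is a minimal sketch, again assuming the transformers library and the gpt2 checkpoint (illustrative only), that prints the model's top next-token probabilities for a prompt.

    # Sketch: the raw object an LLM produces is a probability
    # distribution over its vocabulary for the next token.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits

    # Softmax over the last position gives next-token probabilities.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top_probs, top_ids = next_token_probs.topk(5)
    for p, i in zip(top_probs, top_ids):
        print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")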
replies(1): >>44498334 #
6. thih9 ◴[] No.44498334[source]
Note that I am replying to the submission and reusing the wording from its title.

Also, I’m not suggesting an LLM is actually thinking. We’ve been using “thinking” in a computing context for a long time.