
1246 points adrianh | 2 comments
thih9 ◴[] No.44493538[source]
What made ChatGPT think that this feature is supported? And a follow-up question: is that the direction SEO is going to take?
replies(3): >>44493563 #>>44495785 #>>44497834 #
1. swalsh ◴[] No.44493563[source]
I'd guess the answer is that GPT-4o is an outdated model that's not as anchored in reality as newer models. It's pretty rare for me to see Sonnet or even o3 just outright tell me plausible but wrong things.
replies(1): >>44495733 #
2. antonvs ◴[] No.44495733[source]
Hallucinations still occur regularly in all models. It’s certainly not a solved problem. If you’re not seeing them, either the kinds of queries you’re doing don’t tend to elicit hallucinations, or you’re incorrectly accepting them as real.

The example in the OP is a common one: ask a model how to do something with a tool, and if there's no easy way to perform that operation, it will often make up a plausible-sounding answer.