
1246 points adrianh | 1 comment
gortok ◴[] No.44495659[source]
I think folks have taken the wrong lesson from this.

It’s not that they added a new feature because there was demand.

They added a new feature because the technology hallucinated a feature that didn’t exist.

The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.

That’s what the headline is, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be as benign as it was this time.

replies(7): >>44495919 #>>44496083 #>>44496091 #>>44497641 #>>44498195 #>>44500852 #>>44505736 #
nomel ◴[] No.44496083[source]
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again

This would be a world without generative AI available to the public, at least for the moment. Requiring perfection would mean either guardrails so restrictive they make it useless for most cases, or no LLM access until AGI exists, which are both completely irrational, since many people are finding practical value in its current imperfect state.

LLMs in their current state are useful for what they're useful for; warnings about hallucinations are present on every official public interface, and their limitations are quickly understood with any real use.

Nearly everyone in AI research is working on this problem, directly or indirectly.

replies(3): >>44496098 #>>44496511 #>>44496702 #
Velorivox ◴[] No.44496511[source]
> which are both completely irrational

Really!?

[0] https://i.imgur.com/ly5yk9h.png

replies(1): >>44499203 #
tucnak ◴[] No.44499203[source]
Your screenshot is conveniently omitting the disclaimer below: "AI responses may include mistakes. Learn more[1]"

[1]: https://support.google.com/websearch/answer/14901683

replies(1): >>44499908 #
Velorivox ◴[] No.44499908{3}[source]
It isn't doing anything "conveniently": I was not shown the disclaimer (nor anything else; I assume it mostly failed to load).

In any case, if you really believe a disclaimer makes it okay for Google to display blatant misinformation in a first-party capacity, we have little to discuss.

replies(1): >>44500306 #
tucnak ◴[] No.44500306{4}[source]
https://www.google.com/search?q=is+all+of+oregon+north+of+ne...

Show more -> the disclaimer and the feedback buttons are shown at the end. If you had bothered to read the full response, you would have seen the disclaimer, but you never did. For something to be considered "misinformation," at the very least the subject of speech has to be asserting its truthfulness, and Google makes no such claim. The claim they are making is precisely that their search-result-embedded "[..] responses may include mistakes." In this specific case, they are not asserting truthfulness.

FWIW, Gemini 2.5 Pro answers the question correctly.

The search hints are clearly a low-compute first approximation, which is probably correct for most trivial questions, likely the majority of user queries, and it's not surprising that it fails in this specific instance. The application doesn't allow for reasoning at that scale; even Google cannot afford to run reasoning traces on every search query. I concur that there's very little to discuss: you have seemingly made up your mind about LLM technology, and I doubt you will appreciate having your semantics picked apart to begin with.

replies(1): >>44502060 #
1. ◴[] No.44502060{5}[source]