
1246 points | adrianh | 1 comment
gortok ◴[] No.44495659[source]
I think folks have taken the wrong lesson from this.

It’s not that they added a new feature because there was demand.

They added a new feature because technology hallucinated a feature that didn’t exist.

The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.

That’s what the headline is, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be so benign.

replies(7): >>44495919 #>>44496083 #>>44496091 #>>44497641 #>>44498195 #>>44500852 #>>44505736 #
nomel ◴[] No.44496083[source]
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again

This would be a world without generative AI available to the public, at the moment. Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists; both are completely irrational, since many people are finding practical value in its current imperfect state.

LLMs in their current state are useful for what they're useful for, warnings about hallucinations are present on every official public interface, and their limitations are quickly understood with any real use.

Nearly everyone in AI research is working on this problem, directly or indirectly.

replies(3): >>44496098 #>>44496511 #>>44496702 #
epidemian ◴[] No.44496702[source]
> Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists

What?? What does AGI have to do with this? (If this was some kind of hyperbolic joke, sorry, I didn't get it.)

But, more importantly, the GP only said that in a sane world, the ChatGPT creators should be the ones trying to fix this mistake on ChatGPT. After all, it's obviously a mistake on ChatGPT's part, right?

That was the main point of the GP post. It was not about "requiring perfection" or something like that. So please let's not attack a straw man.

replies(1): >>44497790 #
nomel ◴[] No.44497790[source]
> What does AGI have to do with this?

Their requirement is no hallucinations [1], also stated as "be sure it didn't happen again" in the original comment. If you define a hallucination as something that wasn't in the training data, directly or indirectly (indirectly being something like an "obvious" abstract concept), then you've placed a profound constraint on the system: you're requiring determinism. Given the non-deterministic statistics these models run on, that requirement fundamentally rules out using an LLM as they exist today. They're not "truth" machines; use a database instead.
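
To make that distinction concrete, here's a toy sketch (the names and the tiny hand-written "model" below are purely illustrative, not any real system or API): a database lookup either returns a stored fact or fails loudly, while a sampled model draws its answer from a probability distribution, so the same question can come back with different answers.

    import random

    # Deterministic store: either the fact is there, or the lookup fails loudly.
    FACTS = {"capital_of_france": "Paris"}

    def lookup(key: str) -> str:
        return FACTS[key]  # raises KeyError (an explicit "I don't know") if absent

    # Toy stand-in for an LLM: answers are *sampled* from a learned distribution,
    # so asking the same question twice can produce different outputs.
    ANSWER_DISTRIBUTION = {
        "Paris": 0.90,        # well supported
        "Lyon": 0.07,         # plausible-sounding but wrong
        "Springfield": 0.03,  # made up, i.e. a hallucination
    }

    def sample_answer() -> str:
        answers, weights = zip(*ANSWER_DISTRIBUTION.items())
        return random.choices(answers, weights=weights, k=1)[0]

    print(lookup("capital_of_france"))          # always "Paris"
    print([sample_answer() for _ in range(5)])  # usually "Paris", occasionally not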

Saying "I don't know", with determinism is only slightly different than saying "I know" with determinism, since it requires being fully aware of what you do know, not at a fact level, but at a conceptual/abstract level. Once you have a system that fully reasons about concepts, is self aware of its own knowledge, and can find the fundamental "truth" to answer a question with determinism, you have something indistinguishable from AGI.

Of course, there's a terrible hell that lives between those two, in the form of "Error: Question outside of known questions." I think a better alternative to that hell would be a breakthrough that allowed "confidence" to be quantified: accept that hallucinations will exist, but present the uncertainty to the user.
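
As a rough illustration of that alternative, here's a minimal sketch assuming access to per-token log-probabilities (many models expose these); generate_with_logprobs is a hypothetical placeholder, and geometric-mean token probability is only a crude, uncalibrated proxy for confidence, not a measure of truth.

    import math

    def generate_with_logprobs(prompt: str) -> tuple[str, list[float]]:
        # Hypothetical helper: returns the model's answer plus the log-probability
        # it assigned to each generated token. Stand-in for a real model/API call.
        raise NotImplementedError  # wire up to an actual model or API

    def confidence_score(token_logprobs: list[float]) -> float:
        # Geometric-mean token probability: how "sure" the model was while
        # generating, not a calibrated probability that the answer is correct.
        if not token_logprobs:
            return 0.0
        return math.exp(sum(token_logprobs) / len(token_logprobs))

    def answer_with_uncertainty(prompt: str, threshold: float = 0.5) -> str:
        answer, logprobs = generate_with_logprobs(prompt)
        score = confidence_score(logprobs)
        if score < threshold:
            return f"{answer}\n[low confidence: {score:.2f}; this may be a hallucination]"
        return f"{answer}\n[confidence: {score:.2f}]"

The hard research part is a scoring function and threshold that actually track correctness; the plumbing above is trivial.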

[1] https://news.ycombinator.com/item?id=44496098

replies(2): >>44498919 #>>44501513 #
epidemian ◴[] No.44501513[source]
> If you define a hallucination as something that wasn't in the training data, directly or indirectly (indirectly being something like an "obvious" abstract concept), then [...]

Ok, sure. But why would you choose to define hallucinations in a way that is contrary to common sense and the normal understanding of what an AI hallucination is?

The common definition of hallucinations is basically: when AI makes shit up and presents it as fact. (And the more technical definition also basically aligns with that.)

No one would say the AI is hallucinating if it takes the data you provide in the prompt and deduces a correct answer for that specific data (something that is not directly or indirectly present in its training data). In fact, that would be an expected thing for an intelligent system to do.

It seems to me you're arguing with something nobody said. You're making it seem that saying "it's bad that LLMs can invent wrong/misleading information like this and present it as fact, and that the companies that deploy them don't seem to care" is equivalent to saying "I want LLMs to be perfect and have no bugs whatsoever", and then arguing about how ridiculous the latter is.

replies(2): >>44505262 #>>44505280 #