
1257 points adrianh | 2 comments
gortok
I think folks have taken the wrong lesson from this.

It’s not that they added a new feature because there was demand.

They added a new feature because technology hallucinated a feature that didn’t exist.

The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.

That’s what the headline says, and in a sane world the folks who run ChatGPT would be falling over themselves to make sure it didn’t happen again, because next time it might not be as benign as it was this time.

nomel
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again

This would be a world without generative AI available to the public, at least for now. Requiring perfection would mean either guardrails so strict they'd make it useless for most cases, or no LLM access until AGI exists. Both are completely irrational demands, since many people are finding practical value in its current, imperfect state.

The current state of LLMs is useful for what it's useful for: warnings about hallucinations are present on every official public interface, and the limitations become clear quickly with any real use.

Nearly everyone in AI research is working on this problem, directly or indirectly.

epidemian
> Requiring perfection would either mean guardrails that would make it useless for most cases, or no LLM access until AGI exists

What?? What does AGI have to do with this? (If this was some kind of hyperbolic joke, sorry, I didn't get it.)

But, more importantly, the GP only said that in a sane world, the ChatGPT creators should be the ones trying to fix this mistake in ChatGPT. After all, it's obviously a mistake on ChatGPT's part, right?

That was the main point of the GP post. It was not about "requiring perfection" or something like that. So please let's not attack a straw man.

nomel
> What does AGI have to do with this?

Their requirement is no hallucinations [1], also stated as "be sure it didn't happen again" in the original comment. If you define a hallucination as output that wasn't in the training data, directly or indirectly (indirectly meaning something like an "obvious" abstract concept), then you've placed a profound constraint on the system: you're demanding determinism. Given the non-deterministic, statistical sampling these models run on, that demand fundamentally rules out LLMs as they exist today. They're not "truth" machines; if you need guaranteed facts, use a database instead.
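As a toy sketch of what that non-determinism looks like (the token scores and temperature below are made up for illustration, not any real model's internals): generation samples from a probability distribution over next tokens, so the same prompt can legitimately produce different answers on different runs.

    import math
    import random

    # Hypothetical next-token scores; the numbers are invented for illustration.
    logits = {"Paris": 4.0, "Lyon": 2.5, "London": 1.0}

    def sample_next_token(logits, temperature=1.0):
        # Softmax with temperature: higher temperature flattens the
        # distribution, so less likely tokens get picked more often.
        weights = {tok: math.exp(score / temperature) for tok, score in logits.items()}
        total = sum(weights.values())
        r = random.uniform(0, total)
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok  # floating-point edge case: fall back to the last token

    # Repeated runs can legitimately disagree: the model samples, it doesn't look up.
    print([sample_next_token(logits, temperature=0.8) for _ in range(5)])

(Setting the temperature to zero gives you greedy decoding, which is repeatable but still not "true": it just deterministically picks the most probable token.)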

Saying "I don't know", with determinism is only slightly different than saying "I know" with determinism, since it requires being fully aware of what you do know, not at a fact level, but at a conceptual/abstract level. Once you have a system that fully reasons about concepts, is self aware of its own knowledge, and can find the fundamental "truth" to answer a question with determinism, you have something indistinguishable from AGI.

Of course, there's a terrible hell that lives between those two, in the form of "Error: Question outside of known questions." I think a better alternative to that hell would be a breakthrough that allowed "confidence" to be quantified: accept that hallucinations will exist, but present the uncertainty to the user.
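A crude sketch of what surfacing that could look like, assuming you can get per-token probabilities out of the model (the numbers and the 0.7 threshold are invented; actually calibrating such scores into honest confidence is the unsolved part):

    import math

    # Hypothetical probabilities a model assigned to each token of its own
    # answer. Made-up numbers; real APIs expose these as log-probs, if at all.
    token_probs = [0.92, 0.85, 0.30, 0.77]

    def answer_confidence(probs):
        # Geometric mean of the per-token probabilities: one crude proxy
        # for "how sure was the model about this answer overall".
        return math.exp(sum(math.log(p) for p in probs) / len(probs))

    confidence = answer_confidence(token_probs)
    if confidence < 0.7:
        print(f"Low confidence ({confidence:.2f}) -- treat this answer as a guess.")
    else:
        print(f"Answer given with confidence {confidence:.2f}.")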

[1] https://news.ycombinator.com/item?id=44496098

penteract
You have a very strong definition of AGI. "Never being wrong" is something that humans fall far short of.
nomel
That's not my definition of AGI. To simplify what I said: "never being wrong" (a.k.a. "don't hallucinate") requires a level of agency and rigor that could only be achieved by something that would be an AGI. I said "determinism would require an AGI", not "AGIs are deterministic".

Note that "never being wrong" can also be achieved by an "I looked into it, and there's no clear answer.", which is the correct answer for many questions (humans not required).