
1244 points adrianh | 5 comments
gortok ◴[] No.44495659[source]
I think folks have taken the wrong lesson from this.

It’s not that they added a new feature because there was demand.

They added a new feature because technology hallucinated a feature that didn’t exist.

The savior of tech, generative AI, was telling folks a feature existed that didn’t exist.

That’s what the headline is, and in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again, because next time it might not be so benign as it was this time.

replies(7): >>44495919 #>>44496083 #>>44496091 #>>44497641 #>>44498195 #>>44500852 #>>44505736 #
nomel ◴[] No.44496083[source]
> in a sane world the folks that run ChatGPT would be falling over themselves to be sure it didn’t happen again

This would be a world without generative AI available to the public, at the moment. Requiring perfection would mean either guardrails so strict they'd make it useless for most cases, or no LLM access until AGI exists; both are completely irrational, since many people are finding practical value in its current imperfect state.

LLMs in their current state are useful for what they're useful for, warnings about hallucinations are present on every official public interface, and their limitations are quickly understood with any real use.

Nearly everyone in AI research is working on this problem, directly or indirectly.
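For what it's worth, "guardrails" don't have to mean blocking output entirely. Here's a minimal sketch of the kind of grounding check a product team could bolt on, with hypothetical feature names and a made-up `groundClaim` helper (not anyone's actual implementation):

```typescript
// Hypothetical sketch, not anyone's real implementation: before surfacing a
// model's claim that a product supports some feature, check the claim against
// the documented feature list and hedge anything that can't be verified.

const DOCUMENTED_FEATURES = new Set(["pdf-import", "midi-export", "audio-sync"]);

interface ModelClaim {
  feature: string; // feature the model says exists
  answer: string;  // the drafted answer text
}

function groundClaim(claim: ModelClaim): string {
  if (DOCUMENTED_FEATURES.has(claim.feature)) {
    return claim.answer; // matches the docs, pass it through
  }
  // Unknown feature: don't assert it as fact, point the user at the docs instead.
  return `I can't verify that "${claim.feature}" is a supported feature; ` +
    `please check the official documentation before relying on it.`;
}

// Example: the model asserts a feature that isn't documented.
console.log(
  groundClaim({ feature: "ascii-tab-import", answer: "Yes, just upload your tab!" })
);
```

The point isn't that this is hard to build; it's that checks like this only cover claims you already know how to enumerate, which is exactly why they don't generalize.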

replies(3): >>44496098 #>>44496511 #>>44496702 #
1. gortok ◴[] No.44496098[source]
No one is “requiring perfection”, but hallucination is a major issue and is in the opposite direction of the “goal” of AGI.

If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.

replies(1): >>44496156 #
2. nomel ◴[] No.44496156[source]
> No one is “requiring perfection”

> If “don’t hallucinate” is too much to ask then ethics flew out the window long ago.

Those sentences aren't compatible.

> but hallucination is a major issue

Again, every official public AI interface has warnings/disclaimers for this issue. It's well known. It's not some secret. Every AI researcher is directly or indirectly working on this.

> is in the opposite direction of the “goal” of AGI

This isn't a logical statement, so it's difficult to respond to. Hallucination isn't a direction that's being headed towards; it's being actively headed away from, with intent and $$$.

replies(1): >>44497457 #
3. lucianbr ◴[] No.44497457[source]
> Those sentences aren't compatible.

My web browser isn't perfect, but it does not hallucinate nonexistent webpages. It sometimes crashes, it sometimes renders wrong, it has bugs and errors. It does not invent plausible-looking information.

There really is a lot of middle ground between perfect and "accept anything we give you, no matter how big the problems are".

replies(2): >>44497901 #>>44499185 #
4. jeffhuys ◴[] No.44497901{3}[source]
Different tech, different failure modes.

> it sometimes renders wrong

Is close to equivalent.

5. tucnak ◴[] No.44499185{3}[source]
> It does not invent plausible-looking information.

This is where your analogy falls apart: of course web browsers don't "invent plausible-looking information", because they don't invent anything in the first place! Web browsers represent a distinct set of capabilities, and as you correctly pointed out, they are often riddled with bugs and errors. If I were making a browser analogy, I would point to fingerprinting; most browsers reveal far too much information about any given user and system, via cross-site cookies, GPU fingerprints, and whatnot. That is an actual example where "ethics flew out the window long ago."

As the adjacent commenter pointed out: different software, different failure modes.