
1245 points adrianh | 4 comments
1. kelseyfrog No.44491336
> Should we really be developing features in response to misinformation?

Creating the feature means it's no longer misinformation.

The bigger issue isn't that ChatGPT produces misinformation - it's that it takes less effort to update reality to match ChatGPT than it takes to update ChatGPT to match reality. Expect to see even more of this as we march toward accepting ChatGPT's reality over other sources.

replies(3): >>44492728 >>44492902 >>44493676
2. mnw21cam No.44492728
I'd prefer to think of this as developing a feature that someone else is already providing the advertising for.
3. pmontra No.44492902
How many times has a salesman sold features that didn't exist yet?

If a feature has enough customers to pay for itself, develop it.

4. xp84 No.44493676
This seems like such a negative framing. LLMs are (approximately) predictors of what's either logical or at least probable. In areas where what's probable is wrong and also harmful, I don't think anybody is motivated to "update reality" as some kind of general rule.
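
To make the "predictor of what's probable" point concrete, here is a minimal sketch using the Hugging Face transformers library. The gpt2 model and the prompt string are placeholders chosen for illustration, not anything from the thread; the point is only that a causal LM ranks next tokens by probability given the prompt, with no notion of whether the resulting completion describes a feature that actually exists.

    # Minimal sketch: a causal LM assigns probabilities to next tokens,
    # regardless of whether the most probable continuation is true.
    # Assumes: pip install torch transformers
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")  # placeholder model
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    # Hypothetical prompt about a product feature that may not exist.
    prompt = "To export your annotations, open the File menu and click"
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (batch, seq_len, vocab)

    # Distribution over the next token: pure probability, not truth.
    probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx)!r}: {p:.3f}")

Whatever completion the model prints here is simply the highest-probability continuation of the prompt, which is the mechanism behind both useful answers and confidently described nonexistent features.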