
1246 points adrianh | 1 comment
kelseyfrog No.44491336
> Should we really be developing features in response to misinformation?

Creating the feature means it's no longer misinformation.

The bigger issue isn't that ChatGPT produces misinformation - it's that it takes less effort to update reality to match ChatGPT than it takes to update ChatGPT to match reality. Expect to see even more of this as we march toward accepting ChatGPT's reality over other sources.

1. xp84 No.44493676
This seems like such a negative framing. LLMs are, approximately, predictors of what's logical or at least probable. In areas where the probable answer is both wrong and harmful, I don't think anybody is motivated to "update reality" as some kind of general rule.
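
To make the "predictor of the probable" framing concrete, here's a minimal sketch (assuming the Hugging Face transformers library, with GPT-2 as a stand-in model - my choice for illustration, not anything named in the thread). It ranks next-token continuations purely by probability; nothing in the computation represents whether a continuation is true:

    # Sketch: an LM scores continuations by probability, not truth.
    # Assumes: pip install torch transformers; GPT-2 as a stand-in model.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The capital of France is"
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        # Logits for the token that would come next after the prompt.
        logits = model(**inputs).logits[0, -1]

    # Softmax turns raw scores into a probability distribution
    # over the vocabulary; we print the five most probable tokens.
    probs = torch.softmax(logits, dim=-1)
    top = torch.topk(probs, 5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([idx]):>10}  p={p.item():.3f}")

The point of the sketch: when the training distribution makes the wrong answer the probable one, the model will rank it highest all the same, which is exactly the case the comment is describing.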