It's sad that it's now an increasingly accepted idea that information one seeks can be "harmful".
> or a blog post incorrectly written (whether in bad spirit or by accident)
Exactly as harmful.
I believe in content moderation for all public information platforms. HN is a good example.
Consider asking 'how do I replace a garage door torsion spring?'. The typical, overbearing response on low-quality DIY forums is that attempting to do so will likely result in grave injury or death. However, the process, done with the correct tools and procedure, is no more dangerous than climbing a ladder or working on a roof - tasks that don't seem to draw the same paternalistic response.
I'd argue a properly-disclaimered response that outlines the required tools, the careful procedure, and steps to lower the chance of injury is far safer than a blanket 'never attempt this'. The latter is certainly easier, however.
Such a response can only be provided by an expert, and current LLMs aren't experts. They can produce expert-level output, but they don't know whether they actually have the right knowledge, so it's not the same.
If an AI can accurately represent itself as an expert in a dangerous topic, sure, it's fine for it to give out advice. As the poster above said, a mushroom-specific AI could potentially be a great thing to have in your back pocket while foraging. But ChatGPT? Current LLMs should not be giving out advice on dangerous topics because there's no mechanism for them to act as an expert.
Humans broadly have three modes of knowledge-holding:
1) We know we don't know the answer. This is "Don't try to fix your garage door, because it's too dangerous [because I don't know how to do it safely]."
2) We know we know the answer, because we're an expert and we've tested and verified our knowledge. This is the person giving you the correct and exact steps, clearly and without ambiguity, telling you what kinds of mistakes to watch out for so that the procedure is not dangerous if followed precisely.
3) We think we know the answer, because we've learned some information. (This could, by the way, include people who have done the procedure but haven't learned it well enough to teach it.) This is where today's LLMs sit at all times. This is where the danger is: someone in this mode will tell people to do something they think they understand, and will find out they were wrong only when it's too late.
It is indeed a problem that LLMs can instill a false sense of trust because they will confidently hallucinate. I see it as an education problem. You know and I know that LLMs can hallucinate and should not be trusted. The rest of the population needs to be educated on this fact as well.