[0] https://www.qut.edu.au/news/realfocus/deaths-linked-to-chatb...
[1] https://www.theguardian.com/uk-news/2023/jul/06/ai-chatbot-e...
I'm increasingly coming around to the notion that AI tooling should have safety features aimed at not directly exposing humans to ever-increasing levels of 'convincingness' in generated output. Something like a weaker model used as a buffer, as sketched below.
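A minimal sketch of what that buffer might look like, assuming the simplest possible wiring: the strong model's raw output is never shown to a human, and a weaker model re-expresses it first, capping the rhetorical force that actually reaches the user at the weaker model's level. The function names here are hypothetical stand-ins, not any real API.

```python
# Sketch only: 'strong_model' and 'weak_model' are hypothetical placeholders
# for a frontier model and a deliberately weaker paraphrasing model.

def strong_model(prompt: str) -> str:
    """Stand-in for a highly capable, highly persuasive model."""
    return f"[highly convincing answer to: {prompt}]"

def weak_model(instruction: str, text: str) -> str:
    """Stand-in for a weaker model used purely as a paraphrasing buffer."""
    return f"[plain restatement of: {text}]"

def buffered_answer(prompt: str) -> str:
    raw = strong_model(prompt)
    # The user only ever sees the weak model's flattened paraphrase, so the
    # 'convincingness' exposed to a human is bounded by the buffer model,
    # not by the frontier model.
    return weak_model("Restate the following plainly, without persuasion:", raw)

if __name__ == "__main__":
    print(buffered_answer("Should I trust this advice?"))
```

The obvious trade-off is that the buffer degrades useful information along with rhetoric; whether that exchange is worth it is exactly the safety question being raised.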
Projecting out to 5-10 years: what happens when LLMs are still producing hallucinatory semi-sense, but merely comprehending it makes the machine temporarily own you? A bit like getting hair caught in an angle grinder, that.
Like most safety regulations, it'll take blood for the inking. Exposing huge numbers of people to these models strikes me as wildly negligent if we expect continued improvement along this axis.