A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
low_tech_punk No.44485245
The anthropomorphic view of LLMs is a much better representation and compression for most kinds of discussion and communication. A purely mathematical view is accurate, but it isn't productive for the general public's discourse.

Consider a legal-systems analogy, at the risk of a lossy domain transfer: laws are not written in lambda calculus. Why not?

Generalizing to the social sciences and humanities, the goal shouldn't be finding the quantitative truth, but rather understanding the social phenomenon through a consensual "language" as determined by society. In that case, the anthropomorphic description of the LLM may gain validity and effectiveness as adoption grows over time.

replies(2): >>44485265 >>44485424
andyferris No.44485424
I've personally described the "stochastic parrot" model to laypeople who were worried about AI, and they came away much more relaxed about it doing something "malicious". They seemed to understand the difference between "trained at roleplay" and "consciousness".

I don't think we need to simplify it to the point of treating it as sentient to get the public to interact with it successfully. That framing causes way more problems than it solves.

replies(1): >>44485460
SpicyLemonZest No.44485460
Am I misunderstanding what you mean by "malicious"? It sounds like the stochastic parrot model wrongly convinced the laypeople you were talking to that they don't need to worry about LLMs doing bad things. That's definitely been my experience: the people who tell me the most about stochastic parrots are the same ones who tell me it's absurd to worry about AI-powered disinformation or AI-powered scams.