> I'm not aware of anything covering this, but I think there's some interesting potential looking into how humans see technology as more human if they can communicate with it in a human way, regardless of whether or not it otherwise displays aspects of humanity. Generative AI falls into this category too I think. People view it as way more intelligent than it actually is because you can sort of converse with it like a human.
I don't know why, but certain kinds of people seem easily fooled into thinking that LLMs really are like real people. I have to imagine that either these people don't actually need anything that only a real person can currently provide, or they're happy enough with what the LLM spits out that they can't tell the difference.
Which doesn't make any sense to me, because whenever I talk to an LLM, I can easily tell that it's nowhere close to a real person. For example, I never use LLMs for conversation, because speaking to one isn't fulfilling to me the way speaking to a real person is. I mostly use LLMs for creative writing instead, and they're terrible at anything that doesn't closely match what's already in their training data. They're not nearly as generalizable as the media would have you believe. All they can do is spit out sentences that look like sentences from real stories; they don't have any actual conception of the story, or any visualization of the scene that they could then describe the way a person would. They don't simulate the story or imagine anything the way I do.
I have to wonder if the people who are so fooled by LLMs are just non-autistic. If non-autistic brains work on pattern-matching, while mine works on strict logic, that could explain why something that displays the patterns of a person would be perceived by them as a person.
But I dunno, that suggests non-autistic people are somehow generally simpler or dumber than autistic people, and I wouldn't want to just assume that.