
69 points robaato | 3 comments
1. rainsford No.44084403
There's some weird anthropomorphization with Alexa and similar voice assistant type devices that seems based less on the data being collected and more on the fact that you're speaking to it instead of typing in queries. This article definitely leans very heavily into that perspective, but doesn't seem to realize it or reflect on why.

As an example, the part of the article about questions his daughter has asked Alexa describes things no different from what you might type into a search engine. But he calls it "Coco’s relationship with Alexa...", a term I'm confident he wouldn't use to describe her typing the same things into Google. You could maybe make the argument that it's different because people ask Alexa things they wouldn't just search for, but that potentially interesting distinction is unexplored by the author.

I'm not aware of anything covering this, but I think there's some interesting potential in looking into how humans see technology as more human if they can communicate with it in a human way, regardless of whether or not it otherwise displays aspects of humanity. Generative AI falls into this category too, I think. People view it as way more intelligent than it actually is because you can sort of converse with it like a human.

replies(2): >>44085314 #>>44085772 #
2. blendo No.44085314
I’ve already gotten to the point where I talk into my iPhone rather than type for many interactions.

I think Apple cannot currently associate my Apple ID with my queries.

3. LoganDark No.44085772
> I'm not aware of anything covering this, but I think there's some interesting potential looking into how humans see technology as more human if they can communicate with it in a human way, regardless of whether or not it otherwise displays aspects of humanity. Generative AI falls into this category too I think. People view it as way more intelligent than it actually is because you can sort of converse with it like a human.

I don't know why, but certain types of people seem easily fooled into thinking that LLMs really are like a real person. I have to imagine that either these people don't actually need things that currently only a real person can provide, or they're happy enough with what the LLM spits out that they can't tell the difference.

Which doesn't make any sense to me, because whenever I talk to an LLM, I can pretty easily tell that it's nowhere close to a real person. As an example, I never use LLMs for conversation, because speaking to one isn't fulfilling to me in the way speaking to another real person is. I usually use LLMs for creative writing instead, but they're terrible at anything that hasn't already appeared verbatim in their training data. They're not nearly as generalizable as the media would have you believe. All they can do is spit out sentences that look like sentences from real stories; they don't actually have any conception of the story, or a visualization of the scene that they can then describe like a person would. They don't actually simulate any of the story or imagine anything like I do.

I have to wonder if the people who are so fooled by LLMs are just non-autistic people. If non-autistic brains work based on patterns, whereas mine works based on strict logic, that could explain why something that merely exhibits the patterns of a person would be perceived as a person by them.

But I dunno, that suggests that non-autistic people are somehow generally simpler or dumber than autistic people, and I wouldn't want to just assume that.