ChatGPT can produce output that sounds very much like a person, albeit often an obviously computerized person. The typical layperson doesn't know that this is merely an emulation of how text is produced, not actual cognition.
More than once I've had to explain to people who are worried about what AI could represent that current generative AI models are effectively just text autocomplete, only a billion times more complex, and that they don't actually have any capacity to think or reason (even though they often sound like they do).
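To make "text autocomplete at scale" concrete, here's a toy sketch of the loop every autoregressive language model runs: compute a probability distribution over the next token, sample one, append it, repeat. The "model" here is deliberately fake (random logits over a made-up six-word vocabulary); only the shape of the loop is the point.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "."]

def next_token_logits(context: list[str]) -> np.ndarray:
    # A real LLM computes these scores with billions of learned parameters;
    # this stand-in just hashes the context so the loop has something to run.
    fake = np.random.default_rng(abs(hash(tuple(context))) % 2**32)
    return fake.normal(size=len(vocab))

def generate(prompt: list[str], n_tokens: int) -> list[str]:
    out = list(prompt)
    for _ in range(n_tokens):
        logits = next_token_logits(out)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax -> distribution
        out.append(rng.choice(vocab, p=probs))         # sample the next token
    return out

print(" ".join(generate(["the", "cat"], 6)))
```

Nothing in that loop plans, reasons, or checks anything against the world; scale the middle function up enormously and you get something that sounds like it does.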
It also doesn't help that any sort of "machine learning" is now being referred to as "AI" for buzzword/marketing purposes, muddying the waters even further.
As a mere software engineer who's made a few (pre-transformer) AI models, I can't tell you what "actual cognition" is in a way that differentiates it from "here's a huge bunch of mystery linear algebra that was loosely inspired by a toy model of how neurons work".
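For what it's worth, the "toy model of how neurons work" is just this: a weighted sum of inputs pushed through a nonlinearity. Stack enough of these into matrices and you get the mystery linear algebra. The weights below are arbitrary numbers picked for illustration, nothing more.

```python
import numpy as np

def neuron(x: np.ndarray, w: np.ndarray, b: float) -> float:
    # ReLU(w . x + b): the entire "toy model" of a neuron
    return max(0.0, float(w @ x + b))

x = np.array([0.5, -1.0, 2.0])  # "activations" arriving from other neurons
w = np.array([0.8, 0.1, 0.3])   # learned connection weights
print(neuron(x, w, b=0.05))     # 0.8*0.5 + 0.1*(-1.0) + 0.3*2.0 + 0.05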
I also can't tell you whether qualia are necessary for "actual cognition".
(And that's despite the fact that LLMs are definitely not thinking like humans, being on the order of at least a thousand times less complex by parameter count. I'd agree that if there is something it's like to be an LLM, 'human' isn't it, and their responses make a lot more sense if you model them as literal morons that spent 2.5 million years reading the internet than as a normal human with Wikipedia search.)
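The "thousand times less complex" figure is back-of-envelope arithmetic along these lines; both numbers below are commonly cited order-of-magnitude ballparks, not measurements.

```python
# Human brain: roughly 1e14 synapses (the usual ballpark figure).
# Large LLM:   roughly 1e11 parameters (~100B, typical of big models).
human_synapses = 1e14
llm_parameters = 1e11

print(f"ratio: ~{human_synapses / llm_parameters:,.0f}x")  # ~1,000x
```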