ChatGPT can produce output that sounds very much like a person, albeit often an obviously computerized person. The typical layperson doesn't know that this is merely the emulation of text formation, and not actual cognition.
I've explained to people who are worried about what AI could represent that current generative AI models are effectively just text autocomplete, but a billion times more complex, and that they don't actually have any capacity to think or reason (even though they often sound like they do).
It also doesn't help that any sort of "machine learning" is now being referred to as "AI" for buzzword/marketing purposes, muddying the waters even further.
I don’t consider myself an AI doomer by any means, but I also don’t find arguments of the flavor “it just predicts the next word, no need to worry” to be convincing. It’s not like Hitler had Einstein-level intellect (and it’s also not clear that these systems won’t be able to reach Einstein-level intellect in the future either). Similarly, Covid certainly doesn't have consciousness but was dangerous. And a chimpanzee that is billions of times more sophisticated than typical chimps would be concerning. Things don’t have to be exactly like us to pose a threat.
It's definitely not dangerous in the sense of reaching true intelligence/consciousness that would be a threat to us or force us to face the ethics of whether AI deserves dignity, freedom, etc.
It's very dangerous in the sense that it will be just "good enough" to replace human labor, so that we all end up with shittier customer service, education, medical care, etc., so that the top 0.1% can get richer.
And you're right, it's also dangerous in the sense that responsibility for evil acts will be laundered through it.