> I would love to read a study on why people so readily believe and trust in AI chatbots.
We associate authority with a) quick and b) broad answers. It's like when a radio show patches in "Dr. So N. So," an expert in Whatever from Academia Forever U. They seem to know their stuff because a) they don't say "I don't know, let me get back to you after I've looked into that" and b) they can rattle off a breadth of associated supporting detail.
LLMs simulate this experience by giving broad-ish, confident answers very quickly. We have been trained by life's many experiences to trust these kinds of answers.