Large Language Models (LLMs), lacking true comprehension of the underlying concepts, break text into units called tokens and map each token to a numerical vector. Using a prediction engine together with the user's input, they attempt to predict the most likely next token in the sequence. As such, it's all hallucinations.
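
To make the tokenize-then-predict loop concrete, here is a minimal sketch using GPT-2 via the Hugging Face transformers library. The choice of GPT-2 and the example prompt are illustrative assumptions, not anything specific to the argument above; the same pattern applies to any autoregressive LLM.

```python
# Minimal sketch of the tokenize-then-predict loop, assuming GPT-2
# via Hugging Face transformers (pip install transformers torch).
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Tokenize: the text is split into tokens, each mapped to an integer ID
# that indexes into the model's vocabulary of numerical vectors.
prompt = "The capital of France is"  # hypothetical example prompt
inputs = tokenizer(prompt, return_tensors="pt")
print(inputs["input_ids"])  # a tensor of integer token IDs

# Predict: the model scores every token in its vocabulary; the
# highest-scoring one is the "most likely next token". There is no
# step where the model checks its answer against reality.
with torch.no_grad():
    logits = model(**inputs).logits

next_token_id = torch.argmax(logits[0, -1]).item()
print(tokenizer.decode([next_token_id]))
```

Note that the decoded output is just the statistically favored continuation; whether it happens to be factually correct is incidental to the mechanism, which is the point being made here.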