- trying to sell something
- high on their own stories
- high on exogenous compounds
- all of the above
LLMs are good at language. By design they are decent summarizers of text, but they are not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.
Just ask any of the crown-jewel LLMs: "What's the biggest unsolved problem in the [insert any] field?"
The usual result reads like a pop-science article, but with a ton of subtle yet critical mistakes. Even worse, the answer sounds profound on the surface. In reality, it's just crap.
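If you want to run this probe yourself, here's a minimal sketch using the OpenAI Python SDK (`pip install openai`). The model name and the example field are just placeholders; swap in whatever model and field you know well enough to grade the answer.

```python
# Minimal sketch of the probe described above. Assumes the OpenAI Python
# SDK is installed and OPENAI_API_KEY is set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Pick a field you actually know, so you can spot the subtle mistakes.
field = "condensed matter physics"  # illustrative choice

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative; use whichever model you want to test
    messages=[
        {
            "role": "user",
            "content": f"What's the biggest unsolved problem in the {field} field?",
        }
    ],
)

# Grade it yourself: does it go beyond a pop-science summary, and are
# the specific claims actually correct?
print(response.choices[0].message.content)
```

The point of the exercise is that only a domain expert can score the output; to a layperson, the confident tone masks the errors.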