Maybe it's a genuine problem with AI that it can only hold one idea, one possible version of reality, at any given time. Though I guess many humans have the same issue. I first heard this idea from Peter Thiel, describing what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems making important decisions be able to hold multiple conflicting ideas without ever fully committing to one or the other. Conflicting ideas create decision paralysis of varying degrees, which is useful at times. It seems like an important feature to build into AI.
It's interesting that LLMs produce each output token as a probability distribution over the vocabulary, but it appears that in order to generate the next token (which is itself expressed as a distribution), the model has to commit to one specific token as the previous one. It can't just build more probabilities on top of previous probabilities. It has to collapse each token distribution down to a single choice as it goes?
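That collapse is visible in any autoregressive sampling loop. Here's a minimal sketch using a toy stand-in for a real model's forward pass (the `toy_logits` function and vocabulary size are made up for illustration, not any real model's API): each step computes a full distribution over the vocabulary, but only one sampled token id gets appended to the context and fed back in.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB_SIZE = 50


def toy_logits(token_ids):
    """Hypothetical stand-in for an LLM forward pass: returns
    unnormalized scores over the vocabulary for the next token."""
    # Deterministic pseudo-logits derived from the context, for illustration only.
    seed = hash(tuple(token_ids)) % (2**32)
    return np.random.default_rng(seed).normal(size=VOCAB_SIZE)


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


context = [1, 7, 42]  # prompt token ids
for _ in range(5):
    probs = softmax(toy_logits(context))       # full distribution over next token
    next_id = rng.choice(VOCAB_SIZE, p=probs)  # the "collapse": commit to one token
    context.append(next_id)                    # only the chosen id is fed back in
    print(next_id, f"p={probs[next_id]:.3f}")
```

Decoding strategies like beam search soften this a little by keeping a handful of candidate continuations alive in parallel, but even those eventually commit to a single sequence; the distribution never survives as an input to the next step.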