
Death by AI

(davebarry.substack.com)
583 points by ano-ther | 1 comment
jongjong ◴[] No.44620146[source]
Maybe it's the a genuine problem with AI that it can only hold one idea, one possible version of reality at any given time. Though I guess many humans have the same issue. I first heard of this idea from Peter Thiel when he described what he looks for in a founder. It seems increasingly relevant to our social structure that the people and systems who make important decisions are able to hold multiple conflicting ideas without ever fully accepting one or the other. Conflicting ideas create decision paralysis of varying degrees which is useful at times. It seems like an important feature to implement into AI.

It's interesting that LLMs produce each output token as a probability distribution, but it appears that in order to generate the next token (itself expressed as a distribution), the model has to pick one specific word as the previous token. It can't just build new probabilities on top of previous probabilities; it has to collapse each token's distribution into a single choice as it goes?
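That collapse is essentially what autoregressive sampling does: the model emits a full distribution over the vocabulary at every step, but one token has to be sampled from it and appended to the context before the next distribution can be computed. A minimal toy sketch (the "model" here is random noise, purely illustrative, not any real LLM's internals):

    # Toy sketch of autoregressive sampling. At each step there is a full
    # probability distribution over the vocabulary, but exactly one token
    # must be chosen and appended to the context before the next step.
    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["yes", "no", "maybe", "."]

    def fake_model(context):
        # Stand-in for a real forward pass: returns logits over the vocab.
        return rng.normal(size=len(vocab))

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    context = ["The", "answer", "is"]
    for _ in range(3):
        probs = softmax(fake_model(context))  # full distribution exists here...
        token = rng.choice(vocab, p=probs)    # ...but it is collapsed to one token
        context.append(token)                 # only the chosen token is fed back
    print(" ".join(context))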

replies(1): >>44620411 #
herval ◴[] No.44620411[source]
I'm not sure that's the case, and it's quite easy to demonstrate: ask an LLM any question, then express doubt about its response, and it will change its mind and offer a different interpretation. That's an indication it holds multiple interpretations, depending on how you ask; otherwise it would dig in.

You can also see decision paralysis in action if you implement chain-of-thought (CoT) prompting: it's common to see the model "ponder" a bunch of possible options before picking one.
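For example, a CoT-style prompt that explicitly asks for options before a decision (sketch only; `generate` is a placeholder for whatever model call you actually use, not a real library function):

    # Hedged sketch of a chain-of-thought (CoT) style prompt. `generate` is a
    # placeholder, not a real API.
    def generate(prompt: str) -> str:
        raise NotImplementedError("plug in your own LLM call here")

    cot_prompt = (
        "Question: which data structure gives fast membership tests?\n"
        "Before answering, list the options you are considering and briefly\n"
        "weigh each one. Then commit to exactly one and explain why.\n"
    )
    # With a real model, the reply typically enumerates several candidates
    # ("a list... a sorted array... a hash set...") before settling on one;
    # that's the 'pondering' described above.
    # print(generate(cot_prompt))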

replies(1): >>44640512 #
1. jongjong ◴[] No.44640512[source]
That's an interesting framing but I'd still contend that an LLM doesn't seem to hold both ideas 'at the same time' because it will answer confidently in both cases. It depends on the input; it will go one way or the other. It doesn't seem to consider and weigh up all of its knowledge when answering.