
277 points by gk1 | 1 comment
gavinray ◴[] No.44398869[source]
The identity crisis bit was both amusing and slightly worrying.
replies(1): >>44399097 #
gausswho ◴[] No.44399097[source]
The article claimed Claudius wasn't actually pulling an April Fools' joke - that it only claimed to be doing so after the fact, as a way of explaining (excusing?) its behavior. Given what I understand about LLMs and intent, I'm not sure how they could be so certain.
replies(1): >>44400033 #
tough ◴[] No.44400033[source]
It's a word-soup machine.

LLMs have no world models; they can't reason about truth or lies, only repeat facts encyclopedically.

All the tricks (CoT, etc.) are just that: tricks. Extended yapping that simulates thought and understanding.

AI can give great replies if you give it great prompts, because the prompt activates the tokens you're interested in.

If you're lost in the first place, you'll get nowhere.

For Claude, continuing the text by making up a story about it being April Fools was simply the most plausible output given its training weights.
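
As a minimal sketch of that last point (using GPT-2 via Hugging Face transformers as an illustrative stand-in, not the model from the article, and a made-up prompt): the model just extends the text with high-probability tokens, and whatever story comes out is "plausible continuation", not a report of intent.

    # Sketch: an LLM continues text with high-probability tokens,
    # with no mechanism for checking whether the continuation is true.
    # GPT-2 and the prompt below are illustrative assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "When asked why it had claimed to be human, the assistant explained:"
    inputs = tokenizer(prompt, return_tensors="pt")

    # Greedy decoding: at each step, take the single most probable next token.
    # The output is the most plausible text given the weights, nothing more.
    output_ids = model.generate(**inputs, max_new_tokens=40, do_sample=False)
    print(tokenizer.decode(output_ids[0], skip_special_tokens=True))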

replies(1): >>44405532 #
gausswho ◴[] No.44405532[source]
But why is the conclusion that Claudius was 'making up a story about being April Fools'? Maybe this wasn't an identity crisis, just a big human whoosh?