
3337 points keepamovin | 1 comment | source
hn_throwaway_99 ◴[] No.46213179[source]
This is awesome, but minor quibble with the title - "hallucinates" is the wrong verb here. You specifically asked it to make up a 10-year-in-the-future HN frontpage, and that's exactly what it did. "Hallucinates" means it randomly makes stuff up but presents it as the truth. If someone asked me to write a story for a creative writing class, and I did, you wouldn't say I "hallucinated" the story.
replies(4): >>46213931 #>>46215177 #>>46215634 #>>46219940 #
zwnow ◴[] No.46215177[source]
If someone asked you, you would know the context. LLMs are predictors; no matter the context length, they never "know" what they are doing. They simply predict tokens.
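For the curious, here is a minimal sketch of what "predict tokens" means mechanically. GPT-2 and greedy decoding are illustrative stand-ins, not what any particular production model uses:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Toy illustration: an autoregressive LM repeatedly scores every token in its
    # vocabulary, and we append the single most likely one (greedy decoding).
    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    input_ids = tokenizer("The HN front page in 2035 will", return_tensors="pt").input_ids

    with torch.no_grad():
        for _ in range(20):
            logits = model(input_ids).logits    # shape: (1, seq_len, vocab_size)
            next_id = logits[0, -1].argmax()    # most likely next token
            input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(input_ids[0]))

Real systems sample (temperature, top-p) instead of taking the argmax, but the loop is the same: score, pick, append, repeat.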
replies(1): >>46215398 #
block_dagger ◴[] No.46215398[source]
This common response is pretty uninteresting and misleading. They simply predict tokens? Oh. What does the brain do, exactly?
replies(3): >>46215589 #>>46215987 #>>46216410 #
adammarples ◴[] No.46216410[source]
We don't know how
replies(1): >>46221772 #
digbybk ◴[] No.46221772[source]
I guarantee that once we do know, people will start appending the word "just" to the explanation. Complex behaviors emerge from simple components. Knowing that doesn't make the emergence any less incredible.