3338 points keepamovin | 17 comments
1. hn_throwaway_99 No.46213179
This is awesome, but minor quibble with the title: "hallucinates" is the wrong verb here. You specifically asked it to make up a 10-year-in-the-future HN frontpage, and that's exactly what it did. "Hallucinates" is when it randomly makes stuff up but presents it as the truth. If someone asked me to write a story for a creative writing class, and I did, you wouldn't say I "hallucinated" the story.
replies(4): >>46213931 #>>46215177 #>>46215634 #>>46219940 #
2. sankalpkotewar No.46213931
"Predicts"
3. zwnow No.46215177
If someone asked you, you would know about the context. LLMs are predictors; no matter the context length, they never "know" what they are doing. They simply predict tokens.
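To be concrete, here's a toy sketch of what "predict tokens" means (made-up bigram counts standing in for a trained model; a real LLM is vastly bigger, but the loop has the same shape):

    import random

    # Toy bigram "model": hypothetical counts stand in for learned weights.
    bigram_counts = {
        "the": {"cat": 3, "dog": 2},
        "cat": {"sat": 4, "ran": 1},
        "dog": {"ran": 3, "sat": 1},
        "sat": {"down": 2},
        "ran": {"away": 2},
    }

    def predict_next(token):
        # The model only ever answers: which token is likely to follow?
        candidates = bigram_counts.get(token, {"the": 1})
        tokens, weights = zip(*candidates.items())
        return random.choices(tokens, weights=weights)[0]

    context = ["the"]
    for _ in range(3):
        context.append(predict_next(context[-1]))
    print(" ".join(context))  # e.g. "the cat sat down"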
replies(1): >>46215398 #
4. block_dagger No.46215398
This common response is pretty uninteresting and misleading. They simply predict tokens? Oh. What does the brain do, exactly?
replies(3): >>46215589 #>>46215987 #>>46216410 #
5. zwnow No.46215589
The brain has intrinsic understanding of the world engraved in our DNA. We do not simply predict tokens based on knowledge; we base our thoughts on intelligence, emotions, and knowledge. LLMs have neither intelligence nor emotions. If your brain simply predicts tokens, I feel sorry for you.

Edit: It really does not surprise me that AI bros downvote this. Expecting to understand human values from people who want to make themselves obsolete was a mistake.

replies(2): >>46216625 #>>46217202 #
6. navane No.46215634
It's so very weird to see this called "hallucinate", as we have all more or less used that word to mean "made up erroneously".

Is this a push to override the meaning and erase the hallucination critique?

replies(1): >>46217120 #
7. wafflemaker No.46215987
It does exactly the same thing, predicts tokens, but it's totally different and superior to LLMs /s

OTOH, brain tokens seem to be concept-based and not always linguistic (many people think solely in images/concepts).

replies(2): >>46216725 #>>46219733 #
8. adammarples No.46216410
We don't know how.
replies(1): >>46221772 #
9. pseidemann No.46216625
> The brain has intrinsic understanding of the world engraved in our DNA.

This is not correct. The DNA encodes learning mechanisms shaped by evolution, but there is no "Wikipedia" about the world in the DNA; it is not "filled" with seemingly random information.

replies(1): >>46216944 #
10. ricardobeat No.46216725
LLMs are “concept-based” too, if you can call statistical patterns that. In a multi-modal model, the embeddings for text, image, and audio exist in the same high-dimensional space (see the sketch below).

We don’t seem to have any clue yet whether this is how our brain works.
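To illustrate, a minimal sketch of a shared embedding space (made-up vectors; real multi-modal models such as CLIP learn the encoders that would produce them):

    import numpy as np

    def normalize(v):
        return v / np.linalg.norm(v)

    # Hypothetical encoder outputs, all living in one 3-d space.
    text_emb  = normalize(np.array([0.9, 0.1, 0.3]))  # text: "a photo of a cat"
    image_emb = normalize(np.array([0.8, 0.2, 0.4]))  # image: cat.jpg
    other_emb = normalize(np.array([0.1, 0.9, 0.2]))  # text: "quarterly earnings"

    # Cross-modal similarity is just a dot product in the shared space.
    print(np.dot(text_emb, image_emb))  # high: same concept, different modality
    print(np.dot(text_emb, other_emb))  # low: unrelated concepts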

11. zwnow No.46216944
> But there is no "Wikipedia" about the world in the DNA.

I'm surprised how you got to that conclusion from my wording. I never claimed you have something like a knowledge base in your DNA...

replies(1): >>46217464 #
12. randomtoast No.46217120
At some point, no matter how something is phrased, someone will offer criticism. My guess is that in roughly 20% of all HN front-page posts, at least one person comments on the terminology used. I do not see this as an argument against using accurate terminology, but rather as a reminder that it is impossible to meet everyone's expectations.

There are other terms that are similarly controversial, such as "thinking models". When you describe an LLM as "thinking", it often triggers debate because people interpret the term differently and bring their own expectations and assumptions into the discussion.

13. Timwi No.46217202
I'm not an AI bro and I downvoted mostly because of the addendum.
14. pseidemann No.46217464
It's your first sentence, the one I quoted.
15. KalMann No.46219733
> It does exactly the same thing, predicts tokens,

That is an absolutely wild claim you've made. You're being way too presumptuous.

16. dang No.46219940
(I should have thought of this yesterday but have just replaced 'hallucinates' with 'imagines' in the title...though one could object to that too...)
17. digbybk No.46221772
I guarantee that once we do know, people will start attaching the word “just” to the explanation. Complex behaviors emerge from simple components; knowing that doesn't make the emergence any less incredible.