←back to thread

333 points mooreds | 6 comments | | HN request time: 0.667s | source | bottom
Show context
raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly definined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design but not good at logic. Very poor at spatial reasoning and as a result poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article but with ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
richardw ◴[] No.44485483[source]
They’re great at working with the lens on our reality that is our text output. They are not truth seekers, which is necessarily fundamental to every life form from worms to whales. If we get things wrong, we die. If they get them wrong, they earn 1000 generated tokens.
replies(1): >>44486058 #
jhanschoo ◴[] No.44486058[source]
Why do you say that LLMs are not truth seekers? If I express an informational query not very well, the LLM will infer what I mean by it and address the possible well-posed information queries that I may have intended that I did not express well.

Can that not be considered truth-seeking, with the agent-environment boundary being the prompt box?

replies(3): >>44486100 #>>44486263 #>>44487215 #
1. chychiu ◴[] No.44486100[source]
They are not intrinsically truth seekers, and any truth seeking behaviour is mostly tuned during the training process.

Unfortunately it also means it can be easily undone. E.g. just look at Grok in its current lobotomized version

replies(1): >>44486253 #
2. jhanschoo ◴[] No.44486253[source]
> They are not intrinsically truth seekers

Is the average person a truth seeker in this sense that performs truth-seeking behavior? In my experience we prioritize sharing the same perspectives and getting along well with others a lot more than a critical examination of the world.

In the sense that I just expressed, of figuring out the intention of a user's information query, that really isn't a tuned thing, it's inherent in generative models from possessing a lossy, compressed representation of training data, and it is also truth-seeking practiced by people that want to communicate.

replies(2): >>44486864 #>>44487321 #
3. imbnwa ◴[] No.44486864[source]
>Is the average person a truth seeker in this sense that performs truth-seeking behavior?

Absolutely

4. graealex ◴[] No.44487321[source]
You are completely missing the argument that was made to underline the claim.

If ChatGPT claims arsenic to be a tasty snack, nothing happens to it.

If I claim the same, and act upon it, I die.

replies(2): >>44487407 #>>44488110 #
5. cornel_io ◴[] No.44487407{3}[source]
If ChatGPT claims arsenic to be a tasty snack, OpenAI adds a p0 eval and snuffs that behavior out of all future generations of ChatGPT. Viewed vaguely in faux genetic terms, the "tasty arsenic gene" has been quickly wiped out of the population, never to return.

Evolution is much less brutal and efficient. To you death matters a lot more than being trained to avoid a response does to ChatGPT, but from the point of view of the "tasty arsenic" behavior, it's the same.

6. jhanschoo ◴[] No.44488110{3}[source]
You are right. I have ignored completely the context in the phrasing "truth seeker" was made, given my own wrong interpretation to the phrase, and I in fact agree with the comment I was responding to that they "work with the lens on our reality that is our text output".