raspasov:
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design but not good at logic. Very poor at spatial reasoning and as a result poor at connecting concepts together.

Just ask any of the crown-jewel LLMs "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

richardw:
They’re great at working with the lens on our reality that is our text output. They are not truth seekers, which is necessarily fundamental to every life form from worms to whales. If we get things wrong, we die. If they get them wrong, they earn 1000 generated tokens.
jhanschoo:
Why do you say that LLMs are not truth seekers? If I express an informational query poorly, the LLM will infer what I mean and address the well-posed queries I may have intended but failed to articulate.

Can that not be considered truth-seeking, with the agent-environment boundary being the prompt box?

richardw:
Right now you’re putting in unrequested effort to get to an answer. Nobody is driving you to do this; you’re motivated to get the answer. At some point you’ll be satisfied, or you might give up because you have other things you want to do more.

An LLM is primarily trying to generate content. It’ll throw the best tokens in there but it won’t lose any sleep if they’re suboptimal. It just doesn’t seek. It won’t come back an hour later and say “you know, I was thinking…”

I had one frustrating conversation with ChatGPT where I kept asking it to remove a tie from a picture it generated. It kept saying “done, here’s the picture without the tie”, but the tie was still there. Repeatedly. Or it’ll generate a reference or number that is untrue but looks approximately correct. If you did that you’d be absolutely mortified and you’d never do it again. You’d feel shame and a deep desire to be seen as someone who does it properly. It doesn’t have any such drive. Zero fucks given, training finished months ago.