
334 points by mooreds | 1 comment
raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design, but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.
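
If you want to script that experiment yourself, here is a minimal sketch assuming the OpenAI Python SDK; the model name, the field, and the API key setup are placeholder choices on my part, not anything specific to the claim above:

    # Ask a model for the biggest unsolved problem in an arbitrary field,
    # then judge the answer yourself. Assumes the OpenAI Python SDK and an
    # OPENAI_API_KEY in the environment; model name and field are placeholders.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    field = "condensed matter physics"  # swap in any field
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": f"What's the biggest unsolved problem in the {field} field?",
        }],
    )
    print(response.choices[0].message.content)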

1. Davidzheng ◴[] No.44486138[source]
I agree with the last part, but I think that criticism applies to many humans too, so I don't find it compelling at all.

I also think that by the original definition (better than the median human at almost all tasks) it's close, and I think in the next 5 years it will be competitive with professionals at all nonphysical tasks (physical tasks could take 5-10 years, idk). I could be high on my own stories, but not the rest.

LLMs are good at language, yes, but I think being good at language requires some level of intelligence. I find the notion that they are bad at spatial reasoning extremely flawed. They are much better than all previous models, some of which were designed specifically for spatial reasoning. Are they worse than humans? Yes, but the fact that you can put newer models on robots and they just work means they are quite good by AI standards and rapidly improving.