333 points by mooreds | 1 comment

raspasov ◴[] No.44485275[source]
Anyone who claims that a poorly defined concept, AGI, is right around the corner is most likely:

- trying to sell something

- high on their own stories

- high on exogenous compounds

- all of the above

LLMs are good at language. They are OK summarizers of text by design but not good at logic. They are very poor at spatial reasoning and, as a result, poor at connecting concepts together.

Just ask any of the crown jewel LLM models "What's the biggest unsolved problem in the [insert any] field".

The usual result is a pop-science-level article, but with a ton of subtle yet critical mistakes! Even worse, the answer sounds profound on the surface. In reality, it's just crap.

replies(12): >>44485480 #>>44485483 #>>44485524 #>>44485758 #>>44485846 #>>44485900 #>>44485998 #>>44486105 #>>44486138 #>>44486182 #>>44486682 #>>44493526 #
0x20cowboy ◴[] No.44486682[source]
LLMs are a compressed version of their training dataset with a text-based interactive search function.
replies(4): >>44486893 #>>44487019 #>>44487057 #>>44488479 #
echelon ◴[] No.44487057[source]
LLMs are useful in that respect. As are media diffusion models. They've compressed the physics of light, the rules of composition, the structure of prose, the knowledge of the internet, etc. and made it infinitely remixable and accessible to laypersons.

AGI, on the other hand, should really stand for Aspirationally Grifting Investors.

Superintelligence is not around the corner. OpenAI knows this and is trying to become a hyperscaler / Mag7 company with the foothold they've established and the capital that they've raised. Despite that, they need a tremendous amount of additional capital to will themselves into becoming the next new Google. The best way to do that is to sell the idea of superintelligence.

AGI is a grift. We don't even have a definition for it.

replies(4): >>44487277 #>>44489791 #>>44492231 #>>44492891 #
EGreg ◴[] No.44487277[source]
I am not an expert, but I have a serious counterpoint.

While training LLMs to replicate human output, intelligence and understanding EMERGE in the internal layers.

It seems trivial to run unsupervised training on scientific data, such as star movements, and discover closed-form analytic models for those movements. Deriving Kepler's laws and Newton's equations should be fast and straightforward, and by that afternoon you'd have far more elaborate models with 500+ variables that humans would struggle to understand but that explain the data.
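
(To make the curve-fitting half of that concrete, here is a toy sketch, my own illustration rather than anything the commenter ran: it recovers the exponent in Kepler's third law by ordinary regression on rough textbook orbital values. It shows only the "find a closed-form law in data" step, not unsupervised LLM training.)

    # Toy sketch: fit a power law T = C * a^k to planetary data and check
    # that the recovered exponent matches Kepler's third law (k = 1.5).
    import numpy as np

    # Approximate semi-major axis (AU) and orbital period (years) for
    # Mercury, Venus, Earth, Mars, Jupiter, Saturn (rough textbook values).
    a = np.array([0.387, 0.723, 1.000, 1.524, 5.203, 9.537])
    T = np.array([0.241, 0.615, 1.000, 1.881, 11.86, 29.46])

    # Linear regression in log-log space: log T = k * log a + c.
    k, c = np.polyfit(np.log(a), np.log(T), 1)
    print(f"fitted exponent k = {k:.3f} (Kepler's third law predicts 1.5)")

Running it gives an exponent of about 1.50, i.e. T^2 proportional to a^3. The open question is whether a model can get from raw observations to that kind of law without a human framing the regression for it.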

AGI is what, Artificial General Intelligence? What exactly do we mean by general? Mark Twain said "we are all idiots, just on different subjects". These LLMs are already better than 90% of humans at understanding any subject, in the sense of answering questions about that subject and carrying on meaningful and reasonable discussion. Yes, occasionally they stumble or make a mistake, but overall it is very impressive.

And remember, if we care about practical outcomes: as soon as ONE model can do something, ALL COPIES OF IT CAN. So you can reliably get unlimited agents that are better than 90% of humans at understanding every subject. That is a very powerful baseline for replacing most jobs, isn't it?

replies(3): >>44487716 #>>44488166 #>>44489665 #
1. ◴[] No.44487716[source]