
197 points baylearn | 23 comments
1. bestouff ◴[] No.44471877[source]
Are there people here on HN who believe in AGI "soonish"?
replies(5): >>44471902 #>>44471982 #>>44472003 #>>44472071 #>>44472107 #
2. BriggyDwiggs42 ◴[] No.44471902[source]
I could see 2040 or so being very likely. Not off transformers though.
replies(1): >>44472185 #
3. bdhcuidbebe ◴[] No.44471982[source]
There are usually some enlightened laymen in this kind of topic.
replies(1): >>44476554 #
4. PicassoCTs ◴[] No.44472003[source]
St. Fermi says no
5. impossiblefork ◴[] No.44472071[source]
I might, depending on the definition.

Some kind of verbal-only AGI that can solve almost all mathematical problems humans come up with that can be solved in half a page. I think that's achievable somewhere in the near term, 2-7 years.

replies(2): >>44472097 #>>44473375 #
6. deergomoo ◴[] No.44472097[source]
Is that “general” though? I’ve always taken AGI to mean general to any problem.
replies(2): >>44472144 #>>44472261 #
7. Davidzheng ◴[] No.44472107[source]
What's your definition? AGI's original definition is median-human performance across almost all fields, which I believe is basically achieved. If superhuman (better than the best expert), I expect <2030 for all non-robotic tasks and <2035 for all tasks.
replies(3): >>44472644 #>>44473009 #>>44473060 #
8. Touche ◴[] No.44472144{3}[source]
Yes, general means you can present it a new problem that there is no data on, and it can become an expert on that problem.
9. serf ◴[] No.44472185[source]
Via what paradigm, then? What out there gives high enough confidence to set a date like that?
replies(1): >>44475952 #
10. impossiblefork ◴[] No.44472261{3}[source]
I suppose not.

Things I think will be hard for LLMs to do, which some humans can: you get handed 500 pages of Geheimschreiber-encrypted telegraph traffic and infinite paper, and you have to figure out how the cryptosystem works and how to decrypt the traffic. I don't think that can happen. I think it requires a highly developed pattern-recognition ability together with an ability to not get lost, both of which LLM-type things will probably continue to lack for a long time.
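
For a sense of what that pattern-recognition work involves, here is a minimal Python sketch of the index of coincidence, the kind of crude statistic a cryptanalyst might run first to guess a keystream period. The max_period range is an illustrative assumption, not anything specific to the Geheimschreiber:

    from collections import Counter

    # Minimal sketch: an index-of-coincidence scan for a periodic keystream,
    # the kind of crude statistic a cryptanalyst might run first.

    def index_of_coincidence(column):
        """Chance that two randomly chosen symbols in `column` match."""
        n = len(column)
        if n < 2:
            return 0.0
        counts = Counter(column)
        return sum(c * (c - 1) for c in counts.values()) / (n * (n - 1))

    def scan_periods(ciphertext, max_period=60):
        """Average IoC of the columns under each candidate key period.
        A period whose columns look language-like (high IoC) is a lead."""
        return {
            p: sum(index_of_coincidence(ciphertext[i::p]) for i in range(p)) / p
            for p in range(1, max_period + 1)
        }

Spotting which of those statistics matters, and not getting lost across 500 pages of them, is the part I doubt LLMs will manage.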

But if they could do maths more fully, then pretty much all carefully defined tasks would be within reach if they weren't too long.

With regard to what Touche brings up in the other response to your comment, I think it might be possible to get them to read up on things: go through some material, invent problems, and try to solve them. I think this is something that could be done today, with today's models and no real special innovation, but it just hasn't been made into a service yet. This of course doesn't address that criticism, though, since it assumes the availability of data.
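
A minimal sketch of that read-up loop, where llm() is a hypothetical stand-in for whatever model API one would actually use:

    # Hypothetical read-up loop: invent problems from a text, attempt them,
    # then self-grade. llm() is a stand-in, not any real API.

    def llm(prompt):
        raise NotImplementedError("plug in an actual model call here")

    def self_study(source_text, n_problems=5):
        results = []
        for _ in range(n_problems):
            problem = llm("Read this material and pose one hard exercise on it:\n"
                          + source_text)
            attempt = llm("Solve this exercise, showing your work:\n" + problem)
            grade = llm("Using the material below, answer YES or NO with a reason:"
                        " is this solution correct?\n\nMaterial:\n" + source_text
                        + "\n\nExercise:\n" + problem + "\n\nAttempt:\n" + attempt)
            results.append({"problem": problem, "attempt": attempt, "grade": grade})
        return results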

11. jltsiren ◴[] No.44472644[source]
Your "original definition" was always meaningless. A "Hello, World!" program is as capable as the median human in most jobs. On the other hand, if the benchmark is what the median human can reasonably become (a professional with decades of experience), we are still far from there.
replies(1): >>44472836 #
12. Davidzheng ◴[] No.44472836{3}[source]
I agree with the second part but not the first (far in capability, not in timeline). I think you underestimate the distance between a median human without training and "Hello, World!" in many economically meaningful jobs.
13. GolfPopper ◴[] No.44473009[source]
A "median human" can run a web search and report back on what they found without making stuff up, something I've yet to find an LLM capable of doing reliably.
replies(2): >>44473597 #>>44476492 #
14. gnz11 ◴[] No.44473060[source]
How are you coming to the conclusion that "median human" is "basically achieved"? Current AI has no means of understanding and synthesizing new ideas the way a human would. It's all generative.
replies(1): >>44473626 #
15. whiplash451 ◴[] No.44473375[source]
What makes you think that this could be achieved in that time frame? All we seem to have for now are LLMs that can solve problems they’ve learned by heart (or neighboring problems).
replies(1): >>44475522 #
16. Davidzheng ◴[] No.44473597{3}[source]
I bet you that median humans make up a nontrivial number of things. Humans misremember all the time. If you ask for only quotes, LLMs can also do this without problems (I use o3 for search over Google).
replies(1): >>44478627 #
17. Davidzheng ◴[] No.44473626{3}[source]
Synthesizing new ideas: to express an idea in our language, you basically need some new combination of existing building blocks; it just sometimes happens that the building blocks are low-level enough, and the combination esoteric enough, that it reads as genuinely new. It's a spectrum again. I think current models are in fact quite capable of combining existing ideas and building blocks in new ways (this is how human innovation also happens). Most of my evidence comes from asking newer models (o3/gemini-2.5-pro) research-level mathematics questions which do not appear in the existing literature but are of course connected with it.

So I believe these arguments from fundamental distinctions all fail; the question is how new the AI contributions are. There are of course still no theoretical breakthroughs in mathematics from AI (though biology could be close!). Also, I think the AIs do have understanding, but to be fair the only way we can test that is on tricky questions, and I think the results support my side. Some of these questions have interpretations which are not testable, so I don't want to argue about those.

18. impossiblefork ◴[] No.44475522{3}[source]
Transformers can actually learn pretty difficult manipulations, even how to calculate difficult integrals, so I don't agree that they can only solve problems they've learned by heart.

The reason I believe it can be achieved in this time frame is that I believe that you can do much more with non-output tokens than is currently being done.
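
As a hypothetical illustration only, the simplest version of "doing more with non-output tokens" is a scratchpad whose tokens the user never sees; llm() and the marker below are stand-ins, not any real API:

    # Hypothetical sketch: reasoning tokens are generated, then discarded
    # before the answer is returned. llm() is a stand-in, not any real API.

    def llm(prompt):
        raise NotImplementedError("plug in an actual model call here")

    MARKER = "FINAL ANSWER:"

    def answer_with_scratchpad(question):
        raw = llm("Reason step by step in a scratchpad, then write '" + MARKER
                  + "' followed by only the answer.\n\n" + question)
        # Everything before the marker is non-output reasoning; discard it.
        return raw.split(MARKER, 1)[-1].strip()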

19. BriggyDwiggs42 ◴[] No.44475952{3}[source]
While we don’t know an enormous amount about the brain, we do know a pretty good bit about individual neurons, and I think it’s a good guess, given current science, to say that a solidly accurate simulation of a large number of neurons would lead to a kind of intelligence loosely analogous to that found in animals. I’d completely understand if you disagree, but I consider it a good guess.

If that’s the case, then the gulf between current techniques and what’s needed seems knowable. A means of approximating continuous time between neuron firing, time-series recognition in inputs, learning behavior on inputs prior to actual neuron firing (akin to behavior of dendrites), etc. are all missing functionalities in current techniques. Some or all of these missing parts of biological neuron behavior might be needed to approximate animal intelligence, but I think it’s a good guess that these are the parts that are missing.
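
For concreteness, a minimal sketch of a leaky integrate-and-fire neuron, the usual starting point for simulations like this; all parameter values are illustrative:

    import numpy as np

    # Minimal leaky integrate-and-fire neuron; discrete Euler steps stand in
    # for the continuous-time dynamics mentioned above.

    def simulate_lif(current, dt=1e-4, tau=0.02, v_rest=-0.065,
                     v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
        """Euler integration of dV/dt = (-(V - v_rest) + R*I) / tau."""
        v, trace, spikes = v_rest, [], []
        for step, i_in in enumerate(current):
            v += dt * (-(v - v_rest) + r_m * i_in) / tau
            if v >= v_thresh:       # threshold crossing: the neuron fires
                spikes.append(step * dt)
                v = v_reset         # membrane resets after the spike
            trace.append(v)
        return np.array(trace), spikes

    # 2 nA of constant input for 100 ms produces a regular spike train
    trace, spikes = simulate_lif(np.full(1000, 2e-9))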

AI currently has enormous amounts of money being dumped into it on techniques that are lacking for what we want to achieve with it. As they falter more and more, there will be an enormous financial interest in creating new, more effective techniques, and the most obvious place to look for inspiration will be biology. That’s why I think it’s likely to happen in the next few decades; the hardware should be there in terms of raw compute, there’s an obvious place to look for new ideas, and there’s a ton of financial interest in it.

replies(1): >>44476643 #
20. ekianjo ◴[] No.44476492{3}[source]
Maybe you haven't been exposed to actual median humans much.
21. snoman ◴[] No.44476554[source]
Like Geoffrey Hinton, who predicts 5-20 years (though with low confidence)?
22. m11a ◴[] No.44476643{4}[source]
It's not clear to me that these approaches aren't already being tried.

Firstly, by some researchers in the big labs (some of whom I'm sure are funded to try random moonshot bets like the above), at non-product labs working on hard problems (e.g. World Labs), and especially within academia, where researchers have taken inspiration from biology before and today are even better funded and hungry for new discoveries.

Certainly at my university, some researchers are slightly detached from the hype cycle of NeurIPS publications and are trying interdisciplinary approaches to bigger problems, though admittedly fewer than I'd have hoped for. I do think the pressure to be a paper machine limits people from trying bets that are realistically very likely to fail.

23. imtringued ◴[] No.44478627{4}[source]
Ah the classic "humans are fallible, AI is fallible, therefore AI is exactly like human intelligence".

I guess if you believe this, then the AI is already smarter than you.