336 points mooreds | 9 comments
1. A_D_E_P_T ◴[] No.44484248[source]
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

replies(4): >>44484392 #>>44484448 #>>44484503 #>>44485534 #
2. vessenes ◴[] No.44484392[source]
I think gwern gave a good hot take on this: it’s super rare for humans to do this; it might just be moving the goalposts to complain that the AI can’t.
replies(1): >>44485597 #
3. luckydata ◴[] No.44484448[source]
Well, this statement is simply not true. Agent systems based on LLMs have made original discoveries on their own; see the work DeepMind has done on pharmaceutical discovery.
replies(1): >>44484768 #
4. hackinthebochs ◴[] No.44484503[source]
> "Shouldn’t we be expecting that kind of stuff?"

https://x.com/robertghrist/status/1841462507543949581

5. A_D_E_P_T ◴[] No.44484768[source]
What results have they delivered?

I recall the recent DeepMind materials science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.

I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing is somewhat different.

6. IAmGraydon ◴[] No.44485534[source]
This is precisely the question I’ve been asking, and the lack of an answer makes me think that this entire thing is one very elaborate, very convincing magic trick. LLMs are better thought of as search engines with a very intuitive interface to all existing, publicly available human knowledge, rather than as actually intelligent. I think all of the big players know this and are feeding the illusion to extract as much cash as possible before the farce becomes obvious.
7. IAmGraydon ◴[] No.44485597[source]
No, it’s really not that rare. There are new scientific discoveries all the time, and all from people who don’t have the advantage of having the entire corpus of human knowledge in their heads.
replies(1): >>44489631 #
8. vessenes ◴[] No.44489631{3}[source]
To be clear, the “this” is a knowledge-based “aha” that comes from integrating information from various fields of study or research and applying it to make a new invention or discovery.

This isn’t that common even among billions of humans. Most discoveries tend to be random or accidental, even in the lab, or are the result of massive search processes, like drug development.

replies(1): >>44489793 #
9. LegionMammal978 ◴[] No.44489793{4}[source]
Regardless of goalposts, I'd imagine that a persistent lack of "intuitive-discovery-ability" would put a huge dent in the "nigh-unlimited AI takeoff" narrative that so many people are pushing. In such a scenario, AI might be able to optimize the search processes quite a bit, but the search would still be bottlenecked by available resources, and ultimately suffer from diminishing returns, instead of the oft-predicted accelerating returns.
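The contrast between the two regimes can be illustrated with a toy model (purely a sketch; the functional forms and growth rates here are my assumptions for illustration, not anything claimed in the thread). Diminishing returns look roughly logarithmic in the search budget, while the predicted takeoff compounds per step:

```python
import math

def diminishing_discoveries(resources: float) -> float:
    """Toy model of a resource-bound search: each doubling of the
    search budget yields roughly one more fixed 'unit' of discovery."""
    return math.log2(1 + resources)

def accelerating_discoveries(steps: int, rate: float = 0.5) -> float:
    """Toy model of a takeoff: each discovery makes the next one
    easier, so output compounds at a fixed rate per step."""
    total = 1.0
    for _ in range(steps):
        total *= 1 + rate
    return total

# Scaling the search budget 1000x under diminishing returns yields
# only ~10 units, while the compounding model overtakes it quickly.
print(round(diminishing_discoveries(1_000), 2))   # 9.97
print(round(accelerating_discoveries(17), 2))     # 985.26
```

The point of the sketch is only the shape of the curves: under the logarithmic regime, throwing vastly more resources at the search buys little, which is the bottleneck described above.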