
334 points | mooreds | 2 comments
A_D_E_P_T:
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.
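
The kind of connection the quote describes is essentially Swanson-style literature-based discovery: one source says A affects B, another says B affects C, and the A-to-C link is implicit in the corpus without being stated anywhere. Below is a minimal Python sketch of that two-hop join, using toy triples as stand-ins for claims extracted from papers (the fish-oil/Raynaud's pairing echoes Swanson's classic example; nothing here is a real extraction pipeline):

    from collections import defaultdict

    # Toy (subject, relation, object) claims, as if extracted from papers.
    triples = [
        ("fish_oil", "reduces", "blood_viscosity"),      # "paper" 1
        ("blood_viscosity", "aggravates", "raynauds"),   # "paper" 2
        ("drug_x", "inhibits", "enzyme_y"),              # unrelated claim
    ]

    # Index direct links: subject -> set of objects it affects.
    links = defaultdict(set)
    for subj, _rel, obj in triples:
        links[subj].add(obj)

    # Flag two-hop A -> B -> C chains with no stated A -> C claim:
    # the "new connection" that no single paper contains.
    for a in list(links):
        for b in links[a]:
            for c in links.get(b, set()):
                if c not in links[a]:
                    print(f"candidate: {a} -> {b} -> {c}")
    # prints: candidate: fish_oil -> blood_viscosity -> raynauds

Real systems would replace the toy triples with relations mined from abstracts, but the join itself is this simple; the question above is why models with the whole corpus memorized don't surface such chains unprompted.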

1. luckydata:
Well, this statement is simply not true. Agent systems based on LLMs have made original discoveries on their own; see the work DeepMind has done on pharmaceutical discovery.
2. A_D_E_P_T:
What results have they delivered?

I recall the recent DeepMind materials science paper debacle. "Throw everything against the wall and hope something sticks (and that nobody bothers to check the rest)" is not a great strategy.

I also think that Dwarkesh was referring to LLMs specifically. Much of what DeepMind is doing rests on specialized systems rather than LLMs alone, which is a somewhat different case.