
336 points by mooreds | 1 comment
A_D_E_P_T (No.44484248)
See also: Dwarkesh's Question

> https://marginalrevolution.com/marginalrevolution/2025/02/dw...

> "One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.

> "Shouldn’t we be expecting that kind of stuff?"

I basically agree and think that the lack of answers to this question constitutes a real problem for people who believe that AGI is right around the corner.

replies(4): >>44484392, >>44484448, >>44484503, >>44485534
1. IAmGraydon (No.44485534)
This is precisely the question I’ve been asking, and the lack of an answer makes me think that this entire thing is one very elaborate, very convincing magic trick. LLMs are better thought of as search engines with a very intuitive interface to all existing, publicly available human knowledge than as actually intelligent. I think all of the big players know this, and are feeding the illusion to extract as much cash as possible before the farce becomes obvious.
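
To make the "search engine" framing concrete, here is a toy sketch of retrieval by embedding similarity. Everything in it is hypothetical: the hand-made 4-dimensional vectors stand in for learned embeddings, and no real LLM works this simply. The point is that nearest-neighbor lookup can surface stored facts near a query, but the lookup itself never composes two facts into a new claim, which is exactly the missing step Dwarkesh's question is about.

```python
# Toy illustration of the "search engine with an intuitive interface" framing.
# Documents and queries are vectors; answering a query is nearest-neighbor lookup.
# The embeddings below are made-up placeholders, not output of any real model.
import numpy as np

docs = {
    "drug A relieves symptom X": np.array([0.9, 0.1, 0.0, 0.2]),
    "condition B causes symptom X": np.array([0.8, 0.2, 0.1, 0.1]),
    "drug A is cheap to produce": np.array([0.1, 0.9, 0.3, 0.0]),
}

def retrieve(query_vec, k=2):
    """Return the k documents whose embeddings are most similar to the query."""
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(docs, key=lambda d: cos(docs[d], query_vec), reverse=True)
    return ranked[:k]

# A query "about symptom X" lands near both stored facts...
print(retrieve(np.array([0.85, 0.15, 0.05, 0.15])))
# ...but nothing in the lookup composes them into the new claim
# "drug A might treat condition B" -- that inference has to come from elsewhere.
```

If LLMs are doing mostly the first step (very good retrieval and interpolation) and little of the second (forming genuinely new connections), that would explain both how impressive they feel and why the discoveries Dwarkesh asks about haven't materialized.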