
337 points mooreds | 1 comment
izzydata ◴[] No.44484180[source]
Not only do I not think it is right around the corner, I'm not convinced it is possible at all, or at the very least not possible using conventional computer hardware. I don't think the ability to regurgitate information in an understandable form is an adequate or useful measure of intelligence. If we ever crack artificial intelligence, it is quite possible that its first form will be of very low intelligence by human standards, but truly capable of learning on its own without extra help.
replies(10): >>44484210 #>>44484226 #>>44484229 #>>44484355 #>>44484381 #>>44484384 #>>44484386 #>>44484439 #>>44484454 #>>44484478 #
1. colechristensen ◴[] No.44484381[source]
>I don't think the ability to regurgitate information in an understandable form is an adequate or useful measure of intelligence.

Measuring intelligence is hard and requires a really good definition of intelligence. LLMs have in some ways made that definition easier, because we can now ask a concrete question about computers that are very good at some things: "Why are LLMs not intelligent?" Given their capabilities and deficiencies, answering what current "AI" technology lacks will make us better able to define intelligence. This assumes that LLMs are the state-of-the-art Million Monkeys, and that intelligence lies on a different path than further optimizing them.

https://en.wikipedia.org/wiki/Infinite_monkey_theorem