
334 points | mooreds | 1 comment
izzydata ◴[] No.44484180[source]
Not only do I not think it is right around the corner, I'm not even convinced it is possible at all, or at the very least not possible with conventional computer hardware. I don't think the ability to regurgitate information in an understandable form is an adequate or useful measure of intelligence. If we ever do crack artificial intelligence, it's quite possible that its first form will be of very low intelligence by human standards, yet truly capable of learning on its own without extra help.
replies(10): >>44484210 #>>44484226 #>>44484229 #>>44484355 #>>44484381 #>>44484384 #>>44484386 #>>44484439 #>>44484454 #>>44484478 #
breuleux ◴[] No.44484454[source]
I think the issue is going to turn out to be that intelligence doesn't scale very well. The computational power needed to model a system is plausibly exponential in how complex or chaotic that system is, which would mean the effectiveness of intelligence is intrinsically constrained to simple, orderly systems. It's fairly telling that the most effective way to design robust technology is to eliminate as many sources of variation as possible. That may be the only regime where intelligence actually works well, super or not.
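
To make that concrete, here is a minimal Python sketch (my own illustration, not something from the thread): the logistic map at r = 4 is a textbook chaotic system, and two trajectories that start 1e-9 apart diverge by roughly a factor of 2 per step, so every additional step of prediction horizon costs about one more bit of initial-condition precision.

    def logistic(x: float, r: float = 4.0) -> float:
        # one iteration of the logistic map, chaotic at r = 4
        return r * x * (1.0 - x)

    # two initial conditions differing by 1e-9
    x, y = 0.300000000, 0.300000001
    for step in range(1, 41):
        x, y = logistic(x), logistic(y)
        if step % 5 == 0:
            print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")

    # The gap grows roughly 2x per step until it saturates near the size
    # of the attractor (around step 30 here), after which the forecast
    # carries no information about the true trajectory.

Run it and the printed gap climbs from a few times 1e-8 at step 5 to order 1 by step 30: the precision you need in the initial measurement grows exponentially with how far ahead you want to model, which is the scaling problem in miniature.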
replies(1): >>44484546 #
1. airstrike ◴[] No.44484546[source]
What does "scale well" mean here? LLMs right now aren't intelligent, so we're not scaling from that point on.

If we had a very inefficient, power-hungry machine that was exactly as intelligent as a human being, but could scale it (however inefficiently) to 100 times the intelligence of a human being, it might still be worth it.