251 points slyall | 3 comments
aithrowawaycomm No.42060762
I think there is a slight disconnect here between making AI systems which are smart and AI systems which are useful. It’s a very old fallacy in AI: assuming that tools which assist human intelligence by solving human problems must themselves be intelligent.

The utility of big datasets was indeed surprising, but that skepticism came about from recognizing that the scaling paradigm must be a dead end: vertebrates across the board require orders of magnitude less data than ANNs to learn new things. Methods to give ANNs “common sense” are essentially identical to the old LISP expert systems: hard-wiring the answers to specific common-sense questions in either code or training data, even though fish and lizards can rapidly make common-sense deductions about man-made objects they couldn’t possibly have seen in their evolutionary histories. Even spiders have generalization abilities seemingly absent in transformers: they spin webs inside human homes with unnatural geometry.
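
To make the “hard-wiring” parallel concrete, here is a toy sketch, in Python rather than LISP just for familiarity; the rules and examples are invented for illustration:

    # Toy illustration: the same common-sense "fact" hard-wired two ways.
    # A human authored every entry; the system covers exactly the cases
    # a human anticipated, and nothing more.

    # 1980s-style expert system: the answer lives in a hand-written rule table.
    RULES = {
        ("glass", "dropped on concrete"): "it breaks",
        ("water", "left in a freezer"): "it turns to ice",
    }

    def expert_system(obj, situation):
        return RULES.get((obj, situation), "no rule yet; a human must add one")

    # Modern analogue: the same answers live in hand-written training examples.
    TRAINING_DATA = [
        {"prompt": "What happens if a glass is dropped on concrete?",
         "completion": "It breaks."},
        {"prompt": "What happens if water is left in a freezer?",
         "completion": "It turns to ice."},
    ]

    print(expert_system("glass", "dropped on concrete"))  # it breaks
    print(expert_system("egg", "dropped on concrete"))    # no rule yet; a human must add one

In both cases the “common sense” is just whatever a human remembered to write down; the spider needs neither the rule table nor the dataset.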

Again, it is surprising that the ImageNet stuff worked as well as it did. Deep learning is undoubtedly a useful way to build applications, just as Lisp was. But I think we are about as close to AGI as we were in the 80s, since we have made zero progress on common sense: in the 80s we knew Big Data could emulate common sense only poorly, and that’s where we’re at today.

replies(5): >>42061007 #>>42061232 #>>42068100 #>>42068802 #>>42070712 #
rjsw No.42061232
Maybe we just collectively decided that it didn't matter whether the answer was correct or not.
replies(1): >>42062623 #
1. aithrowawaycomm No.42062623
Again, I do think these things have utility, and the unreliability of LLMs is a bit incidental here. Symbolic systems in LISP are highly reliable, but they couldn’t possibly have been extended to AGI without another component, since there was no way to get the humans out of the loop: someone had to assign the symbols their semantic meaning and encode the LISP functions accordingly. I think there’s a similar conceptual issue with current ANNs, and LLMs in particular: they rely on far too much formal human knowledge to get off the ground.
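
A minimal sketch of what I mean, again in Python for familiarity, with invented symbols and rules: nothing in the program knows what “bird” means, and the symbol-to-world mapping lives entirely in the programmer’s head.

    # Human-authored symbols and rules: the machine manipulates them reliably,
    # but their meaning was assigned by the programmer, not learned.
    KNOWLEDGE = {
        "bird":    {"can_fly": True},
        "penguin": {"is_a": "bird", "can_fly": False},  # an exception a human remembered
    }

    def can_fly(thing):
        facts = KNOWLEDGE.get(thing, {})
        if "can_fly" in facts:
            return facts["can_fly"]
        parent = facts.get("is_a")
        # Inherit from the parent category if a human encoded one; otherwise give up.
        return can_fly(parent) if parent else None

    print(can_fly("penguin"))  # False
    print(can_fly("ostrich"))  # None -- the system has no idea until a human steps in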
replies(2): >>42062668 #>>42065736 #
2. rjsw No.42062668
I meant more that this is why the "boom caught almost everyone by surprise": people working in the field thought that correct answers would be important.
3. nxobject No.42065736
Barring a stunning discovery that stops putting the responsibility for NN intelligence on synthetic training sets, it looks like NNs and symbolic AI may have to coexist, symbiotically.