
549 points orcul | 1 comments
Animats ◴[] No.41890003[source]
This is an important result.

The actual paper [1] says that functional MRI (which is measuring which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, the low-end mammals and the corvids lack language but have some substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

CSMastermind ◴[] No.41892068[source]
When you look at how humans play chess they employ several different cognitive strategies. Memorization, calculation, strategic thinking, heuristics, and learned experience.

When the first chess engines came out they only employed one of these: calculation. It wasn't until relatively recently that we had computer programs that could perform all of them. But it turns out that if you scale that up with enough compute you can achieve superhuman results with calculation alone.
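The "calculation alone" strategy those early engines relied on can be sketched as a plain exhaustive game-tree search. To keep the sketch self-contained and runnable, it searches a toy Nim variant (take 1 or 2 stones; whoever takes the last stone wins) rather than chess; the game and function names here are illustrative, not taken from any real engine.

```python
# A minimal sketch of "calculation alone": exhaustive negamax search,
# the brute-force core of the earliest chess engines, applied to a toy
# Nim variant instead of chess.

def negamax(stones: int) -> int:
    """Return +1 if the side to move wins with perfect play, else -1."""
    if stones == 0:
        return -1  # the previous player took the last stone, so the mover lost
    # A position is winning if some move leaves the opponent in a losing
    # position (hence the negation of the child's score).
    return max(-negamax(stones - take) for take in (1, 2) if take <= stones)

# Multiples of 3 are losses for the side to move: any move leaves a
# non-multiple, from which the opponent can restore a multiple of 3.
print(negamax(3), negamax(4))  # -1 1
```

Scaling this up (deeper search, better static evaluation, more compute) is exactly the path that took calculation-only engines to superhuman strength.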

It's not clear to me that sufficiently scaled LLMs won't achieve superhuman performance on general cognitive tasks, even if there are things humans do that they can't.

The other thing I'd point out is that all language is essentially synthetic training data. Humans invented language as a way to transfer their internal thought processes to other humans. It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct.

nox101 ◴[] No.41892362[source]
It sounds like you think this research is wrong? (It claims LLMs cannot reason.)

https://arstechnica.com/ai/2024/10/llms-cant-perform-genuine...

Or do you maybe think no logical reasoning is needed to do everything a human can do? Though humans do seem to be able to do logical reasoning.

bbor ◴[] No.41892803[source]
I’ll pop in with a friendly “that research is definitely wrong”. If they want to prove that LLMs can’t reason, shouldn’t they stringently define that word somewhere in their paper? As it stands, they’re proving something small (some of today’s LLMs have XYZ weaknesses) and claiming something big (humans have an ineffable calculator-soul).

LLMs absolutely 100% can reason, if we take the dictionary definition; it’s trivial to show their ability to answer non-memorized questions, and the only way to do that is some sort of reasoning. I personally don’t think they’re the most efficient tool for deliberative derivation of concepts, but I also think any sort of categorical prohibition is anti-scientific. What is the brain other than a neural network?

Even if we accept the most fringe, anthropocentric theories like Penrose & Hameroff's quantum microtubules, that's just a neural network with fancy weights. How could we possibly hope to forbid digital recreations of our brains from "truly" or "really" mimicking them?

tsimionescu ◴[] No.41893282[source]
> Even if we accept the most fringe, anthropocentric theories like Penrose & Hameroff's quantum microtubules, that's just a neural network with fancy weights.

First, while it is a fringe idea with little backing, it's far from the most fringe.

Secondly, it is not at all known that animal brains are accurately modeled as an ANN, any more so than any other Turing-complete system can be modeled as an ANN. Biological neurons are themselves small computers, like all living cells in general, with not fully understood capabilities. The way biological neurons are connected is far more complex than a weight in an ANN. And I'm not talking about fantasy quantum effects in microtubules; I'm talking about well-established biology, with many kinds of synapses, some of which are "multicast" within a spatially distinct area instead of connected to specific neurons. And about the non-neuronal glial cells, which are known to change neuron behavior, and so on.

How critical any of these differences are to cognition is anyone's guess at this time. But dismissing them and reducing the brain to a bigger NN is not wise.

adrianN ◴[] No.41893426[source]
It is my understanding that Penrose doesn't claim that brains are needed for cognition, just that brains are needed for a somewhat nebulous "conscious experience", which need not have any observable effects. I think that it's fairly uncontroversial that a machine can produce behavior that is indistinguishable from human intelligence over some finite observation time. The Chinese room speaks Chinese, even if it lacks understanding for some definitions of the term.
jstanley ◴[] No.41893950[source]
But conscious experience does produce observable effects.

For that not to be the case, you'd have to take the position that humans experience consciousness and talk about consciousness, but that there is no causal link between the two! It's just a coincidence that the things you find yourself saying about consciousness line up with your internal experience?

https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zo...

adrianN ◴[] No.41893995[source]
That philosophers talk about p-zombies seems like evidence to me that at least some of them don't believe consciousness has observable effects that can't be explained without it. I'm not saying I believe that myself. I don't believe there is anything particularly special about brains.
mannykannot ◴[] No.41895399[source]
The p-zombie argument is the best-known of a group of conceivability arguments, which ultimately depend on the notion that if a proposition is conceivably true, then there is a metaphysically possible world in which it is true. Skeptics suppose that this is just a complicated way of equivocating over what 'conceivable' means, and even David Chalmers, the philosopher who has done the most to bring the p-zombie argument to wide attention, acknowledges that it depends on the assumption of what he calls 'perfect conceivability', which is tantamount to irrefutable knowledge.

To deal with the awkwardly apparent fact that consciousness certainly seems to have physical effects, zombiephiles challenge the notion that physics is causally closed, so that it is conceivable that something non-physical can cause physical effects. Their approach is to say that the causal closure of physics is not provable, but at this point, the argument has become a lexicographical one, about the definition of the words 'physics' and 'physical' (if one insists that 'physical' does not refer to a causally-closed concept, then we still need a word for the causal closure within which the physical is embedded - but that's just what a lot of people take 'physical' to mean in the first place.) None of the anti-physicalists have been able, so far, to shed any light on how the mind is causally effective in the physical world.

You might be interested in the late Daniel Dennett's "The Unimagined Preposterousness of Zombies": https://dl.tufts.edu/concern/pdfs/6m312182x

lanstin ◴[] No.41898241[source]
Like, what is magic? It turns out to be the ability to go from interior thoughts to stuff happening in the shared world; physics is just the mechanism of the particular magical system we have.