Animats:
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, lower mammals and corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

CSMastermind:
When you look at how humans play chess, you see that they employ several different cognitive strategies: memorization, calculation, strategic thinking, heuristics, and learned experience.

When the first chess engines came out, they employed only one of these: calculation. It wasn't until relatively recently that we had computer programs that could perform all of them. But it turns out that if you scale calculation up with enough compute, you can achieve superhuman results with it alone.
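
To make "calculation alone" concrete, here's a minimal sketch of the kind of brute-force lookahead those early engines relied on: minimax with alpha-beta pruning over a toy game. The hooks (moves, apply_move, evaluate) and the toy number game are illustrative stand-ins, not any real engine's code.

    # Plain calculation: exhaustive minimax lookahead with alpha-beta pruning.
    # The toy "game" below exists only to make the sketch runnable.
    import math

    def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
        """Brute-force lookahead: no heuristics beyond the leaf evaluation."""
        legal = moves(state)
        if depth == 0 or not legal:
            return evaluate(state)
        if maximizing:
            best = -math.inf
            for m in legal:
                best = max(best, alphabeta(apply_move(state, m), depth - 1,
                                           alpha, beta, False, moves, apply_move, evaluate))
                alpha = max(alpha, best)
                if beta <= alpha:      # prune branches that can no longer matter
                    break
            return best
        best = math.inf
        for m in legal:
            best = min(best, alphabeta(apply_move(state, m), depth - 1,
                                       alpha, beta, True, moves, apply_move, evaluate))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

    # Toy game: players alternately add 1-3 to a running sum; the maximizer
    # wants the final sum close to 10, the minimizer wants it far away.
    moves = lambda s: [1, 2, 3] if s < 10 else []
    apply_move = lambda s, m: s + m
    evaluate = lambda s: -abs(10 - s)
    print(alphabeta(0, 6, -math.inf, math.inf, True, moves, apply_move, evaluate))

Real engines add move ordering, transposition tables, and tuned evaluation on top, but the core is the same calculation, scaled up with compute.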

It's not clear to me that sufficiently scaled LLMs won't achieve superhuman performance on general cognitive tasks, even if there are things humans do that they can't.

The other thing I'd point out is that all language is essentially synthetic training data. Humans invented language as a way to transfer their internal thought processes to other humans. It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct.

nox101:
It sounds like you think this research is wrong? (It claims LLMs cannot reason.)

https://arstechnica.com/ai/2024/10/llms-cant-perform-genuine...

Or do you maybe think no logical reasoning is needed to do everything a human can do? Though humans do seem to be able to do logical reasoning.

bbor:
I’ll pop in with a friendly “that research is definitely wrong”. If they want to prove that LLMs can’t reason, shouldn’t they stringently define that word somewhere in their paper? As it stands, they’re proving something small (some of today’s LLMs have XYZ weaknesses) and claiming something big (humans have an ineffable calculator-soul).

LLMs absolutely 100% can reason, if we take the dictionary definition; it’s trivial to show their ability to answer non-memorized questions, and the only way to do that is some sort of reasoning. I personally don’t think they’re the most efficient tool for deliberative derivation of concepts, but I also think any sort of categorical prohibition is anti-scientific. What is the brain other than a neural network?

Even if we accept the most fringe, anthropocentric theories like Penrose & Hameroff's quantum microtubules, that's just a neural network with fancy weights. How could we possibly hope to forbid digital recreations of our brains from "truly" or "really" mimicking them?

visarga:
We're chasing our own tail with concepts like "reasoning". Let's shift the concept a bit, to "search". Can LLMs search for novel ideas and discoveries? They can, under the right circumstances. You have to provide idea-testing environments, the missing ingredient. Search and learn: it's what humans do, and AI can do it as well.

The whole issue with "reasoning" is that it is an incompletely defined concept. Over what domain, what problem space, and with what kind of experimental access do we define "reasoning"? Search is better as a concept because it comes packed with all of these things, without the conceptual murkiness, and it has been studied scientifically to a much greater extent.

I don't think we doubt that LLMs can learn given training data; we already accuse them of being mere interpolators or parrots. And we can agree, to some extent, that LLMs can recombine concepts correctly. So they have the learning part down.

And for the searching part, we can probably agree it's a matter of access to the search space, not of the AI itself. It's an environment problem, and even a social one. Search usually extends beyond the lifetime of any single agent, so it has to be a cultural process, one in which language plays a central role.

When you break reasoning/progress/intelligence into "search and learn", it becomes much more tractable and useful. We can also make more grounded predictions about AI by considering the search requirements that are implied, not just the learning requirements.
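
As a toy illustration of that search-and-learn decomposition (my own sketch, not anything from the paper): an environment scores candidate "ideas", and whatever the tests reward feeds back into what gets proposed next. The hidden-vector game and the propose/score functions are made-up stand-ins for real idea-testing environments.

    # Search and learn, in miniature: propose candidates, test them in an
    # environment that can score them, and keep what the tests reward.
    import random

    HIDDEN = [3, 1, 4, 1, 5]               # what the searcher is trying to discover

    def environment(candidate):
        """The idea-testing environment: the only source of feedback."""
        return -sum(abs(c - h) for c, h in zip(candidate, HIDDEN))

    def propose(best, step):
        """The 'learn' side: propose new candidates near the best one so far."""
        return [x + random.randint(-step, step) for x in best]

    def search_and_learn(iterations=2000):
        best = [0] * len(HIDDEN)
        best_score = environment(best)
        for i in range(iterations):
            step = max(1, 5 - i // 500)    # narrow the proposals as knowledge accumulates
            candidate = propose(best, step)
            score = environment(candidate) # search is gated by access to the environment
            if score > best_score:
                best, best_score = candidate, score
        return best, best_score

    print(search_and_learn())

The point of the sketch is the dependency: without environment() there is nothing to learn from, which is exactly the access problem described above.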

How much search did AlphaZero need to beat us at Go? How much search did humans pack into our 200K-year history, over roughly 10,000 generations? What was the cost of that journey of search? Those are the kinds of questions to ask. By my napkin estimate, learning solves about 1/10,000 of the problem; search is 10,000x to a million times harder.

shkkmo:
You can't break down cognition into just "search" and "learn" without either ridiculously overloading those concepts or leaving a ton out.