Animats:
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language functions than for language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, low-end mammals and corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

CSMastermind:
When you look at how humans play chess, you see them employing several distinct cognitive strategies: memorization, calculation, strategic thinking, heuristics, and learned experience.

When the first chess engines came out, they employed only one of these: calculation. It wasn't until relatively recently that we had programs that could perform all of them. But it turns out that if you scale calculation up with enough compute, you can achieve superhuman results with it alone.
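
For concreteness, here is a minimal sketch of what "calculation alone" looks like: fixed-depth negamax search over the game tree. This is illustrative, not any particular engine's code; legal_moves, apply, and evaluate are hypothetical placeholders.

    # Pure "calculation": fixed-depth negamax search over the game tree.
    # legal_moves, apply, and evaluate are hypothetical placeholders for a
    # real engine's move generator and static evaluation function.
    def negamax(position, depth, legal_moves, apply, evaluate):
        """Best score for the side to move, searching `depth` plies ahead."""
        if depth == 0:
            # evaluate() scores the position from the side-to-move's view.
            return evaluate(position)
        best = float("-inf")  # positions with no legal moves are left unhandled
        for move in legal_moves(position):
            # The opponent's best score, negated, is our score.
            best = max(best, -negamax(apply(position, move), depth - 1,
                                      legal_moves, apply, evaluate))
        return best

Real engines add pruning, move ordering, and transposition tables on top, but the core is exactly this kind of exhaustive lookahead.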

It's not clear to me that LLMs, sufficiently scaled, won't achieve superhuman performance on general cognitive tasks, even if there are things humans do that they can't.

The other thing I'd point out is that all language is essentially synthetic training data. Humans invented language as a way to transfer their internal thought processes to other humans. It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct.

threeseed:
> It's not clear to me that LLMs, sufficiently scaled, won't achieve superhuman performance

To some extent this is true.

To calculate A + B you could, for example, generate trillions of (A, B) combinations and encode the results within the network. And it would calculate the answer faster than any human could.
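
A toy sketch of that memorization setup (the prompt format and value range here are assumptions, purely illustrative):

    import random

    # Toy illustration: enumerate (A, B, A+B) triples as synthetic training
    # examples. Scaled far enough, a network can encode the lookup table
    # without ever representing the addition algorithm itself.
    def addition_examples(n, max_value=10**6):
        for _ in range(n):
            a, b = random.randint(0, max_value), random.randint(0, max_value)
            yield f"{a} + {b} =", str(a + b)

    for prompt, answer in addition_examples(3):
        print(prompt, answer)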

But that's not intelligence. And Apple's research showed that LLMs are simply inferring relationships among the tokens they have access to, which you can throw off by adding useless information or by abstracting A + B.
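
A sketch of the kind of perturbation that research describes (illustrative only, not Apple's actual methodology): splice a clause that changes nothing relevant into a word problem and see whether the model's answer shifts.

    # Illustrative only: add a clause that changes no quantity relevant
    # to the answer, then re-ask the model.
    BASE = "Alice has 3 apples and buys 2 more. How many apples does she have?"
    DISTRACTOR = "Five of the apples at the store are smaller than average."

    def with_distractor(problem, clause):
        # Splice the irrelevant clause in just before the final question.
        head, question = problem.rsplit(". ", 1)
        return f"{head}. {clause} {question}"

    print(with_distractor(BASE, DISTRACTOR))
    # The answer is still 5, but a model pattern-matching on surface tokens
    # may be tempted to involve the irrelevant "five".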

Dylan16807:
> To calculate A + B you could, for example, generate trillions of (A, B) combinations and encode the results within the network. And it would calculate the answer faster than any human could.

I don't feel like this is a very meaningful argument, because if you can do that generation in the first place, you must already have a superhuman machine for that task.