549 points by orcul | 2 comments
Animats
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, the low-end mammals and the corvids lack language but have some substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

CSMastermind
When you look at how humans play chess, they employ several different cognitive strategies: memorization, calculation, strategic thinking, heuristics, and learned experience.

When the first chess engines came out, they employed only one of these: calculation. It wasn't until relatively recently that we had computer programs that could perform all of them. But it turns out that if you scale that up with enough compute, you can achieve superhuman results with calculation alone.
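To make "calculation alone" concrete, here's a minimal sketch of exhaustive game-tree search (negamax with alpha-beta pruning) on a toy Nim game. The game and every name here are stand-ins I made up for illustration, not anything from a real chess engine; but swap in chess positions, an evaluation function, and a lot more compute, and this is essentially what the pure-calculation engines do.

```python
from functools import lru_cache

# Toy game: Nim with one pile, take 1-3 stones per turn, taking the last stone wins.
@lru_cache(maxsize=None)
def negamax(stones: int, alpha: float = -1.0, beta: float = 1.0) -> float:
    """Value of the position for the player to move: +1 forced win, -1 forced loss."""
    if stones == 0:
        return -1.0  # the previous player took the last stone, so we have lost
    best = -1.0
    for take in (1, 2, 3):
        if take > stones:
            break
        value = -negamax(stones - take, -beta, -alpha)  # opponent's loss is our win
        best = max(best, value)
        alpha = max(alpha, value)
        if alpha >= beta:  # prune: the opponent would never allow this line anyway
            break
    return best

def best_move(stones: int) -> int:
    """Pure search, no heuristics: pick the move that is worst for the opponent."""
    return max((t for t in (1, 2, 3) if t <= stones),
               key=lambda t: -negamax(stones - t))

print(best_move(10))  # -> 2, leaving a multiple of 4 for the opponent
```

No memorization, no strategy, no heuristics: just brute enumeration of the tree, which is the one strategy the early engines had.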

It's not clear to me that LLMs, sufficiently scaled, won't achieve superhuman performance on general cognitive tasks, even if there are things humans do that they can't.

The other thing I'd point out is that all language is essentially synthetic training data. Humans invented language as a way to transfer their internal thought processes to other humans. It makes sense that the process of thinking and the process of translating those thoughts into and out of language would be distinct.

PaulDavisThe1st
> It's not clear to me that LLMs sufficiently scaled won't achieve superhuman performance on general cognitive tasks

If "general cognitive tasks" means "I give you a prompt in some form, and you give me an incredible response of some form " (forms may differ or be the same) then it is hard to disagree with you.

But if by "general cognitive task" you mean "all the cognitive things that human do", then it is really hard to see why you would have any confidence that LLMs have any hope of achieving superhuman performance at these things.

jhrmnn
Even in cognitive tasks expressed via language, something like a memory feels necessary. At which point it's not an LLM in the sense of a generic language model. It would become a language model conditioned on the memory state.
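As a rough illustration of what "conditioned on the memory state" could look like: keep the model itself stateless and thread an explicit memory through every call. Everything here is a hypothetical sketch; `generate()` is a placeholder for whatever model API you actually have, not a real library function.

```python
from dataclasses import dataclass, field

def generate(prompt: str) -> str:
    # Placeholder: stands in for a real model call (HTTP request, local model, etc.).
    return f"(model response to {len(prompt)} chars of prompt)"

@dataclass
class MemoryConditionedLM:
    memory: list[str] = field(default_factory=list)  # the persistent state

    def respond(self, user_input: str) -> str:
        # Condition the generic language model on the current memory state
        # by prepending it to the prompt.
        prompt = ("Known context:\n" + "\n".join(self.memory)
                  + f"\n\nUser: {user_input}\nAssistant:")
        reply = generate(prompt)
        # Update the memory state; a real system might summarise, embed,
        # or forget instead of appending raw text forever.
        self.memory.append(f"user said: {user_input}")
        self.memory.append(f"assistant said: {reply}")
        return reply

lm = MemoryConditionedLM()
print(lm.respond("My name is Ada."))
print(lm.respond("What is my name?"))  # the second call now carries the first exchange
```

The language model stays generic; all the statefulness lives in the wrapper that decides what goes into the context.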
ddingus
More than a memory.

Needs to be a closed loop, running on its own.

We get its attention and it responds. Or, frankly, if we did manage any sort of sentience, even a simulation of it, the fact is it may not respond.

To me, that is the real test.
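For what it's worth, here's a toy sketch of that kind of closed loop: the agent runs on its own clock, incoming messages are just one more stimulus, and whether it answers at all is its own call. All the functions and thresholds here are hypothetical placeholders, not a claim about how such a system would really work.

```python
import queue
import random
import time
from typing import Optional

def think(state: dict, stimulus: Optional[str]) -> dict:
    # Hypothetical internal update; a real system would do far more here.
    state["ticks"] = state.get("ticks", 0) + 1
    if stimulus is not None:
        state["last_heard"] = stimulus
    return state

def wants_to_respond(state: dict) -> bool:
    # The loop, not the caller, decides whether a reply happens at all.
    return "last_heard" in state and random.random() < 0.3

def run(inbox: "queue.Queue[str]", steps: int = 50) -> None:
    state: dict = {}
    for _ in range(steps):                   # stands in for "running forever"
        try:
            stimulus = inbox.get_nowait()    # we "get its attention"...
        except queue.Empty:
            stimulus = None
        state = think(state, stimulus)
        if wants_to_respond(state):          # ...but it may or may not answer
            print(f"responded after tick {state['ticks']}")
            state.pop("last_heard", None)
        time.sleep(0.01)

inbox: "queue.Queue[str]" = queue.Queue()
inbox.put("hello?")
run(inbox)
```

Note that if the internal decision never fires within the run, nothing comes back at all, which is exactly the point: the response is the loop's choice, not a property of the interface.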