
549 points by orcul | 2 comments
Animats
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for language and for non-language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, low-end mammals and the corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

dboreham
Not sure about that. The same abstract model could be used for both (symbols generated in sequence). For language the symbols have meaning in the context of language; for non-language thought they don't. Nature seems to work this way in general, re-using and repurposing the same underlying mechanism at different levels in the stack. All of this could be a fancy version of very old hardware whose original purpose was controlling swimming direction in fish. Each symbol is a flick of the tail.
exe34
I like to think of the non-verbal portions as the biological equivalents of ASICs. even skills like riding a bicycle might start out as conscious effort (a vision model, a verbal intention to ride, and a reinforcement learning teacher) but are then replaced by a trained model that does the job without needing the careful intentional planning. some of the skills in the bag of tricks are fine-tuned by evolution.

ultimately, there's no reason that a general algorithm couldn't do the job of a specific one, just less efficiently.
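that trade-off can be sketched in a few lines of toy Python (purely illustrative, not anyone's actual model): a general polynomial evaluator can do everything a specialized squaring function does, it just carries extra bookkeeping per step.

```python
def square(x):
    # the "ASIC": hard-wired for exactly one job
    return x * x

def poly(coeffs, x):
    # the general algorithm: evaluates any polynomial
    # via Horner's method, one multiply-add per coefficient
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# poly([1, 0, 0], x) computes x^2 -- same answer as square(x),
# but with a loop, a coefficient list, and generic dispatch.
```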

winwang
I mean, the QKV part of transformers is like an "ASIC" ... well, for an (approximate) lookup table.
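the "approximate lookup table" reading can be made concrete with a toy numpy sketch of scaled dot-product attention (illustrative only; the data and sizes are made up): when a query strongly matches one key, the softmax-weighted mix of values collapses to essentially that key's value, i.e. a lookup.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # each query scores every key, then returns a
    # softmax-weighted mixture of the values
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    return softmax(scores) @ V

rng = np.random.default_rng(0)
K = rng.normal(size=(4, 8))   # 4 stored "keys" (toy data)
V = rng.normal(size=(4, 8))   # 4 stored "values"
Q = K[2:3] * 100              # a query that overwhelmingly matches key 2
out = attention(Q, K, V)      # ~V[2]: the table "looked up" entry 2
```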

(also important to note that NNs/LLMs operate on... abstract vectors, not "language" -- not relevant as a response to your post though).
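the "abstract vectors, not language" point in a nutshell (toy numpy sketch with a made-up vocabulary and embedding size): words are mapped to integer ids and then to rows of an embedding matrix, and everything downstream of that sees only the vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "crow": 1, "thinks": 2}      # hypothetical tiny vocab
embed = rng.normal(size=(len(vocab), 4))        # toy 4-d embedding table

tokens = [vocab[w] for w in "the crow thinks".split()]
vectors = embed[tokens]   # shape (3, 4) -- what the model actually operates on
```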

exe34
actually I think you are on to something - abstract vectors are the tokens of thought - "mentalese", if you've read any Dennett.