
549 points by orcul
Animats:
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, the low-end mammals and the corvids lack language but have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

jebarker:
> What this tells us for AI is that we need something else besides LLMs

Not to over-hype LLMs, but I don't see why this result says that. AI doesn't need to do things the same way evolved intelligence does.

awongh:
One reason might be that LLMs are successful not just because of the architecture but also, just as importantly, because they can be trained on the volume and diversity of human thought that's encapsulated in language (that is, on the internet). Where are we going to find an equivalent data set to train this other kind of thinking?

OpenAI o1 seems to be trained mostly on synthetic data, but it makes intuitive sense that LLMs work so well because we already had the data lying around.
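
A minimal sketch of what such a synthetic-data loop could look like, assuming the usual sample-and-filter recipe (every function below is a hypothetical stand-in, not OpenAI's actual pipeline): sample candidate reasoning traces, keep only the ones a verifier accepts, and fine-tune on the survivors.

    import random

    # Hypothetical stand-ins: a real pipeline would call an actual model
    # and a task-specific verifier instead of these toys.
    def sample_trace(problem):
        """Pretend to sample a reasoning trace and a final answer."""
        answer = random.choice([41, 42, 43])
        return f"reasoning about {problem!r} ...", answer

    def verify(problem, answer):
        """Check the answer against ground truth (hard-coded here)."""
        return answer == 42

    def build_synthetic_dataset(problems, samples_per_problem=8):
        dataset = []
        for p in problems:
            for _ in range(samples_per_problem):
                trace, answer = sample_trace(p)
                if verify(p, answer):  # keep only verified traces
                    dataset.append((p, trace, answer))
        return dataset

    data = build_synthetic_dataset(["what is 6 * 7?"])
    print(f"kept {len(data)} verified traces for fine-tuning")

The point being that the model's own outputs become the training set, filtered by something cheaper than human annotation.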

jebarker:
I think the data is far more important to the success of LLMs than the architecture, although I do think there's something important in the GPT architecture in particular. See this talk for why: [1]

Warning, hand-waving ahead: the way I see it, cognition involves forming an abstract representation of the world and then reasoning about that representation. It seems obvious that non-human animals do this without language, so it seems likely that humans do too, with language layered on top as a turbo boost. However, it also seems plausible that you could build an abstract representation of the world by studying a vast amount of human language, and that this would be a good approximation of the real world too. Furthermore, it seems possible that reasoning about that abstract representation can take place in the depths of the layers of a large transformer. So it's not clear to me that we're limited by the data we have, or that we necessarily need a different type of data to build a general AI, although such data would likely help build a better world model. It's also not clear that an LLM is incapable of the type of reasoning that animals apply to their abstract world representations.

[1] https://youtu.be/yBL7J0kgldU?si=38Jjw_dgxCxhiu7R
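
As a toy illustration of that split between representation and reasoning (my own sketch, not anything from the talk): an agent can hold a purely spatial model of a small world and plan over it with plain search, with no language anywhere in the loop.

    from collections import deque

    # A purely spatial world model: walls, an agent A, and a goal G.
    # No linguistic symbols are involved in the reasoning below.
    WORLD = [
        "#######",
        "#A..#.#",
        "#.#.#.#",
        "#.#...#",
        "#...#G#",
        "#######",
    ]

    def find(ch):
        for r, row in enumerate(WORLD):
            c = row.find(ch)
            if c != -1:
                return (r, c)

    def neighbors(pos):
        r, c = pos
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if WORLD[r + dr][c + dc] != "#":
                yield (r + dr, c + dc)

    def plan(start, goal):
        """Breadth-first search over the abstract state space."""
        frontier, came_from = deque([start]), {start: None}
        while frontier:
            pos = frontier.popleft()
            if pos == goal:
                path = []
                while pos is not None:
                    path.append(pos)
                    pos = came_from[pos]
                return path[::-1]
            for nxt in neighbors(pos):
                if nxt not in came_from:
                    came_from[nxt] = pos
                    frontier.append(nxt)

    print(plan(find("A"), find("G")))  # a path of (row, col) states

Language could then be layered on top to describe the plan, but the plan itself never needed it.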

tsimionescu:
> However, it also seems plausible that you could build an abstract representation of the world by studying a vast amount of human language, and that this would be a good approximation of the real world too. Furthermore, it seems possible that reasoning about that abstract representation can take place in the depths of the layers of a large transformer.

While I agree this is possible, I don't see why you'd think it's likely. I would instead say that I think it's unlikely.

Human communication relies on many assumptions about a shared model of the world that are rarely, if ever, discussed explicitly, and without which certain concepts, or at least phrases, become ambiguous or hard to understand. Winograd-schema sentences are a classic case: in "the trophy didn't fit in the suitcase because it was too big", resolving "it" takes world knowledge, not grammar.

necovek:
The GP's argument seems to be about "thinking" when restricted to knowledge acquired through language, and "possible" is not the same as "likely" or "unlikely": you are not really disagreeing, since either implies "possible".
tsimionescu:
The GP said plausible, which does mean likely. It's possible that there's a teapot in orbit around Jupiter, but it's not plausible. And the GP is specifically saying that by studying human language output, you could plausibly learn about the world that gave birth to the internal models that language is used to exteriorize.
necovek:
If we are really nitpicking, they said it's plausible you could build an abstract representation of the world by studying language-based data, but only that it's possible such a representation could support effective reasoning too.

Anyway, it seems to me we are generally all in agreement (in this thread, at least), but are now being really picky about... language :)