
549 points by orcul | 3 comments
Animats:
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, low-end mammals and corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

jll29:
Most pre-deep learning architectures had separate modules like "language model", "knowledge base" and "inference component".

Then LLMs came along, and ML folks got rather too excited that they contain implicit knowledge (which, of course, is required to deal with ambiguity). The new aspiration became "all in one" and "bigger is better", rather than analyzing which components are needed and how to orchestrate their interplay.

From an engineering (rather than science) point of view, the "end-to-end black box" approach is perhaps misguided, because the result is by definition a non-transparent system. Individual sub-models should be connected in a way that retains control (e.g. in dialog agents, SRI's Open Agent Architecture is one example of such "glue" for tying components together).
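
For concreteness, here is a minimal sketch of that kind of glue (illustrative class names only, not SRI's actual OAA API): an orchestrator routes explicitly between a language module, a knowledge base and an inference component, so every hop between modules stays inspectable.

    # Illustrative only -- invented names, not OAA's API. The point is that
    # each module is separate and the orchestrator retains control of the flow.

    class LanguageModel:
        def parse(self, utterance: str) -> dict:
            # toy parser: map "Is Socrates mortal?" to a structured query
            return {"predicate": "mortal", "subject": "socrates"}

    class KnowledgeBase:
        def __init__(self):
            self.facts = {("human", "socrates")}
            self.rules = [("mortal", "human")]      # mortal(X) :- human(X)

    class InferenceEngine:
        def prove(self, kb: KnowledgeBase, query: dict) -> bool:
            if (query["predicate"], query["subject"]) in kb.facts:
                return True
            return any(head == query["predicate"] and (body, query["subject"]) in kb.facts
                       for head, body in kb.rules)

    class Orchestrator:
        """Glue layer: every hop between modules can be logged and inspected."""
        def __init__(self):
            self.lm, self.kb, self.inf = LanguageModel(), KnowledgeBase(), InferenceEngine()

        def answer(self, utterance: str) -> str:
            query = self.lm.parse(utterance)            # language -> structure
            proved = self.inf.prove(self.kb, query)     # structure -> inference
            return "yes" if proved else "unknown"

    print(Orchestrator().answer("Is Socrates mortal?"))  # -> yes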

Regarding the science, I do believe language adds to the power of thinking; while (other) animals can of course solve simple problems without language, language permits us to define layers of abstractions (by defining and sharing new concepts) that go beyond simple, non-linguistic thoughts. Programming languages (created by us humans somewhat in the image of human language) and the language of mathematics are two examples where we push this even further (beyond defining new named concepts, to defining new "DSL" syntax) - but none of these could have come into being without human language: all formal specs and all axioms ultimately can only be formulated in human language. So without language, we would likely be stuck at a very simple point of development, individually and collectively.
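
As a toy illustration of those layers (an invented example, not something from the paper): each named concept below is defined in terms of the one beneath it, which is the kind of shareable abstraction that naming makes possible.

    # Toy example of layered, named abstractions -- each definition builds on
    # the previous one, so the top layer can be stated (and shared) in one line.

    def square(n: int) -> int:                 # layer 1: a new named concept
        return n * n

    def sum_of_squares(ns: list[int]) -> int:  # layer 2: defined via layer 1
        return sum(square(n) for n in ns)

    def mean_square(ns: list[int]) -> float:   # layer 3: defined via layer 2
        return sum_of_squares(ns) / len(ns)

    print(mean_square([1, 2, 3]))              # 4.666..., stated without re-deriving anything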

EDIT: 2 typos fixed

lolinder:
> I do believe language adds to the power of thinking; while (other) animals can of course solve simple problems without language, language permits us to define layers of abstractions (by defining and sharing new concepts) that go beyond simple, non-linguistic thoughts.

Based on my experience with toddlers, a rather smart dog, and my own thought processes, I disagree that language is a fundamental component of abstraction. Of sharing abstractions, sure, but not developing them.

When I'm designing a software system I will have a mental conception of the system as layered abstractions before I have a name for any component. I invent names for these components in order to define them in the code or communicate them to other engineers, but the intuition for the abstraction comes first. This is why "naming things" is one of the hard problems in computer science—because the name comes second as a usually-inadequate attempt to capture the abstraction in language.

calf:
The conception here is that one's layered abstractions are basically an informal mathematics... which is formally structured... which is a formal grammar. It's your internal language, using internal symbols instead of English names.

Remember that in CS theory, a language is just a set of strings. If you think in pictures, that is STILL a language if your pictures are structured.
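
A quick sketch of that point (a made-up "nested boxes" alphabet, purely illustrative): serialize a structured picture as a string over two symbols, and the set of well-formed serializations is a formal language with a simple recognizer.

    # Toy recognizer: a mental picture of nested boxes, serialized over the
    # alphabet {'[', ']'}, is well-formed iff the brackets balance -- i.e. the
    # pictures form a (context-free) language in the formal sense.

    def is_nested_boxes(s: str) -> bool:
        depth = 0
        for ch in s:
            if ch == '[':
                depth += 1
            elif ch == ']':
                depth -= 1
                if depth < 0:               # closed a box that was never opened
                    return False
            else:
                return False                # only box delimiters in this toy alphabet
        return depth == 0                   # every opened box got closed

    print(is_nested_boxes("[[][]]"))   # True: a box containing two boxes
    print(is_nested_boxes("]["))       # False: not a structurally valid picture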

So I'm really handwaving the above just to suggest that it all depends on the assumptions each expert makes in elucidating this debate, which has a long history.

JumpCrisscross:
> conception here is that one's layered abstractions are basically an informal mathematics... which is formally structured... which is a formal grammar. It's your internal language, using internal symbols instead of English names

Unless we're getting metaphysical to the point of describing quantum systems as possessing a language, there are various continuous analog systems that can compute without a formal grammar. The language system could be the one that thinks in discrete 'tokens'; the conscious system something more complex.

calf:
That's based on a well-known fallacy, because analog models cannot exceed the computational power of Turing machines. The alternative position is Penrose's: he thinks quantum effects in microtubules are responsible for consciousness and are thus somehow more powerful than TMs.
JumpCrisscross:
> analog models cannot exceed the computational power of Turing machines

There is no reason to assume consciousness is Turing computable [1].

[1] https://en.m.wikipedia.org/wiki/Church%E2%80%93Turing_thesis