
549 points by orcul | 1 comment
Animats | No.41890003
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for language and non-language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, low-end mammals and corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

jll29 | No.41894031
Most pre-deep learning architectures had separate modules like "language model", "knowledge base" and "inference component".

Then LLMs came along, and ML folks got rather too excited that they contain implicit knowledge (which, of course, is required to deal with ambiguity). The new aspiration became "all in one" and "bigger is better", rather than analyzing what components are needed and how to orchestrate their interplay.

From an engineering (rather than science) point of view, the "end-to-end black box" approach is perhaps misguided, because the result will be a non-transparent system by definition. Individual sub-models should instead be connected in a way that retains control (e.g. in dialog agents, SRI's Open Agent Architecture is one example of such "glue" for tying components together).
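
To make that concrete, here is a minimal sketch of such modular "glue" in Python, with toy components and purely hypothetical names (it has nothing to do with OAA's actual API): a language module, a knowledge base, and an inference component wired together by an orchestrator, so every hand-off between modules stays visible.

    # Minimal sketch of a modular dialog agent (hypothetical interfaces,
    # not SRI's OAA API): separate components instead of one end-to-end black box.
    from dataclasses import dataclass, field

    @dataclass
    class KnowledgeBase:
        facts: dict = field(default_factory=lambda: {"capital_of_france": "Paris"})

        def lookup(self, key):
            return self.facts.get(key)

    class LanguageModel:
        """Stands in for a parsing/generation component."""
        def parse(self, utterance):
            # Toy intent extraction; a real system would do far more here.
            return "capital_of_france" if "capital" in utterance.lower() else "unknown"

        def generate(self, answer):
            return f"The answer is {answer}." if answer else "I don't know."

    class InferenceComponent:
        def answer(self, intent, kb):
            return kb.lookup(intent)

    class Orchestrator:
        """The "glue": every hand-off between modules is explicit and loggable."""
        def __init__(self):
            self.lm = LanguageModel()
            self.kb = KnowledgeBase()
            self.inference = InferenceComponent()

        def handle(self, utterance):
            intent = self.lm.parse(utterance)                # language module
            result = self.inference.answer(intent, self.kb)  # reasoning over the KB
            return self.lm.generate(result)                  # language module again

    print(Orchestrator().handle("What is the capital of France?"))

Each component in a design like this can be swapped or inspected independently, which is exactly the kind of control the end-to-end approach gives up.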

Regarding the science, I do believe language adds to the power of thinking; while (other) animals can of course solve simple problems without language, language permits us to define layers of abstraction (by defining and sharing new concepts) that go beyond simple, non-linguistic thoughts. Programming languages (created by us humans somewhat in the image of human language) and the language of mathematics are two examples where we push this even further (beyond defining new named concepts, to defining new "DSL" syntax) - but none of these could come into being without human language: all formal specs and all axioms can ultimately only be formulated in human language. So without language, we would likely be stuck at a very simple point of development, individually and collectively.

EDIT: 2 typos fixed

visarga | No.41896691
> the "end-to-end black box" approach is perhaps misguided, because the result will be a non-transparent system by definition

A black box that works in human language and can be investigated with perturbations, embedding visualizations, and probes. It explains itself as much as, or more than, we can explain ourselves.
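
A linear probe is one of the simplest such investigations: train a small classifier on a model's hidden states and check whether some property of interest is linearly decodable from them. A minimal sketch in Python, with random vectors standing in for real activations (so this toy probe should score near chance):

    # Linear probe sketch: random stand-ins for hidden states and labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Pretend these are hidden states for 200 sentences (768 dims each),
    # labelled for a property we suspect the model encodes, e.g. tense.
    hidden_states = rng.normal(size=(200, 768))
    labels = rng.integers(0, 2, size=200)

    X_train, X_test, y_train, y_test = train_test_split(
        hidden_states, labels, random_state=0)

    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("probe accuracy:", probe.score(X_test, y_test))
    # On real activations, accuracy well above chance suggests the property
    # is linearly decodable from the representation; here it should sit near 0.5.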