
549 points by orcul
Animats:
This is an important result.

The actual paper [1] says that functional MRI (which is measuring which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, low-end mammals and corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

jebarker:
> What this tells us for AI is that we need something else besides LLMs

Not to over-hype LLMs, but I don't see why this result says that. AI doesn't need to do things the same way evolved intelligence does.

awongh:
One reason might be that LLMs are successful because of the architecture, but also, just as importantly, because they can be trained on the volume and diversity of human thought that's encapsulated in language (that is, on the internet). Where are we going to find the equivalent data set that will train this other kind of thinking?

OpenAI o1 seems to be trained mostly on synthetic data, but it makes intuitive sense that LLMs work so well because we already had the data lying around.

Animats:
> One reason might be that LLMs are successful because of the architecture, but also, just as importantly, because they can be trained on the volume and diversity of human thought that's encapsulated in language (that is, on the internet). Where are we going to find the equivalent data set that will train this other kind of thinking?

Probably by putting simulated animals into simulated environments where they have to survive and thrive.
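
To make that concrete, here's a minimal, purely illustrative sketch (the grid, rewards, and names are all invented for the example): a simulated creature in a toy grid world that has to reach food within an energy budget, trained with plain tabular Q-learning.

    import random
    from collections import defaultdict

    SIZE = 8                      # toy 8x8 grid world
    FOOD = (6, 6)                 # fixed food location, for simplicity
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]

    def step(pos, move):
        # Move the creature, clamped to the grid; reward for reaching food,
        # small "cost of living" otherwise.
        x = min(max(pos[0] + move[0], 0), SIZE - 1)
        y = min(max(pos[1] + move[1], 0), SIZE - 1)
        new_pos = (x, y)
        reward = 1.0 if new_pos == FOOD else -0.01
        return new_pos, reward, new_pos == FOOD

    q = defaultdict(float)        # Q[(state, action_index)] -> estimated value
    alpha, gamma, eps = 0.1, 0.95, 0.1

    for episode in range(5000):
        pos = (0, 0)
        for _ in range(100):      # energy budget per "life"
            if random.random() < eps:
                a = random.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[(pos, i)])
            nxt, r, done = step(pos, ACTIONS[a])
            best_next = max(q[(nxt, i)] for i in range(len(ACTIONS)))
            q[(pos, a)] += alpha * (r + gamma * best_next - q[(pos, a)])
            pos = nxt
            if done:
                break

The interesting part isn't the algorithm, which is decades old; it's scaling the environment up until "survive and thrive" demands the kind of non-linguistic problem-solving a crow or squirrel manages.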

Working at animal level is uncool, but necessary for progress. I had this argument with Rod Brooks a few decades back. He had some good artificial insects and wanted to jump immediately to human level with a project called Cog.[1] I asked him why he didn't go for mouse-level AI next. He said, "Because I don't want to go down in history as the inventor of the world's greatest artificial mouse."

Cog was a dud, and Brooks goes down in history as the inventor of the world's first good robotic vacuum cleaner.

[1] https://en.wikipedia.org/wiki/Cog_(project)

at_a_remove:
"Where are we going to find the equivalent data set that will train this other kind of thinking?"

Just a personal opinion, but in my shitty unpublished fiction pastiche (ripoff, really) of When H.A.R.L.I.E. Was One (and others), I had the nascent AI stumble upon Cyc as its base for the world and for "thinking about how to think."

I never thought that Cyc was enough, but I do think that something Cyc-like is necessary as a component, a seed for growth, until the AI begins to make the transition from the formally defined, vastly interrelated frames and facts in Cyc to being able to grow further and understand the much less formal knowledge base you might find in, say, Wikipedia.
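
For concreteness, a toy illustration of what such a seed might look like; this is not Cyc's actual representation or API, just a few invented triples with two hand-written inference rules.

    # A miniature "seed" knowledge base: (subject, relation, object) triples.
    # Purely illustrative -- Cyc's real machinery (CycL, microtheories, etc.)
    # is vastly richer.
    facts = {
        ("Crow",   "isa",       "Bird"),
        ("Bird",   "isa",       "Animal"),
        ("Animal", "capableOf", "Locomotion"),
        ("Crow",   "capableOf", "ToolUse"),
    }

    def infer(kb):
        # Forward-chain until nothing new appears: 'isa' is transitive,
        # and 'capableOf' is inherited down the 'isa' hierarchy.
        changed = True
        while changed:
            new = set()
            for (a, r1, b) in kb:
                for (c, r2, d) in kb:
                    if b == c and r1 == "isa" and r2 == "isa":
                        new.add((a, "isa", d))
                    if b == c and r1 == "isa" and r2 == "capableOf":
                        new.add((a, "capableOf", d))
            changed = not new <= kb
            kb |= new
        return kb

    print(("Crow", "capableOf", "Locomotion") in infer(set(facts)))   # True

The point of the seed is exactly this sort of inheritance and interrelation; the hard transition is from hand-curated rules like these to the informal, messy knowledge in something like Wikipedia.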

Full agreement; your animal model is only sensible. If you think about macaques, they have a limited range of vocalization once they hit adulthood. Note that the mothers almost never make a noise at their babies. Lacking language, when a mother wants to train an infant, she hurts it. (Shades of Blindsight there.) She picks up the infant, grasps it firmly, and nips at it. The baby tries to get away, but the mother holds it and keeps at it. Their communication is pain. Many animals do this. But they also learn threat displays, the promise of pain, which goes beyond mere carrot and stick.

The more sophisticated multicellular animals (let us say birds, reptiles, mammals) have to learn to model the behavior of other animals in their environment: to prey on them, to avoid being prey. A pond is here. Other animals will also come to drink. I could attack them and eat them. And with the macaques, "I must scare the baby and pain it a bit because I no longer want to breastfeed it."
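
A tiny, purely illustrative sketch of the first rung of that ladder (everything in it is made up for the example): a predator that plans against a simple internal model of the prey's behavior rather than against the prey's current position.

    # Positions live on a 1-D strip 0..9, with the pond at one end.
    POND = 8

    def clamp(x):
        return min(max(x, 0), 9)

    def prey_move(prey):
        # Level-0 model of the prey: it simply heads for the pond.
        return clamp(prey + (1 if prey < POND else -1 if prey > POND else 0))

    def predator_move(pred, prey, lookahead=3):
        # Level-1 predator: run its model of the prey a few steps forward,
        # then move toward where the prey *will* be, not where it is.
        future = prey
        for _ in range(lookahead):
            future = prey_move(future)
        return clamp(pred + (1 if future > pred else -1 if future < pred else 0))

    # Prey at 2 heading for the pond; a purely reactive predator at 4 would
    # step back toward 3, but this one steps to 5 and waits along the path.
    print(predator_move(4, 2))   # 5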

Somewhere along the line, modeling other animals (in-species or out-species) hits some sort of self-reflection and the recursion begins. That, I think, is a crucial loop to create the kind of intelligence we seek. Here I nod to Egan's Diaspora.

Looping back to your original point about the training data, I don't think that loop is sufficient for an AGI to do anything but think about itself, and that's where something like Cyc would serve as a framework for it to enter into the knowledge that it isn't merely cogito ergo summing in a void, but that it is part of a world with rules stable enough that it might reason, rather than "merely" statistically infer. And as part of the world (or your simulated environment), it can engage in new loops, feedback between its actions and results.

sokoloff:
> A pond is here. Other animals will also come to drink. I could attack them and eat them.

Is that the dominant chain, or is the simpler “I’ve seen animals here before that I have eaten” or “I’ve seen animals I have eaten in a place that smelled/looked/sounded/felt like this” sufficient to explain the behavior?

at_a_remove:
Could be! But then there are ambushes, driving prey into the claws of hidden allies, and so forth. In many instances, modeling the behavior of other animals will have to happen independently of place.