Animats:
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, low-end mammals and corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

jebarker:
> What this tells us for AI is that we need something else besides LLMs

Not to over-hype LLMs, but I don't see why this result says this. AI doesn't need to do things the same way evolved intelligence does.

awongh:
One reason might be that LLMs are successful because of the architecture, but also, just as importantly, because they can be trained on the volume and diversity of human thought that's encapsulated in language (that is, on the internet). Where are we going to find an equivalent data set to train this other kind of thinking?

OpenAI o1 seems to be trained mostly on synthetic data, but it makes intuitive sense that LLMs work so well because we already had the data lying around.
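To make the synthetic-data idea concrete: o1's actual recipe isn't public, so the following is only a generic sketch of the pattern (an existing model generates candidates, a verifier filters them, and the survivors become training data). generate and score are hypothetical stand-ins, not any real API.

    import random

    def build_synthetic_set(generate, score, n_wanted, threshold=0.8):
        # Generate candidates with an existing model and keep only the
        # ones a verifier scores highly; survivors become training data.
        dataset = []
        while len(dataset) < n_wanted:
            candidate = generate()             # e.g. sample a reasoning chain
            if score(candidate) >= threshold:  # filter low-quality samples
                dataset.append(candidate)
        return dataset

    # Toy stand-ins: "candidates" are random floats, scored by their value.
    data = build_synthetic_set(lambda: random.random(), lambda c: c, 10)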

nickpsecurity:
I always start with God's design, thinking it is best. That's our diverse, mixed-signal brain architecture followed by a good upbringing. That means we need to train brain-like architectures the same way we train children. So we'll need whatever data they needed, with multiple streams for different upbringings, too.

The data itself will be most senses collecting raw data about the world for most of the day for 18 years. It might require a camera on the kid's head, which I don't like; people letting a team record their lives seems more likely. Split the project up among many families running in parallel, 1-4 per grade/year. It would probably cost a few million dollars a year.

(Note: differences between parents might require an integration step during AI training, or showing different ones in the early years.)

The training system would rapidly scan this information in. It might not be faster than a human brain; if it is, we can create them quickly. That's only the passive learning part, though.

Human training also involves asking lots of questions based on internal data, random exploration (especially play) with reinforcement, introspection/meditation, and so on: self-driven, generative activities whose outputs become inputs back into the brain system. This training regimen will probably need periodic breaks from passive learning to ask questions or play, which requires human supervision.
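A rough sketch of that regimen, alternating passive ingestion with active breaks. Everything here (the Learner interface, the stream format, the toy reward rule) is a hypothetical stand-in, not a real training framework:

    import random

    class Learner:
        """Stand-in for a brain-like model with passive and active modes."""
        def __init__(self):
            self.memory = []   # passive knowledge store
            self.policy = {}   # action -> running reward estimate

        def ingest(self, observation):
            self.memory.append(observation)    # passive learning

        def generate_question(self):
            # Self-driven: ask about something in internal state.
            return f"what is {self.memory[-1]}?" if self.memory else "what is this?"

        def explore(self, actions):
            return random.choice(actions)      # random exploration (play)

        def reinforce(self, action, reward):
            old = self.policy.get(action, 0.0)
            self.policy[action] = old + 0.1 * (reward - old)

    def train(stream, supervisor, break_every=100):
        learner = Learner()
        for i, obs in enumerate(stream):       # years of recorded senses
            learner.ingest(obs)                # passive learning pass
            if i % break_every == 0:           # periodic break: active phase
                answer = supervisor(learner.generate_question())
                learner.ingest(answer)         # outputs become new inputs
                action = learner.explore(["stack", "throw", "ask"])
                reward = 1.0 if action == "stack" else 0.0  # toy play outcome
                learner.reinforce(action, reward)
        return learner

    # Toy usage: a tiny "life stream" and a trivial human supervisor.
    trained = train(["ball", "cup"] * 500, supervisor=lambda q: "it is a toy")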

Enough of this will probably produce… disobedient, unpredictable children. ;) Eventually, we'll learn how to do AI parenting where the offspring are well-behaved, effective servants. Those will be fine-tuned for practical applications. Later, many more will come online, trained by different streams of life experience, schooling methods, etc.

That was my theory. I still don't like recording people's lives to train AIs. I just thought it was the only way to build brain-like AIs, and likely to happen (see Twitch).

My LLM concept was to do the same thing with K-12 education resources, stories, kids' games, etc. Parents could already tell us exactly what to use to gradually build them up, since they did that for their kids year by year. Then, several career tracks layering on different college books and skill areas. I think it would be cheaper to train than GPT-4, with good performance.
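A sketch of that layering: a graded core curriculum followed by per-track specialization. The stage names, corpora, and train_on_corpus step are all hypothetical placeholders for a real fine-tuning pipeline:

    CURRICULUM = [
        ("kindergarten", ["picture books", "kids' games"]),
        ("elementary",   ["graded readers", "arithmetic", "stories"]),
        ("middle",       ["intro science", "intro history"]),
        ("high school",  ["algebra", "literature", "biology"]),
    ]

    CAREER_TRACKS = {
        "engineering": ["calculus", "physics", "programming"],
        "medicine":    ["chemistry", "anatomy", "clinical texts"],
    }

    def train_on_corpus(model, corpus):
        # Placeholder for a real fine-tuning step; here we just accumulate.
        model.extend(corpus)
        return model

    def train_curriculum(track):
        model = []
        # Graded core: each year layers on what came before, as parents did.
        for _stage, corpora in CURRICULUM:
            model = train_on_corpus(model, corpora)
        # Specialization: layer college books and skill areas for one track.
        return train_on_corpus(model, CAREER_TRACKS[track])

    engineer = train_curriculum("engineering")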