Animats:
This is an important result.

The actual paper [1] says that functional MRI (which measures which parts of the brain are active by sensing blood flow) indicates that different brain hardware is used for non-language and language functions. This has been suspected for years, but now there's an experimental result.

What this tells us for AI is that we need something else besides LLMs. It's not clear what that something else is. But, as the paper mentions, the low-end mammals and the corvids lack language yet have substantial problem-solving capability. That's seen down at squirrel and crow size, where the brains are tiny. So if someone figures out how to do this, it will probably take less hardware than an LLM.

This is the next big piece we need for AI. No idea how to do this, but it's the right question to work on.

[1] https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...

theptip:
LLM as a term is becoming quite broad; a multi-modal transformer-based model with function calling / ReAct finetuning still gets called an LLM, but this scaffolding might be all that’s needed.
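
To make "scaffolding" concrete, here's a minimal sketch of a ReAct-style tool-calling loop. The call_model stub and the toy tool registry below are made up for illustration, not any particular vendor's API:

    # Minimal ReAct-style loop: the model either asks for a tool or gives a final answer.
    def call_model(messages):
        # Stand-in for a real chat-completion call; it answers immediately here
        # so the loop terminates when run as-is.
        return {"content": "final answer", "tool_call": None}

    TOOLS = {
        # Toy tool registry; a real agent would expose search, code execution, etc.
        "add": lambda a, b: a + b,
    }

    def react_loop(question, max_steps=5):
        messages = [{"role": "user", "content": question}]
        for _ in range(max_steps):
            reply = call_model(messages)
            messages.append({"role": "assistant", "content": reply["content"]})
            if reply["tool_call"] is None:
                return reply["content"]            # model produced a final answer
            name, args = reply["tool_call"]        # otherwise run the requested tool
            messages.append({"role": "tool", "content": str(TOOLS[name](*args))})
        return "step limit reached"

    print(react_loop("What is 2 + 3?"))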

I’d be extremely surprised if AI recapitulated the same developmental path humans did; evolution and next-token prediction on an existing corpus are completely different objective functions with completely different loss landscapes.
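
For contrast, the next-token objective is, schematically, just the following. A toy bigram model stands in for a transformer here, and the tiny corpus is invented purely for illustration:

    # Toy next-token-prediction objective: score the "model" only on how well it
    # predicts token t+1 from the preceding token of a fixed corpus.
    import math
    from collections import Counter, defaultdict

    corpus = "the cat sat on the mat the cat ate".split()   # invented corpus
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1                               # bigram counts act as the "model"

    def avg_next_token_loss(tokens):
        # Average negative log-likelihood of each next token given the previous one.
        nll = 0.0
        for prev, nxt in zip(tokens, tokens[1:]):
            total = sum(counts[prev].values()) or 1
            p = counts[prev][nxt] / total or 1e-9            # crude smoothing for unseen pairs
            nll -= math.log(p)
        return nll / max(len(tokens) - 1, 1)

    print(avg_next_token_loss("the cat sat".split()))        # lower is better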

fhdsgbbcaA:
I asked both OpenAI and Claude the same difficult programming question. Each gave a nearly identical response down to the variable names and example values.

I then looked it up and found they had each copy/pasted the same Stack Overflow answer.

Furthermore, the answer was extremely wrong: the language I used was superficially similar to the source material, but the programming concepts were entirely different.

What this tells me is that there is clearly no “reasoning” happening whatsoever with either model, despite marketing claims to the contrary.

ninetyninenine:
>What this tells me is that there is clearly no “reasoning” happening whatsoever with either model, despite marketing claims to the contrary.

Not true. You yourself have failed at reasoning here.

The problem with your logic is that you failed to identify the instances where LLMs have succeeded at reasoning. If LLMs both fail and succeed, that just means LLMs are capable of reasoning and capable of being utterly wrong.

It's almost cliché at this point. Tons of people see the LLM fail, ignore the successes, and then claim from a couple of anecdotal examples that LLMs can't reason, period.

Like how is that even logical? You have contradictory evidence, so the LLM must be capable of BOTH failing and succeeding at reasoning. That's the most logical conclusion.

fhdsgbbcaA:
The claim is that LLMs exhibit reasoning, particularly in coding and logic. The observation is mere parroting of training data. Observations trump claims.
ninetyninenine:
Read the parent post. The claim is that LLMs can't reason.

The evidence offered is a single instance of the LLM parroting training data, while completely ignoring the contradictory evidence of LLMs producing novel answers to novel prompts out of thin air.

>Observations trump claims.

No. The same irrational hallucinations that plague LLMs are plaguing human reasoning and trumping rational thinking.

fhdsgbbcaA:
Must be my lyin' eyes, fooling me once again.