
549 points by orcul | 1 comment
YeGoblynQueenne ◴[] No.41891901[source]
>> They’re basically the first model organism for researchers studying the neuroscience of language. They are not a biological organism, but until these models came about, we just didn’t have anything other than the human brain that does language.

I think this is completely wrong-headed. It's like saying that until cars came about we just didn't have anything other than animals that could move around under their own power, therefore in order to understand how animals move around we should go and study cars. There is a great gulf of unsubstantiated assumptions between observing the behaviour of a technological artifact, like a car or a statistical language model, and thinking we can learn something useful from it about human, or more generally animal, faculties.

I am really taken aback that this is a serious suggestion: study large language models as in-silico models of human linguistic ability. Just putting it down in writing like that rings alarm bells all over the place.

replies(1): >>41895496 #
upghost ◴[] No.41895496[source]
I've been trying to figure out how to respond to this for a while. I appreciate that you are pretty much the lone voice on this thread voicing this opinion, which I also share but tend to keep to myself since it seems to be unpopular.

It's hard for me to understand where my peers on the other side of this argument are coming from and to respond without being dismissive, so I'll do my best to steelman the argument below.

Machine learning models are function approximators and by definition do not have an internal experience distinct from the training data any more than the plus operator does. I agree with the sentiment that even putting it in writing gives more weight to the position than it should, bordering on absurdity.
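To make that point concrete, here's a toy sketch of my own (nothing to do with any real model's weights): a "trained" network is just a fixed set of parameters and a deterministic mapping from inputs to outputs, the same kind of thing as the plus operator, only bigger.

    import numpy as np

    # A "trained" two-layer network: nothing but fixed parameters
    # and a deterministic input-to-output mapping.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
    W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

    def model(x):
        # Same input, same output, every time; nothing inside changes
        # between calls, just like the plus operator.
        h = np.tanh(W1 @ x + b1)
        return W2 @ h + b2

    x = np.array([0.1, 0.2, 0.3])
    print(model(x))      # identical on every call
    print(model(x))
    print(np.add(2, 3))  # another fixed mapping from inputs to an output

However the parameters were arrived at, evaluating the model is just applying that mapping.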

I suppose this is the ELIZA phenomenon on steroids; that's the only explanation I can think of for why such notions are being entertained.

However, to be generous, let's do some vigorous hand waving and say we could find a way to have an embodied learning agent gather sublinguistic perceptual data in an online reinforcement learning process, and furthermore that the (by definition) non-quantifiable subjective experience data could somehow be extracted, made into a training set, and fit to a nicely parametric loss function.

The idea, then, is that you could find some architecture that would allow you to fit a model to the data.

And voila, machine consciousness, right? A perfect model for sentience.

Except that you would have to ignore the fact that, in both the RL agent gathering the data and the NN distilled from it, even with all of our vigorous hand waving, you are once again building function approximators that have no subjective internal experience distinct from the training data.
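If it helps, here is roughly what that hand-waved pipeline amounts to in code (all the data and names here are made up for illustration): the agent logs its experience, and a model is then fit to the log. Both stages end in parameters fit to a dataset.

    import numpy as np

    rng = np.random.default_rng(1)

    # Stage 1: the "embodied" agent interacts and logs experience
    # (random toy data standing in for sublinguistic perceptual data).
    observations = rng.normal(size=(1000, 8))
    actions = observations @ rng.normal(size=8)   # stand-in for the agent's behaviour

    # Stage 2: distill the logged experience into a model via least squares.
    # Whatever happened "inside" the agent, the distilled model is just these weights.
    W, *_ = np.linalg.lstsq(observations, actions, rcond=None)

    def distilled_policy(obs):
        # A fixed parametric mapping fit to the dataset -- a function approximator.
        return obs @ W

    print(distilled_policy(observations[0]), actions[0])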

Let's take it one step further. The absolute simplest form of learning is habituation and sensitization to stimuli. Even microbes have the ability to do this.

LLMs and other static networks do not. You can attempt to attack this point by fiatting online reinforcement learning or dismissing it as unnecessary, but I should again point out that you would be attacking or dismissing the bare minimum requirement for learning, let alone a higher order subjective internal experience.
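For contrast, here's a minimal toy version of habituation (again my own sketch, not a model of any real organism): the habituating cell's response to a repeated stimulus decays because each exposure changes its internal state, while the frozen network answers identically forever.

    class HabituatingCell:
        """Toy habituation: repeated exposure to the same stimulus weakens the response."""
        def __init__(self):
            self.sensitivity = 1.0

        def respond(self, stimulus):
            response = self.sensitivity * stimulus
            self.sensitivity *= 0.8   # experience alters internal state
            return response

    class FrozenNet:
        """Stand-in for a static network: inference never changes the weights."""
        def __init__(self, weight=1.0):
            self.weight = weight

        def respond(self, stimulus):
            return self.weight * stimulus   # identical output on every exposure

    cell, net = HabituatingCell(), FrozenNet()
    for _ in range(5):
        print(cell.respond(1.0), net.respond(1.0))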

So then the argument, proceeding from false premises, would claim that the compressed experience in the NN could contain mechanical equivalents of higher order internal subjective experiences.

So even with all the mighty vigorous hand waving we have allowed, you have at best found a way to convert internal subjective processes into external mechanical processes fit to a dataset.

The argument would then follow, well, what's the difference? And I could point back to the microbe, but if the argument hasn't connected by this point, we will be chasing our tails forever.

A good book on the topic that examines this in much greater depth is Peter Robin Hiesinger's "The Self-Assembling Brain".

https://a.co/d/1FwYxaJ

That being said, I am hella jealous of the VC money that the grifters will get for advancing the other side of this argument.

For enough money I'd probably change my tune too. I can't buy a loaf of bread with a good argument lol

replies(1): >>41897141 #
cognitif ◴[] No.41897141[source]
What does consciousness or subjective experience have to do with the relationship between language and cognition? I’m not following your argument.
replies(2): >>41903143 #>>41905463 #
1. upghost ◴[] No.41903143[source]
tl;dr furbies