
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
ants_everywhere ◴[] No.44485225[source]
> I am baffled that the AI discussions seem to never move away from treating a function to generate sequences of words as something that resembles a human.

This is such a bizarre take.

The relation associating each human with the list of all words they will ever say is, formally, a function.

> almost magical human-like powers to something that - in my mind - is just MatMul with interspersed nonlinearities.

There's a rich family of universal approximation theorems [0]: stacking linear maps with nonlinear activations can approximate any continuous function arbitrarily well, and that intuition can be made rigorous.
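As a toy illustration of that point (my sketch, not from the thread): a single hidden layer of ReLU units can fit a nonlinear function closely. Here the hidden weights are random and only the output weights are solved by least squares — a "random features" shortcut that avoids any training loop. The target function and layer width are arbitrary choices for the demo.

```python
import numpy as np

# Sketch of universal approximation: y ≈ w2 · relu(w1·x + b1).
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)   # scalar inputs
target = np.sin(3 * x)            # nonlinear function to approximate

# Random hidden layer: 100 ReLU features of the input.
n_hidden = 100
w1 = rng.normal(size=n_hidden)
b1 = rng.normal(size=n_hidden)
hidden = np.maximum(0.0, np.outer(x, w1) + b1)   # shape (200, 100)

# Fit only the output weights, by least squares.
w2, *_ = np.linalg.lstsq(hidden, target, rcond=None)
approx = hidden @ w2

max_err = np.max(np.abs(approx - target))
print(f"max |error| = {max_err:.4f}")
```

With more hidden units (and, in the theorems, in the limit) the worst-case error can be driven as low as you like; actual LLM training of course learns all the weights by gradient descent rather than solving a linear system.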

The reason LLMs are big now is that transformers and large amounts of data made it economical to compute a family of reasonably good approximations.

> The following is uncomfortably philosophical, but: In my worldview, humans are dramatically different things than a function. For hundreds of millions of years, nature generated new versions, and only a small number of these versions survived.

This is just a way of generating certain kinds of functions.

Think of it this way: do you believe there's anything about humans that exists outside the mathematical laws of physics? If so, that's essentially a religious position (or more literally, a belief in the supernatural). If not, then functions and approximations to functions are what the human experience boils down to.

[0] https://en.wikipedia.org/wiki/Universal_approximation_theore...

replies(5): >>44485574 #>>44486015 #>>44487960 #>>44488003 #>>44495590 #
xtal_freq ◴[] No.44487960[source]
Not that this is your main point, but I find this take representative: “do you believe there's anything about humans that exists outside the mathematical laws of physics?” There are things “about humans”, or at least things that our words denote, that lie outside physics' explanatory scope. For example, the experience of the colour red cannot be known, as an experience, by a person who only sees black and white. This holds no matter what empirical propositions or explanatory systems they understand.
replies(2): >>44488436 #>>44490010 #
1. ants_everywhere ◴[] No.44490010[source]
This idea is called qualia [0] for those unfamiliar.

I don't have any opinion on the qualia debates honestly. I suppose I don't know what it feels like for an ant to find a tasty bit of sugar syrup, but I believe it's something that can be described with physics (and by extension, things like chemistry).

But we do know some things about some qualia. Like we know how red light works, we have a good idea about how photoreceptors work, etc. We know some people are red-green colorblind, so their experience of red and green is mushed together. We can also have people make qualia judgments and watch their brains with fMRI or other tools.

I think maybe an interesting question here is: obviously it's pleasurable to animals to have their reward centers activated. Is it pleasurable or desirable for AIs to be rewarded? Especially if we tell them (as some prompters do) that they feel pleasure if they do things well and pain if they don't? You can ask this sort of question for both the current generation of AIs and future generations.

[0] https://en.wikipedia.org/wiki/Qualia