A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
475 points by zdw | 1 comment
ants_everywhere No.44485225
> I am baffled that the AI discussions seem to never move away from treating a function to generate sequences of words as something that resembles a human.

This is such a bizarre take.

The relation associating each human to the list of all words they will ever say is obviously a function.

> almost magical human-like powers to something that - in my mind - is just MatMul with interspersed nonlinearities.

There's a rich family of universal approximation theorems [0]. Combining layers of linear maps with nonlinear cutoffs can intuitively approximate any nonlinear function in ways that can be made rigorous.

The reason LLMs are big now is that transformers and large amounts of data made it economical to compute a family of reasonably good approximations.
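The "linear maps with nonlinear cutoffs" point can be made concrete with a toy example (my own illustration, not from the thread): a single hidden ReLU layer with hand-picked weights computes the nonlinear function |x| exactly, since |x| = relu(x) + relu(-x). Stacking such layers is what lets networks approximate arbitrary continuous functions.

```python
def relu(v):
    """The nonlinear "cutoff": clamp negative values to zero."""
    return max(0.0, v)

def tiny_net(x):
    """One hidden ReLU layer that computes |x| exactly."""
    # Layer 1: linear map x -> (x, -x), then elementwise ReLU.
    hidden = [relu(w * x) for w in (1.0, -1.0)]
    # Layer 2: linear map summing the hidden units with weight 1.
    return sum(1.0 * h for h in hidden)

for x in (-3.0, -0.5, 0.0, 2.0):
    assert tiny_net(x) == abs(x)
```

Without the ReLU, the composition of the two layers would collapse to a single linear map; the interspersed nonlinearity is exactly what buys the expressive power the universal approximation theorems formalize.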

> The following is uncomfortably philosophical, but: In my worldview, humans are dramatically different things than a function. For hundreds of millions of years, nature generated new versions, and only a small number of these versions survived.

This is just a way of generating certain kinds of functions.

Think of it this way: do you believe there's anything about humans that exists outside the mathematical laws of physics? If so that's essentially a religious position (or more literally, a belief in the supernatural). If not, then functions and approximations to functions are what the human experience boils down to.

[0] https://en.wikipedia.org/wiki/Universal_approximation_theore...

replies(5): >>44485574 >>44486015 >>44487960 >>44488003 >>44495590
1. Awisvamya No.44495590
> do you believe there's anything about humans that exists outside the mathematical laws of physics?

I don't.

The point is not that we, humans, cannot arrange physical matter such that it has emergent properties just like the human brain.

The point is that we shouldn't.

Does responsibility mean anything to these people posing as Evolution?

Nobody's personally responsible for what we've evolved into; evolution has simply happened. Nobody's responsible for the evolutionary history that's carried in and by every single one of us. And our psychology too has been formed by (the pressures of) evolution, of course.

But if you create an artificial human, and create it from zero, then all of its emergent properties are on you. Can you take responsibility for that? If something goes wrong, can you correct it, or undo it?

I don't consider our current evolutionary state "scripture", so we can certainly tweak, one way or another, aspects that we think deserve tweaking. To me, it boils down to our level of hubris. Some of our "mistaken tweaks" are now visible at an evolutionary scale, too; for a mild example, our jaws have been getting smaller (leaving less room for our teeth) due to our softened diet (thanks, agriculture). But worse than that, humans have been breeding plants and animals, modifying DNA left and right, and so on -- and they've summarily failed to take responsibility for their atrocious mistakes.

Thus, I have zero trust in, and zero hope for, assholes who unabashedly aim to create artificial intelligence knowing full well that such properties might emerge that we'd have to call artificial psyche. Anyone taking this risk is criminally reckless, in my opinion.

It's not that humans are necessarily unable to create new sentient beings. Instead: they shouldn't even try! Because they will inevitably fuck it up, bringing about untold misery; and they won't be able to contain the damage.