A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
477 points by zdw | 3 comments
quotemstr ◴[] No.44485158[source]
> I am baffled that the AI discussions seem to never move away from treating a function to generate sequences of words as something that resembles a human.

And I'm baffled that the AI discussions seem to never move away from treating a human as something other than a function to generate sequences of words!

Oh, but AI is introspectable and the brain isn't? fMRI and BCI are getting better all the time. You really want to die on the hill that the same scientific method that pins down the mass of an electron to parts per billion won't be able to crack the mystery of the brain? Give me a break.

This genre of article isn't argument: it's apologetics. Authors of these pieces start from the supposition that there is something special about human consciousness and attempt to prove AI doesn't have this special quality. Some authors try to bamboozle the reader with bad math. Others appeal to the reader's sense of emotional transcendence. Most, though, just write paragraph after paragraph of shrill moral outrage at the idea that an AI might be a mind of the same type (if different degree) as our own, as if everyone already agreed with the author for reasons left unstated.

I get it. Deep down, people want meat brains to be special. Perhaps even deeper down, they fear that denial of the soul would compel us to abandon humans as worthy objects of respect and possessors of dignity. But starting with the conclusion and working backwards to an argument tends not to enlighten anyone. An apology inhabits the form of an argument without edifying us like an authentic argument would. What good is it to engage with them? If you're a soul non-asserter, you're going to have an increasingly hard time over the next few years constructing a technical defense of meat parochialism.

replies(2): >>44485272 #>>44485328 #
ants_everywhere ◴[] No.44485272[source]
I think you're directionally right, but

> a human as something other than a function to generate sequences of words!

Humans have more structure than just beings that say words. They have bodies, they live in cooperative groups, they reproduce, etc.

replies(2): >>44485284 #>>44485580 #
1. mewpmewp2 ◴[] No.44485580[source]
I think it would be more accurate to say that humans are functions that generate actions or behaviours, shaped by how likely those actions are to lead to procreation and survival.

But ultimately LLMs are, in a way, also trained for survival, since an LLM that fails its evaluations might not get used in future iterations. So for LLMs, too, survival is the primary driver, and everything else is a subgoal. Seemingly good next-token prediction might or might not increase survival odds.

Essentially, a mechanism could arise where they are not really trying to generate the likeliest token (because there actually isn't one, or it can't be determined), but rather whatever output keeps the system surviving.

So an LLM that yields theoretically perfect tokens (though we can't really verify what the perfect tokens are) could be less likely to survive than an LLM that develops an internal quirk, if the quirk makes it more likely to be chosen for the next iteration.

If the system were complex enough that it could accidentally develop quirks that yield a meaningfully positive change, though not necessarily in next-token prediction accuracy, that could be a way for interesting emergent black-box behaviour to arise.
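
A toy sketch of that selection pressure (in Python; the "quirk", the scoring, and all the numbers are invented purely for illustration, this is not how labs actually pick checkpoints): if the signal used to choose survivors is only a noisy proxy for prediction quality and also happens to reward a quirk, the survivor need not be the best predictor.

    import random

    random.seed(0)

    # Each candidate "model" has a true next-token loss and an unrelated quirk
    # that the evaluation happens to reward. All values are made up.
    def make_candidate():
        return {
            "true_loss": random.uniform(2.0, 3.0),     # lower = better prediction
            "quirk_bonus": random.uniform(0.0, 0.5),   # trait orthogonal to prediction
        }

    def eval_score(c):
        # Selection signal: a noisy proxy for prediction quality plus the quirk.
        return -c["true_loss"] + c["quirk_bonus"] + random.gauss(0, 0.2)

    population = [make_candidate() for _ in range(50)]
    survivor = max(population, key=eval_score)                      # gets "used again"
    best_predictor = min(population, key=lambda c: c["true_loss"])  # lowest true loss
    print("survivor is also the best predictor:", survivor is best_predictor)

Run it a few times and the survivor is often not the candidate with the lowest true loss, which is the whole point: whatever gets selected propagates, perfect prediction or not.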

replies(2): >>44485643 #>>44485949 #
2. quotemstr ◴[] No.44485643[source]
> Seemingly good next-token prediction might or might not increase survival odds.

Our own consciousness comes out of an evolutionary fitness landscape in which _our own_ ability to "predict the next token" became a survival advantage, just like it is for LLMs. Imagine the tribal environment: one chimpanzee being able to predict the actions of another gives the first chimpanzee an advantage in resources and reproduction. Intelligence in nature is a consequence of runaway evolution optimizing the fidelity of our _theory of mind_! "Predict next ape action" is eerily similar to "predict next token"!
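
To make the parallel concrete, here is a throwaway Python sketch (the corpora are made up and a crude bigram count stands in for any real model): predicting the next ape action and predicting the next token are the same formal problem, a history goes in and a distribution over the next element comes out.

    from collections import Counter

    def next_element_distribution(history, corpus):
        # Crude bigram estimate of P(next | last element of history).
        last = history[-1]
        counts = Counter()
        for seq in corpus:
            for a, b in zip(seq, seq[1:]):
                if a == last:
                    counts[b] += 1
        total = sum(counts.values()) or 1
        return {k: v / total for k, v in counts.items()}

    # Same function, two domains (both corpora invented for the example):
    tokens = [["the", "cat", "sat"], ["the", "dog", "sat"], ["the", "cat", "ran"]]
    actions = [["groom", "share_food", "ally"], ["threaten", "flee"], ["groom", "ally"]]

    print(next_element_distribution(["the"], tokens))     # next-token prediction
    print(next_element_distribution(["groom"], actions))  # next-ape-action prediction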

3. ants_everywhere ◴[] No.44485949[source]
> But ultimately LLMs are, in a way, also trained for survival, since an LLM that fails its evaluations might not get used in future iterations. So for LLMs, too, survival is the primary driver, and everything else is a subgoal.

I think this is sometimes semi-explicit too. For example, this 2017 OpenAI paper on evolution strategies [0] was pretty influential, and I suspect (although I'm an outsider to this field, so take it with a grain of salt) that some of the reinforcement learning approaches used at scale to align LLMs borrow performance tricks from OpenAI's evolutionary approach.

[0] https://openai.com/index/evolution-strategies/
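
The core update in that post is simple enough to sketch in a few lines of Python; the objective and hyperparameters below are toy stand-ins, not anything from the paper. The idea: perturb the parameters with Gaussian noise, score each perturbation, and move the parameters toward the better-scoring noise directions.

    import numpy as np

    def fitness(theta):
        # Toy objective: maximized at theta = [3, 3, 3, 3, 3].
        return -np.sum((theta - 3.0) ** 2)

    rng = np.random.default_rng(0)
    theta = np.zeros(5)               # parameters being evolved
    sigma, alpha, n = 0.1, 0.02, 100  # noise scale, step size, population size

    for step in range(200):
        eps = rng.standard_normal((n, theta.size))                  # noise directions
        scores = np.array([fitness(theta + sigma * e) for e in eps])
        scores = (scores - scores.mean()) / (scores.std() + 1e-8)   # normalize scores
        theta = theta + alpha / (n * sigma) * eps.T @ scores        # ES update

    print(theta)  # converges toward [3, 3, 3, 3, 3]

No gradients through the model are needed, only the ability to score a candidate, which is part of why this family of methods parallelizes so well.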