A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
477 points by zdw | 16 comments
1. quotemstr ◴[] No.44485158[source]
> I am baffled that the AI discussions seem to never move away from treating a function to generate sequences of words as something that resembles a human.

And I'm baffled that the AI discussions seem to never move away from treating a human as something other than a function to generate sequences of words!

Oh, but AI is introspectable and the brain isn't? fMRI and BCI are getting better all the time. You really want to die on the hill that the same scientific method that predicts the mass of an electron down to the femtogram won't be able to crack the mystery of the brain? Give me a break.

This genre of article isn't argument: it's apologetics. Authors of these pieces start with the supposition that there is something special about human consciousness and attempt to prove AI doesn't have this special quality. Some authors try to bamboozle the reader with bad math. Others appeal to the reader's sense of emotional transcendence. Most, though, just write paragraph after paragraph of shrill moral outrage at the idea that an AI might be a mind of the same type (if different degree) as our own --- as if everyone already agreed with the author for reasons left unstated.

I get it. Deep down, people want meat brains to be special. Perhaps even deeper down, they fear that denial of the soul would compel us to abandon humans as worthy objects of respect and possessors of dignity. But starting with the conclusion and working backwards to an argument tends not to enlighten anyone. An apology inhabits the form of an argument without edifying us the way an authentic argument would. What good is it to engage with these pieces? If you're a soul non-asserter, you're going to have an increasingly hard time over the next few years constructing a technical defense of meat parochialism.

replies(2): >>44485272 #>>44485328 #
2. ants_everywhere ◴[] No.44485272[source]
I think you're directionally right, but

> a human as something other than a function to generate sequences of words!

Humans have more structure than just beings that say words. They have bodies, they live in cooperative groups, they reproduce, etc.

replies(2): >>44485284 #>>44485580 #
3. quotemstr ◴[] No.44485284[source]
> Humans have more structure than just beings that say words. They have bodies, they live in cooperative groups, they reproduce, etc.

Yeah. We've become adequate at function-calling and memory consolidation.

4. dgfitz ◴[] No.44485328[source]
“ Determinism, in philosophy, is the idea that all events are causally determined by preceding events, leaving no room for genuine chance or free will. It suggests that given the state of the universe at any one time, and the laws of nature, only one outcome is possible.”

Clearly computers are deterministic. Are people?

replies(2): >>44485343 #>>44485769 #
5. quotemstr ◴[] No.44485343[source]
https://www.lesswrong.com/posts/bkr9BozFuh7ytiwbK/my-hour-of...

> Clearly computers are deterministic. Are people?

Give an LLM memory and a source of randomness and it's as deterministic as a person.

"Free will" isn't a concept that typechecks in a materialist philosophy. It's "not even wrong". Asserting that free will exists is _isomorphic_ to dualism which is _isomorphic_ to assertions of ensoulment. I can't argue with dualists. I reject dualism a priori: it's a religious tenet, not a mere difference of philosophical opinion.

So, if we're all materialists here, "free will" doesn't make any sense, since it's an assertion that something other than the input to a machine can influence its output.
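
For illustration, here is a minimal Python sketch of that point (the scoring table and function names are invented, not any real model API): once the seed is counted as part of the input, the whole system is a pure function of its inputs.

    import random

    # Toy stand-in for "LLM + memory + a source of randomness": the output is a
    # pure function of weights, prompt, memory, and the RNG seed. Nothing else
    # can influence it.
    def generate(weights, prompt_tokens, memory, seed, n_tokens=10):
        rng = random.Random(seed)          # the "source of randomness" is just another input
        context = list(memory) + list(prompt_tokens)
        out = []
        for _ in range(n_tokens):
            # stand-in for a forward pass: score candidate next tokens from context
            scores = {t: weights.get((context[-1], t), 1.0) for t in range(5)}
            r = rng.random() * sum(scores.values())
            for tok, s in sorted(scores.items()):
                r -= s
                if r <= 0:
                    out.append(tok)
                    break
            context.append(out[-1])
        return out

    w = {(0, 1): 3.0, (1, 2): 2.0}
    # Same inputs (including the seed) -> same outputs, every time.
    assert generate(w, [0], [], seed=42) == generate(w, [0], [], seed=42)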

replies(3): >>44485535 #>>44488187 #>>44490691 #
6. dgfitz ◴[] No.44485535{3}[source]
As long as you realize you’re barking up a debate as old as time, I respect your opinion.
replies(1): >>44485621 #
7. mewpmewp2 ◴[] No.44485580[source]
I think it would be more accurate to say that humans are functions that generate actions or behaviours, shaped by how likely those actions are to lead to procreation and survival.

But ultimately LLMs are also, in a way, trained for survival, since an LLM that fails the tests might not get used in future iterations. So for LLMs, too, survival is the primary driver, and everything else is a subgoal. Seemingly good next-token prediction might or might not increase survival odds.

Essentially, a mechanism could arise where they are not really trying to generate the likeliest token (because there actually isn't one, or it can't be determined), but rather whatever output lets the system survive.

So an LLM that yields theoretically perfect tokens (though we can't really verify what the perfect tokens are) could be less likely to survive than an LLM that develops an internal quirk, if that quirk makes it more likely to be chosen for the next iteration.

If the system were complex enough and could accidentally develop quirks that yield a meaningfully positive change, though not necessarily in next-token prediction accuracy, that could be a way for some interesting emergent black-box behaviour to arise.
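
As a toy sketch of that selection pressure (the candidate names and all numbers below are made up purely for illustration), the criterion that decides which model survives need not be token-prediction loss at all:

    # Selection keeps whichever candidate scores best on the criterion actually
    # used to choose models (evals, human preference), not on prediction loss.
    candidates = [
        {"name": "plain",  "token_loss": 1.90, "eval_score": 0.71},
        {"name": "quirky", "token_loss": 2.05, "eval_score": 0.78},  # worse loss, better evals
    ]

    survivor = max(candidates, key=lambda c: c["eval_score"])
    print(survivor["name"])  # -> "quirky": the quirk survives into the next iteration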

replies(2): >>44485643 #>>44485949 #
8. mewpmewp2 ◴[] No.44485621{4}[source]
What I don't get is: why would true randomness give free will? Shouldn't it be random will then?
replies(1): >>44485720 #
9. quotemstr ◴[] No.44485643{3}[source]
> Seemingly good next token prediction might or might not increase survival odds.

Our own consciousness comes out of an evolutionary fitness landscape in which _our own_ ability to "predict the next token" became a survival advantage, just as it is for LLMs. Imagine the tribal environment: one chimpanzee being able to predict the actions of another gives the first chimpanzee a resource and reproduction advantage. Intelligence in nature is a consequence of runaway evolution optimizing the fidelity of our _theory of mind_! "Predict next ape action" is eerily similar to "predict next token"!

10. dgfitz ◴[] No.44485720{5}[source]
In the history of mankind, true randomness has never existed.
replies(1): >>44488194 #
11. photochemsyn ◴[] No.44485769[source]
This is an interesting question. The common theme between computers and people is that information has to be protected, and both computer systems and biological systems require additional information-protecting components - e.g., error-correcting codes for cosmic-ray bitflip detection for the one, and DNA mismatch-detection enzymes which excise and remove damaged bases for the other. In both cases a lot of energy is spent defending the critical information from the winds of entropy, and if too much damage occurs, the carefully constructed illusion of determinacy collapses, and the system falls apart.
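
As a minimal illustration of the computer side (a textbook Hamming(7,4) code, not any particular memory controller's implementation), a single flipped bit can be located and repaired from the parity structure alone:

    # Hamming(7,4): parity bits at positions 1, 2, 4 (1-indexed) let a single
    # flipped bit be located and repaired.
    def encode(d):                                  # d = 4 data bits
        b = [0, 0, d[0], 0, d[1], d[2], d[3]]       # positions 1..7
        b[0] = b[2] ^ b[4] ^ b[6]                   # p1 covers positions 1,3,5,7
        b[1] = b[2] ^ b[5] ^ b[6]                   # p2 covers positions 2,3,6,7
        b[3] = b[4] ^ b[5] ^ b[6]                   # p3 covers positions 4,5,6,7
        return b

    def correct(b):                                 # locate and flip the bad bit, if any
        b = b[:]
        syndrome = ((b[0] ^ b[2] ^ b[4] ^ b[6])
                    | (b[1] ^ b[2] ^ b[5] ^ b[6]) << 1
                    | (b[3] ^ b[4] ^ b[5] ^ b[6]) << 2)
        if syndrome:                                # syndrome = position of the flipped bit
            b[syndrome - 1] ^= 1
        return b

    word = encode([1, 0, 1, 1])
    hit = word[:]
    hit[5] ^= 1                                     # simulate a cosmic-ray bitflip
    assert correct(hit) == word                     # the damage is detected and undone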

However, this information protection similarity applies to single-celled microbes as much as it does to people, so the question also resolves to whether microbes are deterministic. Microbes both contain and exist in relatively dynamic environments so tiny differences in initial state may lead to different outcomes, but they're fairly deterministic, less so than (well-designed) computers.

With people, while the neural structures are programmed by the cellular DNA, once they are active and energized the informational flow through the human brain isn't that deterministic: there are some dozen neurotransmitters modulating state, as well as huge amounts of sensory data from different sources. Thus prompting a human repeatedly isn't at all like prompting an LLM repeatedly. (The human will probably get irritated.)

12. ants_everywhere ◴[] No.44485949{3}[source]
> But ultimately LLMs also in a way are trained for survival, since an LLM that fails the tests might not get used in future iterations. So for LLMs it is also survival that is the primary driver, then there will be the subgoals.

I think this is sometimes semi-explicit too. For example, this 2017 OpenAI paper on evolution strategies [0] was pretty influential, and I suspect (although I'm an outsider to this field, so take it with a grain of salt) that some versions of reinforcement learning used at scale for aligning LLMs borrow some performance tricks from OpenAI's evolutionary approach.

[0] https://openai.com/index/evolution-strategies/
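
For a rough sense of the idea in [0], here is a simplified sketch (the fitness function is a made-up stand-in, and plain reward centering replaces the rank normalization used in practice): perturb the parameters with Gaussian noise, score each perturbed copy, and move toward the perturbations that scored well.

    import numpy as np

    def fitness(theta):
        return -np.sum((theta - 3.0) ** 2)          # toy objective, maximized at theta = 3

    theta = np.zeros(5)
    alpha, sigma, n = 0.05, 0.1, 50
    rng = np.random.default_rng(0)

    for _ in range(200):
        eps = rng.standard_normal((n, theta.size))              # a "population" of perturbations
        rewards = np.array([fitness(theta + sigma * e) for e in eps])
        rewards -= rewards.mean()                               # baseline subtraction
        theta += alpha / (n * sigma) * eps.T @ rewards          # move toward well-scoring directions

    print(np.round(theta, 2))                                   # roughly [3. 3. 3. 3. 3.]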

13. bravesoul2 ◴[] No.44488187{3}[source]
Input/output, and the mathematical consistency and repeatability of the universe, are a religious tenet of science. Believing your eyes is still belief.
14. bravesoul2 ◴[] No.44488194{6}[source]
How do you figure?
replies(1): >>44497181 #
15. ghostofbordiga ◴[] No.44490691{3}[source]
Some accounts of free will are compatible with materialism. On such views, "free will" just means the capacity to have intentions and make choices based on an internal debate. Obviously humans have that capacity.
16. dgfitz ◴[] No.44497181{7}[source]
I’d flip the question. Show me something truly random.