
A non-anthropomorphized view of LLMs

(addxorrol.blogspot.com)
477 points by zdw | 1 comment
1. jumploops No.44495212
> I cannot begin putting a probability on "will this human generate this sequence".

Welcome to the world of advertising!

Jokes aside, and while I don't necessarily believe transformers/GPUs are the path to AGI, we technically already have a working "general intelligence" that can survive on just an apple a day.

Putting that non-artificial general intelligence up on a pedestal is ironically the cause of the "world wars and murderous ideologies" that the author is so quick to invoke.

In some sense, humans are just error-prone meat machines, whose inputs/outputs can be confined to a specific space/time bounding box. Yes, our evolutionary past has created a wonderful internal RNG and made our memory system surprisingly fickle, but this doesn't mean we're gods, even if we manage to live long enough to evolve into AGI.

Maybe we can humble ourselves, realize that we're not too different from the other mammals/animals on this planet, and use our excess resources to increase the fault tolerance of all life from Earth (currently N=1), and come to the realization that any AGI we create is actually human in origin.