dmwilcox No.43722753
I've been saying this for a decade already, but I guess it is worth saying here. I'm not afraid that AI is going to become intelligent, any more than I'm afraid a hammer will (or that either will jump up and hit me in the head).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible compared to building dedicated circuits for our computations, but it is nothing next to our minds.

Ask yourself: why is it so hard to get a cryptographically secure random number? Because computers are pure, unadulterated determinism. Put the same seed value into your code and you get the same "random numbers" every time, in the same order. Computers need to be like this to be good tools.
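
To illustrate that determinism, a minimal Python sketch (the stdlib random module here stands in for any seeded PRNG):

    import random

    # Seed the generator with the same value twice and the "random"
    # stream repeats exactly, in the same order.
    random.seed(42)
    first_run = [random.random() for _ in range(3)]

    random.seed(42)
    second_run = [random.random() for _ in range(3)]

    print(first_run == second_run)  # True: identical numbers, same order

This is why cryptographic randomness has to be pulled in from outside the deterministic core, e.g. OS entropy sources.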

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In his Ethics, Aristotle talks a lot about ergon (function or purpose): hammers are different from people, and computers are different from people, because they have an obvious purpose (they are tools made with an end in mind). Minds strive: we have desires, wants, and needs, even if only to survive or, better yet, to thrive (eudaimonia).

An attempt to create a mind is another thing entirely and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI somewhere in the realm of 30 years to never.

ggreer No.43723051
Is there any specific mental task that an average human is capable of that you believe computers will not be able to do?

Also, does this mean that you believe brain emulations (uploads) are not possible, even given an arbitrary amount of compute power?

gloosx No.43725746
1. Computers cannot self-rewire the way neurons do. A human can adapt to pretty much any specific mental task (an "unknown", new task) without explicit retraining, whereas current computers need retraining to learn anything new.

2. Computers can't do continuous, unsupervised learning: they require structured input, labeled data, and predefined objectives to learn anything. Humans learn passively all the time, just by existing in the environment.

imtringued No.43726521
Minor nitpicks. I think your points are pretty good.

1. Self-rewiring is just a matter of hardware design. Neuromorphic hardware is a thing.

2. LLM foundation models are actually unsupervised in a way, since they simply take arbitrary text and try to complete it. It's the instruction fine-tuning (on Q/A pairs) that is supervised.
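
A toy sketch of that distinction (plain Python, with a whitespace "tokenizer" and a made-up Q/A pair, purely illustrative):

    # Pretraining (self-supervised): training pairs come from raw text alone;
    # the "label" for position t is simply token t+1 of the same text.
    raw_text = "any arbitrary text works as pretraining data"
    tokens = raw_text.split()  # toy tokenizer; real models use subword tokens

    pretrain_pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]
    print(pretrain_pairs[0])  # (['any'], 'arbitrary')

    # Instruction fine-tuning: a human-curated prompt/response pair (supervised).
    sft_pair = {"prompt": "What is the capital of France?", "response": "Paris."}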

gloosx No.43734671
Neuromorphic chips look cool, and they simulate plasticity, but the circuits themselves are fixed: you can't sprout a new synaptic route or regrow a broken connection. To self-rewire is not merely to change internal state or connections; it means physically growing or pruning neurons, synapses, or pathways, from within the system itself. That does not look realistic with current silicon designs.

The main point is about unsupervised learning. Once an LLM is trained, its weights are frozen; it won't update itself during a chat. Prompt-driven inference is immediate, not persistent: you can define a term or concept mid-chat and the model will behave as if it learned it, but only until the context window ends. If it were otherwise, all models would drift very quickly.
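
A minimal sketch of the frozen-weights point (PyTorch, with a tiny Linear layer standing in for a trained model; assumes torch is installed):

    import torch
    import torch.nn as nn

    model = nn.Linear(16, 16)  # stand-in for an LLM's trained weights
    model.eval()

    before = model.weight.clone()

    with torch.no_grad():          # inference only: no gradients, no updates
        for _ in range(100):       # "chatting" is just repeated forward passes
            _ = model(torch.randn(1, 16))

    print(torch.equal(before, model.weight))  # True: weights unchanged

Anything the model appears to "learn" mid-conversation lives only in the prompt tokens still in the context window, not in those weights.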