
174 points | Philpax | 2 comments
dmwilcox ◴[] No.43722753[source]
I've been saying this for a decade already, but I guess it is worth saying here: I'm not afraid that AI, or a hammer, is going to become intelligent (or jump up and hit me in the head, either).

It is science fiction to think that a system like a computer can behave at all like a brain. Computers are incredibly rigid systems with only the limited variance we permit. "Software" is flexible in comparison to dedicated circuits for our computations, but it is nothing compared to our minds.

Ask yourself, why is it so hard to get a cryptographically secure random number? Because computers are pure unadulterated determinism -- put the same random seed value in your code and get the same "random numbers" every time in the same order. Computers need to be like this to be good tools.
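A minimal Python sketch of that determinism, using only the standard library: a seeded PRNG replays the exact same sequence, while cryptographic randomness has to be pulled in from outside the program (the `secrets` module draws on OS entropy).

```python
import random
import secrets

# Seeded PRNG: the same seed yields the same "random" sequence, every time.
a = random.Random(42)
b = random.Random(42)
assert [a.random() for _ in range(5)] == [b.random() for _ in range(5)]

# Cryptographically secure randomness comes from OS entropy, not the
# deterministic program itself, so each run produces a different token.
print(secrets.token_hex(8))
```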

Assuming that AGI is possible in the kinds of computers we know how to build means that we think a mind can be reduced to a probabilistic or deterministic system. And from my brief experience on this planet I don't believe that premise. Your experience may differ and it might be fun to talk about.

In Aristotle's ethics he talks a lot about ergon (function, characteristic work) -- hammers are different from people, computers are different from people; they have an obvious purpose because they are tools made with an end in mind. Minds strive -- we have desires, wants, and needs -- even if only to survive, or better yet to thrive (eudaimonia).

An attempt to create a mind is another thing entirely, and not something we know how to start. Rolling dice hasn't gotten anywhere. So I'd wager AGI lands somewhere in the realm of 30 years out to never.

replies(12): >>43722893 #>>43722938 #>>43723051 #>>43723121 #>>43723162 #>>43723176 #>>43723230 #>>43723536 #>>43723797 #>>43724852 #>>43725619 #>>43725664 #
CooCooCaCha ◴[] No.43722893[source]
This is why I think philosophy has become another form of semi-religious kookery. You haven't provided any actual proof or logical reason why a computer couldn't be intelligent. If randomness is required, then sample randomness from the real world.

It's clear that your argument is based on feels and you're using philosophy to make it sound more legitimate.

replies(2): >>43723074 #>>43723225 #
1. biophysboy ◴[] No.43723074[source]
Brains are low-frequency, energy-efficient, organic, self-reproducing, asynchronous, self-repairing, and extremely highly connected (thousands of synapses per neuron). If AGI is defined as "approximate humans", I think it's gonna be a while.

That said, I don't think computers need to be human-like to have an emergent intelligence. It can differ in kind, not just in degree.

replies(1): >>43723327 #
2. cmsj ◴[] No.43723327[source]
Just to put some numbers on "extremely highly connected" - there are about 90 billion neurons in a human brain, but the connections between them number in the range of 100 trillion.

That is one hell of a network, and it can all operate fully in parallel while continuously training itself. Computers have gotten pretty good at doing things in parallel, but not that good.
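A quick back-of-envelope check on those figures (both are order-of-magnitude estimates, so the result is rough):

```python
# Rough estimates from the comment above.
neurons = 90e9      # ~90 billion neurons in a human brain
synapses = 100e12   # ~100 trillion connections

# Average fan-out: roughly a thousand synapses per neuron, which is
# what "extremely highly connected" works out to numerically.
print(f"{synapses / neurons:.0f} synapses per neuron")
```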