
333 points mooreds | 2 comments
izzydata ◴[] No.44484180[source]
Not only do I not think it is right around the corner. I'm not even convinced it is even possible or at the very least I don't think it is possible using conventional computer hardware. I don't think being able to regurgitate information in an understandable form is even an adequate or useful measurement of intelligence. If we ever crack artificial intelligence it's highly possible that in its first form it is of very low intelligence by humans standards, but is truly capable of learning on its own without extra help.
replies(10): >>44484210 #>>44484226 #>>44484229 #>>44484355 #>>44484381 #>>44484384 #>>44484386 #>>44484439 #>>44484454 #>>44484478 #
Waterluvian ◴[] No.44484386[source]
I think the only way that it’s actually impossible is if we believe that there’s something magical and fundamentally immeasurable about humans that leads to our general intelligence. Otherwise we’re just machines, after all. A human brain is theoretically reproducible outside standard biological mechanisms, if you have a good enough nanolathe.

Maybe our first AGI is just a Petri dish brain with a half-decent python API. Maybe it’s more sand-based, though.

replies(8): >>44484413 #>>44484436 #>>44484490 #>>44484539 #>>44484739 #>>44484759 #>>44485168 #>>44487032 #
somewhereoutth ◴[] No.44484490[source]
Our silicon machines exist in a countable state space (you can easily assign a unique natural number to any state for a given machine). However, 'standard biological mechanisms' exist in an uncountable state space - you need real numbers to properly describe them. Cantor showed that the uncountable is infinitely more infinite (pardon the word tangle) than the countable. I posit that the 'special sauce' for sentience/intelligence/sapience exists beyond the countable, and so is unreachable with our silicon machines as currently envisaged.

I call this the 'Cardinality Barrier'
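
To make the countable half of that concrete, here is a minimal sketch (an illustration only, nothing rigorous): any snapshot of a digital machine's memory is a finite bit string, and finite bit strings can be mapped injectively into the natural numbers.

```python
def state_to_natural(bits: str) -> int:
    """Map a finite bit string (one snapshot of a machine's memory) to a unique natural number.

    Prefixing with '1' keeps leading zeros significant, so the mapping is injective:
    '001' -> 0b1001 = 9, while '01' -> 0b101 = 5.
    """
    return int("1" + bits, 2)

# Distinct machine states always get distinct naturals,
# so the set of all possible machine states is at most countable.
assert state_to_natural("0010") != state_to_natural("0011")
```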

replies(9): >>44484527 #>>44484530 #>>44484534 #>>44484541 #>>44484590 #>>44484606 #>>44484612 #>>44484664 #>>44485305 #
bakuninsbart ◴[] No.44484664[source]
Cantor talks about countable and uncountable infinities, but both computer chips and human brains are finite state spaces. The human brain has roughly 100b neurons; even if every pair of them shared an edge, and each edge could individually light up to signal a different state of mind, isn't that just something like `2^(100b choose 2)` states? That's roughly as far away from infinity as 1.
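
As a back-of-envelope with those same rough assumptions (100b neurons, every pair connected, each edge either on or off), just to show how "finite but huge" that is:

```python
import math

n = 100_000_000_000            # ~100 billion neurons
edges = n * (n - 1) // 2       # one edge per neuron pair: ~5 * 10^21

# 2**edges is far too large to print directly, so count its decimal digits instead.
digits = edges * math.log10(2)
print(f"{float(edges):.2e} edges -> 2^edges has about {digits:.2e} digits")
# 5.00e+21 edges -> 2^edges has about 1.51e+21 digits
```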
replies(1): >>44484792 #
somewhereoutth ◴[] No.44484792[source]
But this signalling (and these connections) may be more complex than connected/unconnected and on/off, such that we cannot completely describe them [digitally / using a countable state space] as we would with silicon.
replies(1): >>44485200 #
chowells ◴[] No.44485200[source]
If you think it can't be done with a countable state space, then you must know some physics that the general establishment doesn't. I'm sure they would love to know what you do.

As far as physicists believe at the moment, there's no way to ever observe a difference below the Planck level. Energy/distance/time/whatever. They all have a lower boundary of measurability. That's not a practical issue, it's a theoretical one. According to the best models we currently have, there's literally no way to ever observe a difference below those levels.
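
For reference, the scales in question, from the standard definitions of the Planck length and time (a quick back-of-envelope in Python using CODATA values):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
G    = 6.67430e-11       # Newtonian gravitational constant, m^3 kg^-1 s^-2
c    = 299_792_458.0     # speed of light, m/s

planck_length = math.sqrt(hbar * G / c**3)   # ~1.6e-35 m
planck_time   = planck_length / c            # ~5.4e-44 s
print(f"Planck length ~ {planck_length:.1e} m, Planck time ~ {planck_time:.1e} s")
```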

If a difference smaller than that is relevant to brain function, then brains have a way to observe the difference. So I'm sure the field of physics eagerly awaits your explanation. They would love to see an experiment thoroughly disagree with a current model. That's the sort of thing scientists live for.

replies(1): >>44486650 #