
129 points by NotInOurNames | 1 comment | source
api ◴[] No.44064830[source]
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

replies(8): >>44064947 #>>44064957 #>>44064985 #>>44065137 #>>44065144 #>>44065251 #>>44066705 #>>44067727 #
SoftTalker ◴[] No.44066705[source]
Evolution doesn't happen by "trying things and learning." It happens by random mutation and surviving (if the mutation confers an advantage) or not (if the mutation is harmful). An AI could do this of course, by randomly altering some copies of itself, and keeping them if they are better or discarding them if they are not.
replies(1): >>44073460 #
api ◴[] No.44073460[source]
What you just described -- random mutation and survival -- is the process whereby learning occurs. Over time this process transfers information about "how to survive" into the genome and other media of heritability.

AI could do this too, but that just means it's using a different learning algorithm. There is a whole field called genetic programming that uses evolution-inspired rather than nervous-system-inspired models and has had success in areas like physical materials engineering, circuit board layout, etc.
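The mutate-copy-and-keep loop described above can be sketched in a few lines. This is a minimal toy illustration of an evolutionary (1+1) hill-climb, not anyone's production system; the bit-string objective and all function names are my own invented placeholders:

```python
import random

def fitness(candidate):
    # Toy objective: maximize the number of 1s in a bit string.
    return sum(candidate)

def mutate(candidate, rate=0.05):
    # Flip each bit independently with probability `rate`.
    return [bit ^ 1 if random.random() < rate else bit for bit in candidate]

def evolve(length=32, generations=200, seed=0):
    random.seed(seed)
    parent = [random.randint(0, 1) for _ in range(length)]
    for _ in range(generations):
        child = mutate(parent)
        # Selection: keep the mutated copy only if it is at least as fit.
        if fitness(child) >= fitness(parent):
            parent = child
    return parent
```

Note that the information driving improvement still comes from somewhere: every fitness evaluation is a query against the "environment" (here, the objective function). Without that feedback signal, mutation alone is just a random walk -- which is the point being made above.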

It doesn't change the fact that you need more information to go beyond where you are -- I do not believe you can reason from a void into higher levels of... what?