
129 points NotInOurNames | 3 comments
api ◴[] No.44064830[source]
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

replies(8): >>44064947 #>>44064957 #>>44064985 #>>44065137 #>>44065144 #>>44065251 #>>44066705 #>>44067727 #
tux3 ◴[] No.44065144[source]
You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build a series of exercises to learn from.

And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyway.
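To make that concrete with a minimal sketch (my own toy example, not anyone's actual training setup): gradient descent below fits a linear model to (x, y) pairs, and nothing in the update rule knows or cares where those pairs came from. Here they're synthetic (y = 3x + 1 plus noise), but logged real-world measurements would be treated identically.

```python
import random

# Synthetic "training data" -- the optimizer never sees its provenance.
random.seed(0)
data = [(i / 100, 3 * (i / 100) + 1 + random.gauss(0, 0.05)) for i in range(100)]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(5000):
    # Mean-squared-error gradients for the linear model w*x + b.
    gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * gw
    b -= lr * gb

print(w, b)  # close to the generating parameters (3, 1)
```

The update rule only ever touches the numbers in `data`; swap in pairs gathered from sensors, simulators, or other models and the same loop runs unchanged.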

It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

Look at basic, boring self-playing Go AIs. That's a task with about as much hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need much contact with the real world at all. Well, self-play works just fine. It does self-improvement without any of your mystical philosophical requirements.
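The self-play loop fits in a few lines if you shrink the game. In this toy substitution (mine, not any real system's), Nim (21 stones, take 1-3, taking the last stone wins) stands in for Go, and a tabular Monte Carlo value update stands in for the neural net. The only training signal is games the agent plays against itself:

```python
import random

random.seed(1)
N = 21
# Q[(stones_left, take)] = estimated value of that move for the mover.
Q = {(s, a): 0.0 for s in range(1, N + 1) for a in (1, 2, 3) if a <= s}

def pick(s, eps):
    acts = [a for a in (1, 2, 3) if a <= s]
    if random.random() < eps:
        return random.choice(acts)
    return max(acts, key=lambda a: Q[(s, a)])

for _ in range(20000):
    s, hist = N, []
    while s > 0:
        a = pick(s, eps=0.2)   # both "players" share Q: pure self-play
        hist.append((s, a))
        s -= a
    r = 1.0                    # +1 for the winner's moves, -1 for the loser's
    for s, a in reversed(hist):
        Q[(s, a)] += 0.2 * (r - Q[(s, a)])
        r = -r

# Evaluate the learned greedy policy against a random opponent.
wins = 0
for _ in range(1000):
    s, mover = N, 0            # mover 0 = trained policy, 1 = random
    while s > 0:
        a = pick(s, 0) if mover == 0 else random.choice([x for x in (1, 2, 3) if x <= s])
        s -= a
        mover ^= 1
    wins += mover              # after the flip, mover == 1 iff the trained policy moved last
print(wins / 1000)             # near 1.0: self-play found near-optimal play
```

No game records, no external teacher: the agent generates its own experience and improves against its own past behavior, which is the whole point of the Go example.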

With coding it's harder to judge the result; there's no clear win or lose condition. But it's very amenable to trying things out and seeing whether you roughly reached your goal. If self-training works with coding, that's all you need.
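A crude sketch of that goal-check, with obvious hedges: brute-force enumeration over a tiny expression space stands in for a model proposing code, and a handful of unit tests stand in for the fuzzy "did you roughly reach your goal" signal.

```python
import itertools
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}
tests = [(0, 1), (1, 3), (2, 5), (5, 11)]   # target behavior: f(x) = 2*x + 1

def run(spec, x):
    # A "program" is a spec (op1, c1, op2, c2) meaning (x op1 c1) op2 c2.
    op1, c1, op2, c2 = spec
    return OPS[op2](OPS[op1](x, c1), c2)

def passes(spec):
    return all(run(spec, x) == y for x, y in tests)

# Propose candidates, execute them, keep the first one that passes.
space = itertools.product(OPS, range(-3, 4), OPS, range(-3, 4))
found = next(spec for spec in space if passes(spec))
print(found)   # a spec computing 2*x + 1, e.g. ('*', 2, '+', 1)
```

Replace the enumerator with a model and the four test pairs with a richer verifier, and you have the shape of the self-training loop being described: the tests are the win condition.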

replies(5): >>44065242 #>>44065410 #>>44065545 #>>44065571 #>>44068254 #
mrandish ◴[] No.44068254[source]
> You can perfectly try things and learn without being embodied.

A brilliant 'brain in a vat' can come up with novel answers to questions, but outside of narrow categories like pure mathematics and logic, which can be internally validated, the brain can't know how correct or incorrect its novel answers are without some way to objectively test, observe and validate them in the relevant domain (i.e. 'the real world'). A model can only be as useful as its parameters correctly reflect the modeled target. Even very complex and detailed simulations tend to de-correlate quickly when run repeatedly untethered from ground truth. Games like Go have clear rule sets. Reality doesn't.

replies(1): >>44075624 #
1. api ◴[] No.44075624[source]
That was better said in some ways than my own comment.
replies(1): >>44076468 #
2. mrandish ◴[] No.44076468[source]
Thanks! And I just saw, in a parallel post currently on the HN home page, that John Carmack said a similar thing in his lecture notes.

> "offline training can bootstrap itself off into a coherent fantasy untested by reality."

"a coherent fantasy untested by reality" is a lovely turn of phrase.

replies(1): >>44077539 #
3. api ◴[] No.44077539[source]
The Yudkowskiites are all about coherent fantasies untested by reality, as are… really… a lot of philosophers throughout history. Maybe most.

I fundamentally do not believe in knowing without sensing or learning without experiencing. Of course it need not be direct experience. You can “download” information. But that information had to be gathered somehow at some point. There is only so much training data.

As I said — I don’t dismiss the idea of AI challenging human intellect or replacing human jobs. An AI “merely” as smart as a human but tireless and faster could seem superhuman. I am just intensely skeptical of the idea of something learning to self improve and then magically taking off into some godlike superintelligence realm far beyond what is latent in or implied by its training data. That would be an informatic perpetual motion machine. It would, in fact, be magic, in the fantasy sense.