api ◴[] No.44064830[source]
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

replies(8): >>44064947 #>>44064957 #>>44064985 #>>44065137 #>>44065144 #>>44065251 #>>44066705 #>>44067727 #
tux3 ◴[] No.44065144[source]
You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build sets of exercises to learn from.

And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyway.
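
Toy illustration of that point (my own sketch in plain numpy, every name made up): the optimizer only ever sees numbers, so it fits the target the same way whether the samples were measured from the world or generated by a program:

    # Gradient descent fitting purely synthetic, machine-generated data.
    # The update rule never knows (or cares) where the samples came from.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-1, 1, size=(1000, 1))                # "exercises" produced by a program
    y = np.sin(3 * x) + 0.05 * rng.normal(size=x.shape)   # target signal, no human data involved

    # One tiny hidden layer, trained with plain gradient descent
    w1 = rng.normal(size=(1, 32)); b1 = np.zeros(32)
    w2 = rng.normal(size=(32, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(2000):
        h = np.tanh(x @ w1 + b1)          # forward pass
        pred = h @ w2 + b2
        err = pred - y
        loss = (err ** 2).mean()
        # backward pass (manual gradients for this tiny net)
        g_pred = 2 * err / len(x)
        g_w2 = h.T @ g_pred; g_b2 = g_pred.sum(0)
        g_h = g_pred @ w2.T * (1 - h ** 2)
        g_w1 = x.T @ g_h; g_b1 = g_h.sum(0)
        for p, g in ((w1, g_w1), (b1, g_b1), (w2, g_w2), (b2, g_b2)):
            p -= lr * g
        if step % 500 == 0:
            print(step, round(float(loss), 4))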

It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

Look at basic, boring self-play Go AIs. That's a task with about as much hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need very much contact with the real world at all. Well, self-play works just fine. It achieves self-improvement without any of your mystical philosophical requirements.
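
To make the self-play point concrete, here's a toy of my own (tabular, nothing like AlphaGo's actual architecture): two copies of the same agent play Nim against each other, and the only training signal is who won. No external data enters the loop:

    # Self-play sketch: the agent generates its own games and learns from the outcomes.
    import random
    from collections import defaultdict

    PILE, MOVES = 21, (1, 2, 3)          # Nim: take 1-3 stones, taking the last stone wins
    Q = defaultdict(float)               # Q[(stones_left, move)] -> value estimate
    alpha, eps = 0.1, 0.1

    def pick(stones, greedy=False):
        legal = [m for m in MOVES if m <= stones]
        if not greedy and random.random() < eps:
            return random.choice(legal)
        return max(legal, key=lambda m: Q[(stones, m)])

    for game in range(50_000):
        stones, history, player = PILE, {0: [], 1: []}, 0
        while stones > 0:
            m = pick(stones)
            history[player].append((stones, m))
            stones -= m
            player ^= 1
        winner = player ^ 1              # the player who took the last stone
        for p in (0, 1):
            reward = 1.0 if p == winner else -1.0
            for s, m in history[p]:
                Q[(s, m)] += alpha * (reward - Q[(s, m)])

    # With enough games the greedy policy typically rediscovers the classic
    # "leave a multiple of 4" strategy, purely from self-generated play.
    print([(s, pick(s, greedy=True)) for s in (5, 9, 14)])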

With coding it's harder to judge the result; there's no clear win/lose condition. But it's very amenable to trying things out and seeing if you roughly reached your goal. If self-training works for coding, that's all you need.
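
Rough sketch of what I mean (generate_candidates is a stand-in for a model call; all of this is hypothetical, not any real training pipeline): generate candidate programs, run the tests, and keep whatever passes as new verified training examples:

    # "Generate -> run tests -> keep what passes" loop for code.
    import random

    def generate_candidates(task, n=8):
        # Placeholder: a real system would sample n programs from a code model here.
        snippets = [
            "def add(a, b): return a - b",
            "def add(a, b): return a + b",
            "def add(a, b): return a * b",
        ]
        return random.choices(snippets, k=n)

    def passes_tests(src, tests):
        env = {}
        try:
            exec(src, env)                   # run the candidate in a scratch namespace
            return all(env[f](*args) == want for f, args, want in tests)
        except Exception:
            return False

    TASK = "write add(a, b) returning the sum"
    TESTS = [("add", (2, 3), 5), ("add", (-1, 1), 0)]

    training_set = []
    for _ in range(5):
        for cand in generate_candidates(TASK):
            if passes_tests(cand, TESTS):
                training_set.append((TASK, cand))   # verified example to train on later

    print(f"kept {len(training_set)} verified solutions")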

replies(5): >>44065242 #>>44065410 #>>44065545 #>>44065571 #>>44068254 #
skywhopper ◴[] No.44065410[source]
But how does AI try and learn anything that’s not entirely theoretical? Your example of Go contradicts your point. Deep learning made a model that can play Go really well, but as you say, it’s a finite problem disconnected from real-world implications, ambiguities, and unknowns. How does AI deal with unknowns about the real world?
replies(1): >>44066191 #
tux3 ◴[] No.44066191[source]
I don't think putting them in the real world during training is a short-term goal, so you won't find this satisfying, but I would be perfectly okay with leaving that for later. If we can reach AI coders that are superhuman at self-improving, we will have increased our capacity to solve problems so much that it is better to wait and solve the problem later than to try to handwave a solution now.

Maybe there is some barrier that requires physical interaction with the real world; that's possible. But just looking at current LLMs, they seem plenty comfortable with implications, ambiguities and unknowns. There's a sense in which we still see them as primitive mechanical robots, when they already understand language and predict written thoughts in all their messiness and uncertainty.

I think we should focus on the easier problem of making AIs really good at theoretical tasks - electronic environments are much cheaper and faster than the real world - and we may find out that it's just another one of those things like Winograd schemas, writing poetry, passing a Turing test, or making art that most people can't tell apart from human art: things that were uniquely human or that we thought would definitely require AGI, but that are now boring and obviously easy.