129 points NotInOurNames | 11 comments
api ◴[] No.44064830[source]
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are by being embedded in the world. All of biological evolution, from archaebacteria to humans, was required to get to human. To go beyond human... how? How, without being embodied, trying things, and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

replies(8): >>44064947 #>>44064957 #>>44064985 #>>44065137 #>>44065144 #>>44065251 #>>44066705 #>>44067727 #
1. tux3 ◴[] No.44065144[source]
You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build a series of exercises to learn from.

And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyway.
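
A minimal sketch of that point, in plain NumPy: the training pairs below are invented on the spot rather than measured from anything physical, and gradient descent fits them all the same. (A toy example, not anyone's actual training setup.)

    # Gradient descent fits a function regardless of where the training
    # pairs came from -- here they are purely synthetic.
    import numpy as np

    rng = np.random.default_rng(0)

    # "Exercise generator": synthetic data, no contact with the physical world.
    X = rng.uniform(-3, 3, size=(1000, 1))
    y = np.sin(X) + 0.1 * rng.normal(size=X.shape)   # target to fit

    # One-hidden-layer network trained by plain full-batch gradient descent.
    W1 = rng.normal(scale=0.5, size=(1, 64)); b1 = np.zeros(64)
    W2 = rng.normal(scale=0.5, size=(64, 1)); b2 = np.zeros(1)
    lr = 0.05

    for step in range(2000):
        h = np.tanh(X @ W1 + b1)            # forward pass
        pred = h @ W2 + b2
        err = pred - y
        loss = np.mean(err ** 2)

        # backward pass (mean-squared-error gradients)
        d_pred = 2 * err / len(X)
        dW2 = h.T @ d_pred; db2 = d_pred.sum(0)
        d_h = (d_pred @ W2.T) * (1 - h ** 2)
        dW1 = X.T @ d_h; db1 = d_h.sum(0)

        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    print(f"final MSE on synthetic data: {loss:.4f}")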

It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

Look at basic, boring self-playing Go AIs. That's a task with about the same amount of hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need very much contact with the real world at all. Well, self-play works just fine. It does do self-improvement without any of your mystical philosophical requirements.
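
A toy sketch of what that loop looks like, with tic-tac-toe standing in for Go and a plain Monte-Carlo value update standing in for anything AlphaZero-like; the only teacher is the win/lose/draw signal the agent generates by playing itself:

    # Toy self-play sketch: the agent learns position values purely from
    # the outcomes of games it plays against itself.
    import random
    from collections import defaultdict

    V = defaultdict(float)   # value of a board for the player who just moved
    ALPHA, EPS = 0.2, 0.1    # learning rate, exploration rate

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in LINES:
            if b[i] != ' ' and b[i] == b[j] == b[k]:
                return b[i]
        return 'draw' if ' ' not in b else None

    def moves(b):
        return [i for i, c in enumerate(b) if c == ' ']

    def play(b, i, p):
        return b[:i] + p + b[i+1:]

    def self_play_game():
        b, p = ' ' * 9, 'X'
        history = []                      # (state, player who moved into it)
        while winner(b) is None:
            if random.random() < EPS:     # explore
                m = random.choice(moves(b))
            else:                         # exploit learned values
                m = max(moves(b), key=lambda i: V[play(b, i, p)])
            b = play(b, m, p)
            history.append((b, p))
            p = 'O' if p == 'X' else 'X'
        result = winner(b)
        for state, player in history:     # propagate the final outcome back
            target = 0.0 if result == 'draw' else (1.0 if result == player else -1.0)
            V[state] += ALPHA * (target - V[state])

    for _ in range(20000):
        self_play_game()
    print("board positions valued purely from self-play:", len(V))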

With coding it's harder to judge the result; there's no clear win/lose condition. But it's very amenable to trying things out and seeing whether you roughly reached your goal. If self-training works for coding, that's all you need.
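
A rough sketch of that "try it out and see if you reached the goal" loop. Everything here is hypothetical scaffolding: propose_candidates stands in for whatever model writes the code, and verification is just executing each candidate against tests and keeping what passes as new training data.

    # Sketch: generate candidates, run them against tests, keep what passes.
    # `propose_candidates` is a hypothetical stand-in for the code-writing
    # model; a hard-coded fake plays that role in the usage example below.
    from typing import Callable

    def passes_tests(source, tests, fn_name):
        """Execute a candidate and check it against (args, expected) pairs."""
        scope = {}
        try:
            exec(source, scope)           # no sandboxing here -- sketch only
            fn: Callable = scope[fn_name]
            return all(fn(*args) == expected for args, expected in tests)
        except Exception:
            return False

    def self_training_round(task, propose_candidates, n=8):
        """Sample candidates, keep the verified ones as new training data."""
        verified = []
        for source in propose_candidates(task["prompt"], n):
            if passes_tests(source, task["tests"], task["fn_name"]):
                verified.append({"prompt": task["prompt"], "solution": source})
        return verified                   # feed these back into training

    task = {
        "prompt": "write add(a, b) returning the sum",
        "fn_name": "add",
        "tests": [((1, 2), 3), ((-1, 1), 0)],
    }
    fake_model = lambda prompt, n: ["def add(a, b):\n    return a + b"]
    print(self_training_round(task, fake_model))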

replies(5): >>44065242 #>>44065410 #>>44065545 #>>44065571 #>>44068254 #
2. palata ◴[] No.44065242[source]
> It fits the function anyways.

And then it works well when interpolating, less so when extrapolating. Not sure how much novelty we can get from interpolation...

> It is much easier to find new questions and new problems than to answer them

Which doesn't mean, at all, that it is easy to find new questions about stuff you can't imagine.

3. skywhopper ◴[] No.44065410[source]
But how does AI try and learn anything that’s not entirely theoretical? Your example of Go contradicts your point. Deep learning made a model that can play Go really well, but as you say, it’s a finite problem disconnected from real-world implications, ambiguities, and unknowns. How does AI deal with unknowns about the real world?
replies(1): >>44066191 #
4. yeahokbut ◴[] No.44065545[source]
It’s myopic to think other things are not possible. Sure.

No immutable force of physics acts as a forcing function to continue with AI. Whether we continue is a debatable political question, a conversation for the aggregate, and the aggregate outnumbers tech people.

Computer science researchers are very much a minority, and the biological mass of the other billions is very capable of doing away with them.

LLMs are a known quantity, and while people will make money off them, energy-based models will simplify even further the electromagnetic geometry needed to ship software, eliminating the programmer ecosystem of languages, editors, and state. The OS will bootstrap from a model and scaffold out its internal state. We'll save the resources spent storing all the developer cruft of the trade and the compute cycles spent running it. We'll compress down to a purely data-driven transform of machine state, with a few variadic functions processing model inputs.

Source: have seen it in the lab.

So coding is going away, because coding as a requirement was merely a stopgap until manufacturing caught up. The plan to achieve these things was settled on decades ago. It's why politicians are letting it happen.

So we can do different things. That's not the question. The question is how we handle the transition. Violent collapse, as ossified pols and self-aggrandizing tech bros refuse to understand the reality for Main Street, a reality that doesn't sit well with human biology that has kids to feed?

I for one will cover my ass by going with the flow of my immediate community, and if that means going Luigi on the establishment or being considered dead weight and a traitor (say what you want about such social concepts; they are what the majority live by), well, sorry tech bros, but my biology means more to me than yours. Pew pew.

Yes, you present a grammatically correct sentence with a consistent internal logic. You're still one of billions, and our country lets random unknowns die in the street every day. Humanity won't bat an eye at wiping out some coder bros.

5. api ◴[] No.44065571[source]
> it's myopic to think anything else is impossible. It's already happening.

Well, hey, I could be wrong. And if I am, here's a weird thought: maybe that's our Fermi paradox answer.

If it's possible to reason ex nihilo to truth and reality, then reality and the universe are, beyond a point, superfluous. Maybe what happens out there is that intelligences go "foom," become superintelligences, and then no longer need to explore. They can rationally, from first principles, elucidate everything that could conceivably exist, especially once they have a complete model of physics. You don't need to go anywhere or look at anything because it's already implied by logic, math, and reason.

... and ... that's why I think this is wrong, and it's a fantasy. It fails some kind of absurdity test. If it is possible, then there's something very weird about existence, like we're in a simulation or something.

replies(1): >>44066595 #
6. tux3 ◴[] No.44066191[source]
I don't think putting them in the real world during training is a short-term goal, so you won't find this satisfying, but I would be perfectly okay with leaving that for later. If we can reach AI coders that are superhuman at self-improving, we will have increased our capacity to solve problems so much that it is better to wait and solve the problem later than to try to handwave a solution now.

Maybe there is some barrier that requires physical interaction with the real world; that's possible. But just looking at current LLMs, they seem plenty comfortable with implications, ambiguities, and unknowns. There's a sense in which we still see them as primitive mechanical robots, when they already understand language and predict written thoughts in all their messiness and uncertainty.

I think we should focus on the easier problem of making AIs really good at theoretical tasks - electronic environments are much cheaper and faster than the real world - and we may find out that it's just another one of those things like Winograd schemas, writing poetry, passing a Turing test, or making art that most people can't tell apart from human art: things that were uniquely human, or that we thought would definitely require AGI, but that are now boring and obviously easy.

7. tux3 ◴[] No.44066595[source]
A simpler reason why it fails: you always need more energy. Every sort of development seems to correlate with energy use. You don't explore for the sake of learning something about another floating rock in space; you explore because that's where more resources are.
8. mrandish ◴[] No.44068254[source]
> You can perfectly try things and learn without being embodied.

A brilliant 'brain in a vat' can come up with novel answers to questions, but outside of narrow categories like pure mathematics and logic, which can be internally validated, the brain can't know how correct or incorrect its novel answers are without some way to objectively test, observe, and validate their correctness in the relevant domain (i.e., 'the real world'). A model is only as useful as the degree to which its parameters correctly reflect the modeled target. Even very complex and detailed simulations tend to de-correlate quickly when they repeatedly run untethered from ground truth. Games like Go have clear rule sets. Reality doesn't.
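
A small illustration of that de-correlation, assuming nothing beyond NumPy: a learned one-step model of a chaotic map can be accurate per step, yet an open-loop rollout that is never re-anchored to ground truth drifts quickly.

    # A learned one-step model of a chaotic system: accurate per step, but an
    # open-loop rollout, never re-anchored to ground truth, soon drifts.
    import numpy as np

    rng = np.random.default_rng(1)
    step = lambda x: 3.9 * x * (1 - x)        # "reality": a chaotic logistic map

    # Fit a small model to slightly noisy one-step observations of reality.
    xs = rng.uniform(0.05, 0.95, 500)
    ys = step(xs) + rng.normal(scale=0.01, size=xs.shape)
    model = np.poly1d(np.polyfit(xs, ys, deg=4))

    x_true = x_model = 0.4
    for t in range(1, 21):
        x_true = step(x_true)                 # reality keeps evolving
        x_model = float(model(x_model))       # model feeds on its own output
        if t % 5 == 0:
            print(f"t={t:2d}  reality={x_true:.3f}  rollout={x_model:.3f}  "
                  f"error={abs(x_true - x_model):.3f}")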

replies(1): >>44075624 #
9. api ◴[] No.44075624[source]
That was better said in some ways than my own comment.
replies(1): >>44076468 #
10. mrandish ◴[] No.44076468{3}[source]
Thanks! And I just saw, in a parallel post currently on the HN home page, that John Carmack said a similar thing in his lecture notes.

> "offline training can bootstrap itself off into a coherent fantasy untested by reality."

"a coherent fantasy untested by reality" is a lovely turn of phrase.

replies(1): >>44077539 #
11. api ◴[] No.44077539{4}[source]
The Yudkowskiites are all about coherent fantasies untested by reality, as are… really… a lot of philosophers throughout history. Maybe most.

I fundamentally do not believe in knowing without sensing or learning without experiencing. Of course it need not be direct experience. You can “download” information. But that information had to be gathered somehow at some point. There is only so much training data.

As I said — I don’t dismiss the idea of AI challenging human intellect or replacing human jobs. An AI “merely” as smart as a human but tireless and faster could seem superhuman. I am just intensely skeptical of the idea of something learning to self improve and then magically taking off into some godlike superintelligence realm far beyond what is latent in or implied by its training data. That would be an informatic perpetual motion machine. It would, in fact, be magic, in the fantasy sense.