129 points NotInOurNames | 1 comment
api No.44064830
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are from being embedded in the world. All of biological evolution from archaebacteria to humans was required to get to human. To go beyond human... how? How, without being embodied and trying things and learning? It's one thing to go where there are roads and another thing to go beyond that.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.
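
There is at least a formal cousin of that intuition in information theory: the data processing inequality, which says that no amount of post-processing can increase the information a signal carries about its source. In standard notation:

    X \to Y \to Z \ \text{(a Markov chain)} \implies I(X;Z) \le I(X;Y)

A model trained only on Y cannot extract more about the world X than Y already contains.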

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus, more motivation, and more ease in engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

replies(8): >>44064947 #>>44064957 #>>44064985 #>>44065137 #>>44065144 #>>44065251 #>>44066705 #>>44067727 #
1. disambiguation No.44067727
> You don't have to learn to know -- you can reason from ideal priors.

This is kind of how math works. There are plenty of mathematical concepts that are consistent and true yet useless (in the sense of having no relation to anything tangible). Although you could argue that we only figured out things like Pi because we had the initial, practical inspiration of counting on our fingers. But mathematical truth probably could exist in a vacuum.
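
A minimal sketch of that point in Python (the series is the classical Leibniz formula; the term count is arbitrary): pi falls out of pure symbol manipulation, with no circle ever measured.

    # Approximate pi from a formal series alone (Leibniz):
    #   pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    # Nothing physical is observed; the value follows from the axioms.
    def leibniz_pi(terms: int) -> float:
        total = 0.0
        for n in range(terms):
            total += (-1) ** n / (2 * n + 1)
        return 4 * total

    print(leibniz_pi(1_000_000))  # ~3.141592, slowly converging on pi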

> A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying.

It makes sense that knowledge and information are derived from primary data (our physical experience), yet the brain-in-a-vat idea is still an interesting thought experiment (no pun intended). It's not that the brain wouldn't keep busy, given the mind's ability to imagine, but it would likely invent a body of information that is all nonsense. Physical reality makes imagination coherent, yet imagination is necessary to make the leaps forward.

> Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know

That's an interesting assertion - knowledge and information are both dependent on and limited by the universe and our ability to experience it, as well as by proxies for experience (scientific measurement).

Though information is itself an abstraction, like a text editor versus the billions of transistors in a processor - we're not concerned with each and every particle dancing around the room but instead with simplified abstractions and useful approximations. We call these models "the truth" and assert that the universe is governed by exact laws. We might as well exist inside a simulation in which we are slowly but surely reverse engineering the source code.

That assumption is the crux of intelligence - there is an objective truth, it is knowable, and intelligence can be defined (at least partially) by the breadth, quality, and utilization of the information an agent possesses - otherwise you're just a brain in a vat churning out nonsense. Ironically, we're making these assumptions from a position of imperfect information. We don't know that's how it works, so our reasoning may be imperfect.

Information existing "beyond the universe" becomes a useless notion, since we only care about information insofar as it maps to reality (at least as a prerequisite for intelligence).

A more troubling question is whether the reality of the universe extends beyond what can be imagined.

> How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

I suppose once it's able to measure all things around it, including itself, it will be able to achieve "gradient ascent".
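
To make the metaphor concrete, here is a toy gradient ascent in Python (the objective and step size are invented for illustration): an agent that can measure its own score can climb toward a peak, but only of the function it is able to measure.

    # Toy gradient ascent: nudge a parameter uphill on a measured score.
    def score(x: float) -> float:
        return -(x - 3.0) ** 2    # hypothetical objective with its peak at x = 3

    def d_score(x: float) -> float:
        return -2.0 * (x - 3.0)   # gradient of the score

    x, lr = 0.0, 0.1
    for _ in range(100):
        x += lr * d_score(x)      # step in the measured uphill direction
    print(x)                      # converges toward 3.0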

> Where will the training data to go beyond human come from?

I think it's clear that LLMs are not the future, at least not alone. As you state, knowing all the man-made roads is not the same as being able to invent your own. If I had to bet, it's more likely to come from something like AlphaFold - a Solver that tells us how to make better thinking machines. In the interim, we have tireless stochastic parrots, which have their merits but are decidedly not the proto-superintelligence that tech bros love to get hyped up over.