api ◴[] No.44064830[source]
I'm skeptical. Where will the training data to go beyond human come from?

Humans got to where they are by being embedded in the world. All of biological evolution, from archaebacteria to humans, was required to get here. To go beyond human... how? How, without being embodied, trying things, and learning? It's one thing to go where there are roads and another to go beyond them.

I think a lot of the "foom" people have a fundamentally Platonic or Idealist (in the philosophical sense) view of learning and intelligence. Intelligence is able to reason in a void and construct not only knowledge but itself. You don't have to learn to know -- you can reason from ideal priors.

I think this is fantasy. It's like an informatic / learning perpetual motion machine. Learning requires input from the world. It requires training data. A brain in a vat can't learn anything and it can't reason beyond the bounds of the accumulated knowledge it's already carrying. I don't think it's possible to know without learning or to reach valid conclusions without testing or observing.

I've never seen an attempt to prove such a thing, but my intuition is that there is in fact some kind of conservation law here. Ultimately all information comes from "the universe." Where it comes from beyond that, we don't know -- the ultimate origin of information in the universe isn't something we currently understand cosmologically, at least not scientifically. Obviously people have various philosophical and metaphysical ideas.

That being said, it's still quite possible that a "human-level AI" in a raw "IQ" sense that is super-optimized and hyper-focused and tireless could be super-human in many ways. In the human realm I often feel like I'd trade a few IQ points for more focus and motivation and ease at engaging my mind on any task I want. AIs do not have our dopamine system or other biological limitations. They can tirelessly work without rest, without sleep, and in parallel.

So I'm not totally dismissive of the idea that AI could challenge human intelligence or replace human jobs. I'm just skeptical of what I see as the magical fantastic "foom" superintelligence idea that an AI could become self-improving and then explode into realms of god-like intellectual ability. How will it know how to do that? Like a perpetual motion machine -- where is the energy coming from?

replies(8): >>44064947 #>>44064957 #>>44064985 #>>44065137 #>>44065144 #>>44065251 #>>44066705 #>>44067727 #
tux3 ◴[] No.44065144[source]
You can perfectly well try things and learn without being embodied. The analogy to how humans learn only goes so far; it's myopic to think anything else is impossible. It's already happening.

The situation today is that any benchmark you come up with has a good chance of being saturated within the year. Benchmarks can be used directly to build series of exercises to learn from.

And they do learn. Gradient descent doesn't care whether the training data comes from direct interaction with "the universe" in some deep spiritual sense. It fits the function anyway.
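
To make that concrete, here's a toy sketch (everything in it is invented for illustration, nothing comes from a real training stack): hand-rolled gradient descent fitting a function whose training pairs are generated by a program, with no embodiment anywhere in the loop.

    import numpy as np

    rng = np.random.default_rng(0)

    # The "world" here is just a program we can query for labels.
    target = lambda x: np.sin(3 * x)

    # Tiny one-hidden-layer net, trained by hand-rolled gradient descent.
    W1, b1 = rng.normal(0, 1, (1, 32)), np.zeros(32)
    W2, b2 = rng.normal(0, 1, (32, 1)), np.zeros(1)
    lr = 0.02

    for step in range(5000):
        x = rng.uniform(-1, 1, (64, 1))   # data invented on the fly
        y = target(x)
        h = np.tanh(x @ W1 + b1)          # forward pass
        pred = h @ W2 + b2
        err = pred - y                    # gradient of MSE/2 w.r.t. pred
        dh = (err @ W2.T) * (1 - h ** 2)  # backprop through tanh
        W2 -= lr * (h.T @ err) / 64;  b2 -= lr * err.mean(0)
        W1 -= lr * (x.T @ dh) / 64;   b1 -= lr * dh.mean(0)

    test = np.linspace(-1, 1, 5).reshape(-1, 1)
    print(np.c_[target(test), np.tanh(test @ W1 + b1) @ W2 + b2])

The optimizer never knows, or needs to know, whether target() is "the universe" or a synthetic exercise generator.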

It is much easier to find new questions and new problems than to answer them, so while we do run out of text on the Internet pretty quickly, we don't run out of exercises until far beyond human level.

Look at basic, boring self-play Go AIs. That's a task with about as much hands-on connection to Nature and "the universe" as solving sudokus, writing code, or solving math problems. You don't need much contact with the real world at all. Well, self-play works just fine. It does do self-improvement without any of your mystical philosophical requirements.
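
Here's a toy version of that loop, shrunk to tic-tac-toe so it fits in a comment (names and hyperparameters are made up; real Go systems are vastly more elaborate). The only external signal is the game's win/lose/draw outcome, and the agent still improves by playing itself:

    import random

    V = {}                 # afterstate values, from player +1's point of view
    EPS, ALPHA = 0.1, 0.5  # exploration and learning rates (made up)

    LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in LINES:
            if b[i] != 0 and b[i] == b[j] == b[k]:
                return b[i]
        return 0

    def play_game():
        b, player, history = [0] * 9, 1, []
        while not winner(b) and any(v == 0 for v in b):
            moves = [i for i in range(9) if b[i] == 0]
            def val(m):                   # negamax: -1 minimizes +1's value
                after = tuple(b[:m] + [player] + b[m+1:])
                return player * V.get(after, 0.0)
            m = random.choice(moves) if random.random() < EPS \
                else max(moves, key=val)
            b[m] = player
            history.append(tuple(b))
            player = -player
        z = winner(b)                     # +1, -1, or 0: the only signal
        for s in history:                 # pull every afterstate toward it
            V[s] = V.get(s, 0.0) + ALPHA * (z - V.get(s, 0.0))
        return z

    results = [play_game() for _ in range(20000)]
    print("draws in last 1000 games:", results[-1000:].count(0))

Nothing in the loop touches the physical world beyond the rules of the game, yet play gets measurably stronger (well-played tic-tac-toe converges toward draws).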

With coding it's harder to judge the result; there's no clear win-or-lose condition. But it's very amenable to trying things out and seeing whether you've roughly reached your goal. If self-training works for coding, that's all you need.
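
A sketch of that loop, assuming some candidate generator exists -- sample_program() below is a hypothetical stand-in, not a real model API. The point is that the test harness, not a human, supplies the learning signal:

    import random

    def sample_program():
        # Stand-in generator: in reality a model would sample the code.
        bodies = ["x + x", "x * x", "x * x + 1", "x - 1", "2 * x + 1"]
        return "def f(x):\n    return " + random.choice(bodies)

    def passes_tests(src):
        # The rough win condition: does the candidate meet the spec?
        # Spec for this toy: f(x) == 2*x + 1 on a few probes.
        scope = {}
        try:
            exec(src, scope)
            return all(scope["f"](x) == 2 * x + 1 for x in (0, 1, 5))
        except Exception:
            return False

    # Verified candidates become training data for the next round.
    accepted = [p for p in (sample_program() for _ in range(200))
                if passes_tests(p)]
    print(len(accepted), "verified programs collected for retraining")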

replies(5): >>44065242 #>>44065410 #>>44065545 #>>44065571 #>>44068254 #
yeahokbut ◴[] No.44065545[source]
It’s myopic to think other things are not possible. Sure.

No immutable force of physics acts as a forcing function to continue with AI. That's all a debatable political conversation for the aggregate, and the aggregate outnumbers tech people.

Computer science researchers are very much a minority, and the biological mass of the other billions is very capable of doing away with them.

LLMs are a known quantity, and while people will make money off them, energy-based models will simplify even further the electromagnetic geometry needed, eliminating the programmer ecosystem of languages, editors, and state used to ship software. The OS will bootstrap from a model and scaffold out its internal state. We'll save the resources spent storing all the developer cruft of the trade and the compute cycles spent running it. We'll compress down to a purely data-driven transform of machine state, with a few variadic functions processing model inputs.

Source: have seen it in the lab.

So coding is going away, because coding as a requirement was merely a stopgap until manufacturing caught up. The plan to achieve these things was set in motion decades ago. It's why politicians are letting it happen.

So we can do different things. That's not the question. The question is how we handle the transition. Violent collapse, as ossified pols and self-aggrandizing tech bros refuse to understand the reality on Main Street? That doesn't sit well with human biology when there are kids to feed.

I for one will cover my ass by going with the flow of my immediate community, and if that means getting Luigi on the establishment or being considered dead weight and a traitor (say what you want about such social concepts; they are what the majority live by), well, sorry tech bros, but my biology means more to me than yours. Pew pew.

Yes, you present a grammatically correct sentence with consistent internal logic. You're still one of billions, and our country (you included) lets random unknowns die in the street every day. Humanity won't bat an eye at wiping out some coder bros.