
577 points simonw | 2 comments
lxgr ◴[] No.44725807[source]
This raises an interesting question I’ve seen occasionally addressed in science fiction before:

Could today’s consumer hardware run a future superintelligence (or, as a weaker hypothesis, at least contain some lower-level agent that can bootstrap something on other hardware via networking or hyperpersuasion) if the binary dropped out of a wormhole?

replies(3): >>44726339 #>>44726465 #>>44732521 #
bob1029 ◴[] No.44726465[source]
This is the premise of all of the ML research I've been into. The only difference is replacing the wormhole with linear genetic programming, neuroevolution, et al. The size of programs in the demoscene is what originally sent me down this path.

The biggest question I keep asking myself: what is the Kolmogorov complexity of a binary image that provides the exact same capabilities as the current generation of LLMs? What are the chances it could run on the machine under my desk right now?
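Kolmogorov complexity itself is uncomputable, but compressed size gives a crude empirical upper bound: if a compressor shrinks a string to N bytes, K of that string is at most N plus the (constant) size of the decompressor. A minimal sketch of that bounding trick, using stdlib gzip on synthetic data (the inputs here are illustrative, not model weights):

```python
import gzip
import os

def kolmogorov_upper_bound(data: bytes) -> int:
    # Compressed size upper-bounds K(data), up to the constant
    # cost of the decompressor itself (ignored here).
    return len(gzip.compress(data, compresslevel=9))

# Highly regular data compresses far below its raw length...
regular = b"0123456789" * 10_000          # 100,000 bytes of pattern
# ...while incompressible noise barely shrinks at all.
noise = os.urandom(100_000)

print(kolmogorov_upper_bound(regular))    # a few hundred bytes
print(kolmogorov_upper_bound(noise))      # ~100,000 bytes
```

The open question in the comment is essentially whether LLM capability sits closer to the `regular` case (a small program plus a lot of redundancy) or the `noise` case (weights that are near-incompressible).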

I know how many AAA frames per second my machine is capable of rendering. I refuse to believe the gap between running CS2 at 400 fps and getting ~100 b/s of UTF-8 text out of an NLP black box is this big.
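The raw-throughput gap being gestured at can be made concrete with back-of-envelope arithmetic. All numbers below are illustrative assumptions (1080p, 3 bytes per pixel, 400 fps, 100 bytes/s of text), not measurements:

```python
# Raw byte output of a game renderer vs. an LLM's text stream.
width, height, bytes_per_pixel = 1920, 1080, 3
fps = 400
frame_bytes = width * height * bytes_per_pixel      # ~6.2 MB per raw frame
render_bps = frame_bytes * fps                      # bytes/second rendered

llm_bps = 100                                       # ~100 B/s of UTF-8 text

print(f"renderer: {render_bps / 1e9:.2f} GB/s")     # ~2.49 GB/s
print(f"ratio:    {render_bps // llm_bps:,}x")      # ~25 million to one
```

Of course this compares raw bytes, not information content, which is exactly the objection the reply below raises: rendered frames are highly redundant, while the "value" of a bit of output depends entirely on how hard it was to produce.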

replies(1): >>44726644 #
1. bgirard ◴[] No.44726644[source]
> ~100b/s of UTF8 text out of a NLP black box is this big

That's not a good measure. NP problem solutions are only a single bit, but they are much harder to solve than CS2 frames for large N. If it could solve any problem perfectly, I would pay you billions for just 1b/s of UTF8 text.
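The single-bit point can be illustrated with any NP decision problem: the answer is one bit, but brute force does exponential work to produce it. A toy subset-sum checker (chosen here purely as an illustration; the thread names no specific problem):

```python
from itertools import combinations

def subset_sum_exists(nums: list[int], target: int) -> bool:
    # NP decision problem: the output is a single bit, but brute
    # force examines up to 2^len(nums) subsets to compute it.
    return any(
        sum(combo) == target
        for r in range(len(nums) + 1)
        for combo in combinations(nums, r)
    )

print(subset_sum_exists([3, 34, 4, 12, 5, 2], 9))   # True  (4 + 5)
print(subset_sum_exists([3, 34, 4, 12, 5, 2], 30))  # False
```

For large N, those two one-bit answers cost vastly more compute than rendering a frame, which is why bytes-per-second is a poor proxy for capability.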

replies(1): >>44727722 #
2. bob1029 ◴[] No.44727722[source]
> If it could solve any problem perfectly, I would pay you billions for just 1b/s of UTF8 text.

Exactly. This is what compels me to try.