335 points by ingve | 3 comments
alchemist1e9 ◴[] No.45083047[source]
And these are the same quantum computers that will eventually break elliptic curve cryptography? Now I’m very confused.
replies(7): >>45083084 #>>45083090 #>>45083099 #>>45083208 #>>45083232 #>>45083263 #>>45083447 #
santiagobasulto ◴[] No.45083099[source]
The potential is there; we just haven't realized it yet. It's the same with AI, AGI, and all that. If you'd read a response from GPT-2 back in 2019, you'd also be like, "and these are the same models that will eventually give us AGI?"
replies(2): >>45083149 #>>45083200 #
1. heyjamesknight ◴[] No.45083149[source]
Not a great analogy, since there’s zero chance the kind of model behind GPT-2 will give us AGI.
replies(1): >>45083243 #
2. ACCount37 ◴[] No.45083243[source]
Zero? Aren't you a little bit overconfident on that?

Transformer LLMs have already given us the most general AI yet, by far, and they keep being developed further, with a number of recent breakthroughs and milestones.

replies(2): >>45083328 #>>45110041 #
3. heyjamesknight ◴[] No.45110041[source]
No. The fundamental encoding unit of an LLM is semantic. Mapping reality into semantic space is a form of lossy compression. There are entire categories of experience that can't be properly modeled in semantic space.

Even in "multimodal" models, text is still the fundamental unit of data storage and transformation between the modes. That's not the case for how your brain works—you don't see a pigeon, label it as "pigeon," and then refer to your knowledge about "pigeons". You just experience the pigeon.

We have 100K years of Homo sapiens thriving without language. "General Intelligence" occurs at a level above semantics.