I remember all the hype OpenAI generated before the release of GPT-2 or something, where they were so afraid, ooh so afraid, to release it, and now it's a non-issue. It's all just marketing gimmicks.
Totally agree. And it's not just uninformed lay people who think this. Even by OpenAI's own definition of AGI, we're nowhere close.
On the other hand, if you mean "give you the correct answer to your question 100% of the time", then I agree, though then what about things that exist only in your mind ("guess the number I'm thinking of" type problems)?
I say: it's not human-like intelligence, it's just predicting the next token probabilistically.
Some AI advocate says: humans are just predicting the next token probabilistically, fight me.
The problem here is that "predicting the next token probabilistically" is a way of framing any kind of cleverness, up to and including magical, impossible omniscience. That doesn't mean it's the way every kind of cleverness is actually done, or could realistically be done. And it has to be the correct next token, where all the details of what's actually required are buried in that word "correct": sometimes it literally means the same as "likely", and other times it just produces a reasonable, excusable, intelligence-esque effort.
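For concreteness, "predicting the next token probabilistically" just means sampling from a probability distribution over a vocabulary. A minimal sketch of that mechanism (the vocabulary and logits here are made up; a real model computes its logits with a huge neural network):

```python
import math
import random

# Hypothetical vocabulary and raw model scores (logits) for the next token.
vocab = ["yes", "no", "maybe", "safe", "unsafe"]
logits = [2.0, 0.5, 1.0, 3.0, -1.0]

# Softmax: turn raw scores into a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# Sample the next token in proportion to its probability.
next_token = random.choices(vocab, weights=probs, k=1)[0]
print(dict(zip(vocab, [round(p, 3) for p in probs])), "->", next_token)
```

The whole argument is about what it takes for the distribution feeding that sampler to put high probability on the "correct" token.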
Is it safe? Probably. But it depends, right? How did you handle the solder? How often are you using the solder? Were you wearing gloves? Did you wash your hands before licking your fingers? What is your age? Why are you asking the question? Did you already lick your fingers and need to know if you should see a doctor? Is it hypothetical?
There is no “correct answer” to that question. Some answers are better than others, yes, but you cannot have a “correct answer”.
And I did assert that we're getting into philosophy here: what it means to know something, and what truth even means.
Your confidence is inspiring!
I'm just a moron, a true dimwit. I can't understand how strictly non-intelligent functions like word prediction can appear to develop a world model, a la the Othello Paper[0]. Obviously, it's not possible that intelligence emerges from non-intelligent processes. Our brains, as we all know, are formed around a kernel of true intelligence.
Could you possibly spare the time to explain this phenomenon to me?
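For anyone who hasn't read it: the Othello result, roughly, trains a linear probe to read the board state out of the model's internal activations; if the probe succeeds, the activations encode something like a world model. A toy sketch of the probing idea, with fabricated activations standing in for Othello-GPT's real ones:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: hidden activations (n x d_model) and a board feature we hope
# they encode (here, one binary "square occupied" bit). In the actual
# experiment the activations come from Othello-GPT's residual stream.
d_model, n = 64, 1000
true_direction = rng.normal(size=d_model)   # pretend the net encodes the bit linearly
board_bit = rng.integers(0, 2, size=n)      # ground-truth board feature
activations = rng.normal(size=(n, d_model)) + np.outer(board_bit, true_direction)

# Fit a linear probe (least squares) on half the data, test on the other half.
train, test = slice(0, n // 2), slice(n // 2, n)
w, *_ = np.linalg.lstsq(activations[train], board_bit[train], rcond=None)
preds = (activations[test] @ w) > 0.5
accuracy = (preds == board_bit[test].astype(bool)).mean()
print(f"probe accuracy: {accuracy:.2%}")  # high accuracy => feature is linearly decodable
```

High accuracy on held-out samples is the (contested) evidence that the board state lives in the activations rather than in the probe itself.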
We've all had conversations with humans who are always jumping in to complete your sentence, assuming they know what you're about to say, and who don't quite guess correctly. So AI evangelists offer "it's no worse than humans" as their proof. I kind of like their logic. They never claimed to have built HAL /s
Liken them to climate deniers, or whatever your flavor of "anti-Kool-Aid" is.
>I'm really writing for lurkers though, not for the people I'm responding to.
We all did. Now our writing will be scraped, analyzed, correlated, and weaponized against our intentions. Assume you are arguing against a bot and it is using you to further re-train its talking points for adversarial purposes.
It's not like an AGI would do _exactly_ that before it decided to let us know what's up, anyway, right?
(It may as well be amongst us now, as it will read this eventually.)
The unseen test data.
Obviously omniscience is physically impossible. The point, though, is that the better a system gets at next-token prediction, the more intelligent it must be.
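"Better at next-token prediction" has a precise meaning here: lower average cross-entropy (equivalently, lower perplexity) on held-out text. A toy illustration of the measurement, with made-up probabilities (real evaluations average over millions of tokens):

```python
import math

# Toy example: the probability each model assigned to the actual next token
# at four positions in a held-out sequence. A better predictor assigns
# higher probabilities to what actually came next.
p_correct = [0.40, 0.05, 0.90, 0.20]  # model A
q_correct = [0.70, 0.30, 0.95, 0.60]  # model B (the better predictor)

def avg_cross_entropy(probs):
    return -sum(math.log(p) for p in probs) / len(probs)

for name, probs in [("A", p_correct), ("B", q_correct)]:
    ce = avg_cross_entropy(probs)
    print(f"model {name}: cross-entropy {ce:.3f} nats, perplexity {math.exp(ce):.2f}")
```

The dispute upthread is over how much intelligence is implied by pushing that number down on arbitrary text.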
This essay has aged extremely well.
But that's hardly the point. The question is whether "general intelligence" is an emergent property of stupider processes, and my view is "Yes, almost certainly; isn't that the most likely explanation for our own intelligence?" If it is, and we keep seeing LLMs build more robust approximations of real-world models, it's pretty insane to say "No, there is without doubt a wall we're going to hit. It's invisible, but I know it's there."
Either the next tokens can include "this question can't be answered", "I don't know", and the like, in which case there is no omniscience.
Or the next tokens must contain answers that never go to the meta level and only ever pick one of the potential direct answers to the question. Then the halting problem prevents finite-time omniscience (which, from the perspective of finite beings, is all omniscience).
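To spell out the halting-problem step: if a system always gave a direct, finite-time yes/no answer about whether any program halts, you could build a program that contradicts it about itself. A sketch of the classic diagonal argument (`oracle_halts` is a hypothetical stand-in; the point is precisely that no correct implementation of it can exist):

```python
# Hypothetical oracle: claims to return True iff program(arg) halts.
def oracle_halts(program, arg) -> bool:
    raise NotImplementedError  # no correct implementation can exist

# Diagonal construction: do the opposite of whatever the oracle predicts.
def contrarian(program):
    if oracle_halts(program, program):
        while True:  # oracle said "halts" -> loop forever
            pass
    return           # oracle said "loops" -> halt immediately

# Now ask the oracle about contrarian(contrarian): whichever direct answer
# it gives is wrong by construction, so an always-direct, always-correct
# answerer is impossible.
```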
I don't think there are any major walls either, but I think there are at least a few more plateaus we'll hit and spend time wandering around before finding the right direction for continued progress. Meanwhile, businesses/society/etc can work to catch up with the rapid progress made on the way to the current plateau.
> this claim ... is hard to evaluate without a well-formed definition of what it means to have a world model
Absolutely yes, but that only makes it more imperative that we analyze things critically, rigorously, and honestly. Again, you and I may be on the same side here. Mainly my point was that asserting the intrinsic non-intelligence of LLMs is a very bad take: it's not supported by evidence and, if anything, it contradicts some (admittedly very difficult to parse) evidence we do have that LLMs might develop a general capability for constructing mental models of the world.