I remember all the hype OpenAI generated before the release of GPT-2, where they were so afraid, ooh so afraid, to release it, and now it's a non-issue. It was all just a marketing gimmick.
On the other hand, if you mean "give you the correct answer to your question 100% of the time," then I agree. Though then, what about things that exist only in your mind ("guess the number I'm thinking" type problems)?
I say: it's not human-like intelligence; it's just predicting the next token probabilistically.
Some AI advocate says: humans are just predicting the next token probabilistically, fight me.
The problem here is that "predicting the next token probabilistically" is a framing that covers any kind of cleverness, up to and including magical, impossible omniscience. That doesn't mean it's how every kind of cleverness is actually done, or could realistically be done. And it has to be the *correct* next token, where all the details of what's actually required are buried in that word "correct": sometimes it literally means the same as "likely", and other times it merely produces a reasonable, excusable, intelligence-esque effort.
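To make the framing concrete, here is a minimal sketch of what "predicting the next token probabilistically" means mechanically. The bigram table and its probabilities are entirely made up for illustration; a real LLM computes a distribution over tens of thousands of tokens with a neural network, but the sampling step at the end looks like this:

```python
import random

# Toy stand-in for a language model: given the current token, it assigns
# probabilities to candidate next tokens. All entries are invented.
MODEL = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
}

def predict_next(token, rng=random.Random(0)):
    """Sample the next token from the model's probability distribution."""
    dist = MODEL[token]
    candidates = list(dist)
    weights = [dist[t] for t in candidates]
    # Probabilistic choice: the most likely token usually wins, but not always.
    return rng.choices(candidates, weights=weights, k=1)[0]

print(predict_next("the"))
```

Note that nothing in this mechanism knows whether the sampled token is "correct"; it only knows which tokens were likely in similar contexts, which is exactly the gap the comment above is pointing at.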
Is it safe? Probably. But it depends, right? How did you handle the solder? How often are you using the solder? Were you wearing gloves? Did you wash your hands before licking your fingers? What is your age? Why are you asking the question? Did you already lick your fingers and need to know if you should see a doctor? Is it hypothetical?
There is no “correct answer” to that question. Some answers are better than others, yes, but you cannot have a “correct answer”.
And I did assert that we are entering into philosophy here: what it means to know something, and what truth even means.
We've all had conversations with humans who keep jumping in to complete your sentence, assuming they know what you're about to say, and don't quite guess correctly. So AI evangelists offer "it's no worse than humans" as their proof. I kind of like their logic. They never claimed to have built HAL /s
The unseen test data.
Obviously omniscience is physically impossible. The point, though, is that the better next-token prediction gets, the more intelligent the system must be.
This essay has aged extremely well.
Either the next tokens can include "this question can't be answered", "I don't know", and the like, in which case there is no omniscience.
Or the next tokens must contain answers that never go meta, but only pick one of the potential direct answers to the question. Then the halting problem prevents finite-time omniscience (which, from the perspective of finite beings, is all omniscience).