Okay, ChatGPT is only text-to-text, but Google & Co are adding more modalities now, including images, audio and robotics. I think one missing step is to fuse the training and inference regimes into one, just as in animals. That probably requires something other than the usual transformer-based token predictors.
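To make that concrete, here's a minimal sketch of what a fused regime could look like (purely illustrative; the model, loss, and feedback signal are all made up, not anyone's actual system): every inference step doubles as a gradient update, with no separate training phase.

```python
import torch
import torch.nn as nn

# Toy model: in a fused regime there is no separate "training phase" --
# every inference step also produces an update from whatever feedback
# signal the environment happens to provide.
model = nn.Linear(16, 4)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def act_and_learn(observation: torch.Tensor, feedback: torch.Tensor) -> torch.Tensor:
    """Run inference, then immediately learn from the feedback signal."""
    prediction = model(observation)        # inference
    loss = loss_fn(prediction, feedback)   # compare against feedback
    optimizer.zero_grad()
    loss.backward()                        # training happens in the same step
    optimizer.step()
    return prediction.detach()

# Usage: a stream of (observation, feedback) pairs, no train/test split.
for _ in range(100):
    obs = torch.randn(16)
    fb = torch.randn(4)
    act_and_learn(obs, fb)
```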
This is going to become a great strategy very quickly.
It's already showing quite good results for image generation.
This has always been an important missing piece. Without it, ChatGPT is just a natural language interface to the information it was trained on. Still useful, but unable to learn (aside from what fits in its context window).
It's not clear it is one. Sleep is training (replay from hippocampus). Wake is inference.
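In ML terms, that split looks a lot like experience replay. Here's a toy sketch of the analogy (my own illustration, not a claim about neuroscience): "wake" runs pure inference while filling a replay buffer, and "sleep" trains on batches sampled from that buffer.

```python
import random
import torch
import torch.nn as nn

model = nn.Linear(8, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
replay_buffer = []  # stands in for the hippocampus

def wake_step(observation, target):
    """Wake: inference only; experiences are stored, weights stay frozen."""
    with torch.no_grad():
        prediction = model(observation)
    replay_buffer.append((observation, target))
    return prediction

def sleep_phase(batches=50, batch_size=8):
    """Sleep: replay stored experiences and consolidate them into the weights."""
    for _ in range(batches):
        batch = random.sample(replay_buffer, min(batch_size, len(replay_buffer)))
        obs = torch.stack([o for o, _ in batch])
        tgt = torch.stack([t for _, t in batch])
        loss = loss_fn(model(obs), tgt)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# One "day": collect during wake, consolidate during sleep.
for _ in range(200):
    wake_step(torch.randn(8), torch.randn(2))
sleep_phase()
```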
We are NOT close to AGI.
* Fancy Markov chain (LLM) is not AGI.
* Stable diffusion style of image generation is NOT AGI.
* Fancy computer vision is NOT AGI.
Honestly, I don't think we are any closer to AGI. What we are seeing is the peak of "fancy tricks" for computer-generated artifacts.
Wild speculation. The human brain is still pretty much a black box.
> Would "feelings" improve decision accuracy in artificial systems?
Hard to tell, since we haven't observed any cases of sentient A.I. (able to feel). The only general intelligence we know of (humans) has feelings as one of its most prominent features, so much so that "accuracy" is not the main driver for any given human... far from it. I don't know of any human who couldn't in one way or another be classified as "irrational".
AGI was the result of people using the older term "AI" for things that hadn't turned out to be what we thought AI was going to be.
Like a lot of technology terms, all of this has its origins in science fiction, where AI was supposed to be the equivalent of a human mind, but constructed out of something other than meat. The AI would have agency, it would do things... and do them because it wanted to. It would have goals that it might fail or succeed at. And it would learn... a proper AI might be constructed knowing nothing about a particular subject, but it could then go on to learn (on its own, without any outside help) all about that topic. Perhaps even to the point of conducting its own original research to learn more. A sufficiently intelligent AI would go on to learn things no human had ever learned, to invent and theorize things no human had conceived of.
But then we all realized that intelligence might be severable from those other parts, and we might have an "oracle" that when asked questions could provide sensible answers, but would have no agency. That wouldn't be able to learn in any real way, but since it already knew the sensible answers, that didn't matter.
And at that point, you see AGI start being used. And I assumed it meant "well, that is what we'll call Asimov's robots, or Skynet, or whatever".
Except, here you are again using AGI to mean the dumb oracles that aren't intelligent in any meaningful way.
Like, wtf.
The idea that GPT4 passed the Turing test is preposterous unless the test is a much more restricted version of what I think it is — in which case it would be meaningless.
Decision making in our universe is a 1-dimensional slider between deterministic and random. That's it.
Try writing a program that makes decisions that are neither deterministic nor random (nor any combination of the two). You can't. It's like asking someone to create a new primary color.
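Stated as code, the slider is just one parameter (a hypothetical example, nothing more): p interpolates a decision continuously between the deterministic end and the random end, and there is no third axis to move along.

```python
import random

def decide(options, score, p):
    """Pick an option.

    p = 0.0 -> fully deterministic (always the highest-scoring option)
    p = 1.0 -> fully random (uniform choice)
    Anything in between is just a blend of the two ends of the slider.
    """
    if random.random() < p:
        return random.choice(options)    # the random end
    return max(options, key=score)       # the deterministic end

# Usage: the "decision" only ever moves along that one axis.
options = ["a", "b", "c"]
score = {"a": 0.2, "b": 0.9, "c": 0.5}.get
print(decide(options, score, p=0.0))   # always "b"
print(decide(options, score, p=1.0))   # uniform over a, b, c
```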
If such a test exists we could interrogate if a system of some design might pass it, but if such a test does not exist and we cannot even imagine it then you’re talking about something that is unfalsifiable - which is another way of saying “effectively fake”.
These are far more complex tasks than many give credit for, and there are a lot more that she can do (even that dumb fucking hamster). Just because she can't speak doesn't mean she isn't intelligent, the same way that just because GPT can speak doesn't mean it is. What's key here is the generalization part. Yeah, there are failures, but clearly my cat's intelligence is highly generalized. You don't throw her off with minor perturbations of the environment. If I change the bowl that her food gets poured into, she still comes running, and can differentiate it from a bowl of cereal. She's robust to the orientation of an object, or even of herself. We don't see anything remotely like this robustness in ANY AI system. While they can do impressive things, we still haven't beaten that fucking hamster.
I'm not sure how anyone could be this naive. Mammal brains don't have separate train and inference modes; both are running at all times. If what you said were true, then if I taught you something today you wouldn't be able to perform it until tomorrow. Hell, schools would be an insane concept if this were true. Try to think a bit more before confidently stating an answer.
Plenty has been written about the requirements for decades now. That hasn’t changed.