
322 points | atomroflbomber | 4 comments
lelag:
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will able to call it a good year...
azinman2:
We’re not getting AGI anytime soon…
AbrahamParangi:
What exactly is your definition of an AGI? Because we're already passing the Turing test, so I have to wonder if this isn't just moving the goalposts.
cubefox:
ChatGPT (an instruction-tuned autoregressive language model) indeed already seems quite general (it does well at conversational Turing tests without faking it the way ELIZA did), even if its absolute intelligence is limited. Generality and intelligence are not the same thing: something can be quite narrow but very intelligent (AlphaGo), or quite general but dumb overall (a small kid, an insect).

Okay, ChatGPT is only text-to-text, but Google & Co are adding more modalities now, including images, audio and robotics. I think one missing step is to fuse the training and inference regimes into one, just as in animals. That probably requires something other than the usual transformer-based token predictors.

tuukkah:
> it's good at conversation Turing tests without faking it like ELIZA

Just like ELIZA can be said to be faking it, ChatGPT is faking it in a different way.

cubefox:
If ChatGPT is faking it, we are faking it. We are not faking it. Therefore ChatGPT isn't faking it.
tuukkah:
We are not doing the same thing as ChatGPT. For instance, because of its training, ChatGPT tries to answer like a human. Humans don't try to answer like a human.
cubefox:
We do try to answer in a way that makes us understood by other humans.