322 points atomroflbomber | 33 comments
lelag ◴[] No.36983601[source]
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will be able to call it a good year...
replies(10): >>36983623 #>>36984116 #>>36984118 #>>36984549 #>>36986942 #>>36987008 #>>36987250 #>>36987546 #>>36987577 #>>36992261 #
azinman2 ◴[] No.36986942[source]
We’re not getting AGI anytime soon…
replies(6): >>36987177 #>>36987360 #>>36987472 #>>36987477 #>>36987541 #>>36987759 #
1. AbrahamParangi ◴[] No.36987177[source]
What exactly is your definition of an AGI? Because we're already passing the Turing test, so I have to wonder if this isn't just moving the goalposts.
replies(5): >>36987466 #>>36987583 #>>36988222 #>>36988633 #>>36989206 #
2. ◴[] No.36987466[source]
3. cubefox ◴[] No.36987583[source]
ChatGPT (instruction-tuned autoregressive language models) indeed already seems quite general (it's good at conversational Turing tests without faking it like ELIZA), even if its absolute intelligence is limited. Level of generality and level of intelligence are not the same thing. Something could be quite narrow but very intelligent (AlphaGo), or quite general but dumb overall (a small kid, an insect).

Okay, ChatGPT is only text-to-text, but Google & Co. are adding more modalities now, including images, audio and robotics. I think one missing step is to fuse the training and inference regimes into one, just as in animals. That probably requires something other than the usual transformer-based token predictors.
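
For concreteness, here is a minimal sketch of what "fusing training and inference" could mean: a toy online learner that nudges its weights on every interaction instead of keeping a frozen-weights inference phase. The linear model, environment signal, and learning rate are made-up stand-ins for illustration, not a claim about how ChatGPT or any production system works.

    import numpy as np

    rng = np.random.default_rng(0)
    w = np.zeros(3)                        # tiny linear "model"
    target = np.array([1.0, -2.0, 0.5])    # hypothetical environment mapping

    def interact(x):
        """Inference and learning happen in the same step."""
        global w
        y_pred = w @ x                     # inference: answer with current weights
        y_true = target @ x                # feedback from the environment
        w += 0.1 * (y_true - y_pred) * x   # training: immediate online update
        return y_pred

    for _ in range(200):                   # one "lifetime" loop, no train/deploy split
        interact(rng.normal(size=3))

    print(w)   # weights drift toward the target mapping while being used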

replies(6): >>36987787 #>>36987803 #>>36988173 #>>36988756 #>>36988809 #>>36989041 #
4. Demotooodo ◴[] No.36987787[source]
Yep. I also see puzzle pieces falling into place with multimodal models.

This is going to become a very effective strategy very quickly.

It already appears to work quite well for image generation.

5. phkahler ◴[] No.36987803[source]
>> I think one missing step is to fuse training and inference regime into one, just as in animals.

This has always been an important missing piece. Without it, ChatGPT is just a natural language interface to the information it was trained on. Still useful, but unable to learn (beyond what fits in its context).

6. reader5000 ◴[] No.36988173[source]
> I think one missing step is to fuse training and inference regime into one, just as in animals

It's not clear that it's a missing step. Sleep is training (replay from the hippocampus); wake is inference.
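
Read charitably, the analogy maps onto something like experience replay: act with fixed weights while "awake", then update only during a separate "sleep" phase. A toy sketch, with a made-up one-parameter model and environment, purely to illustrate the proposed split:

    import random

    memory = []        # stands in for the hippocampus
    weight = 0.0       # one-parameter "model": predict y = weight * x

    def wake(x):
        """Inference only: act with the current weight, store the experience."""
        memory.append((x, 2.0 * x))        # 2.0 is the hypothetical true mapping
        return weight * x

    def sleep(steps=50, lr=0.05):
        """Training only: replay buffered experiences to update the weight."""
        global weight
        for _ in range(steps):
            x, y = random.choice(memory)
            weight += lr * (y - weight * x) * x

    for day in range(5):
        for _ in range(10):
            wake(random.uniform(-1.0, 1.0))
        sleep()

    print(weight)      # approaches 2.0, but only via the sleep phases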

replies(2): >>36988488 #>>36989917 #
7. emmanueloga_ ◴[] No.36988222[source]
Self-consciousness. Human-level reasoning. Feelings, etc.

We are NOT close to AGI.

* A fancy Markov chain (LLM) is not AGI (a toy chain is sketched below for contrast).

* Stable Diffusion-style image generation is NOT AGI.

* Fancy computer vision is NOT AGI.

Honestly, I don't think we are any closer to AGI. What we are seeing is the peak of "fancy tricks" for computer-generated artifacts.
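
For contrast, here is what an actual "fancy Markov chain" looks like: a word-level chain that only counts which word followed which, with no learned representation beyond those counts. The toy corpus is made up; whether an LLM is meaningfully more than this is exactly the point under dispute.

    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the cat ate the fish".split()

    successors = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        successors[prev].append(nxt)        # record every observed next word

    def generate(start="the", length=8):
        word, out = start, [start]
        for _ in range(length - 1):
            word = random.choice(successors.get(word, corpus))  # sample a successor
            out.append(word)
        return " ".join(out)

    print(generate())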

replies(3): >>36988484 #>>36989444 #>>36989557 #
8. mattwest ◴[] No.36988484[source]
Aren't we all just performing probabilistic decision paths in our own minds? Would "feelings" improve decision accuracy in artificial systems?
replies(3): >>36988708 #>>36988777 #>>36989873 #
9. cubefox ◴[] No.36988488{3}[source]
Sleep could be for consolidating long-term memory, but clearly not everything else learned while awake is just "context" (short-term memory). Maybe you learn something in the morning that you need to remember for >12 hours before you go to bed.
10. prmph ◴[] No.36988633[source]
A Mechanical Turk can also pass the Turing test. As a black box, it is indistinguishable from an LLM, and yet that would not be evidence of AGI.

So what's your point?

11. emmanueloga_ ◴[] No.36988708{3}[source]
> Aren't we all just performing probabilistic decision paths in our own minds?

Wild speculation. The human brain is still pretty much a black box.

> Would "feelings" improve decision accuracy in artificial systems?

Hard to tell, since we haven't observed any cases of sentient A.I. (able to feel). The only general intelligences we know of (humans) have feelings as one of their most prominent features, so much so that "accuracy" is not the main driver for any given human... far from it. I don't know of any human who couldn't, in one way or another, be classified as "irrational".

12. tuukkah ◴[] No.36988756[source]
> it's good at conversation Turing tests without faking it like ELIZA

Just like ELIZA can be said to be faking it, ChatGPT is faking it in a different way.

replies(1): >>36997538 #
13. mattbuilds ◴[] No.36988777{3}[source]
Obviously we can't prove this, but my instinct is that we don't do things with probabilistic decision paths. Not very scientific of me, but I just don't buy that that's how we make decisions.
replies(1): >>36989437 #
14. TylerE ◴[] No.36988809[source]
One distinction I would make is that a true AGI should have internet access and be able to query for updated information, instead of being stuck at the moment in time at which it was trained.
replies(3): >>36989054 #>>36989781 #>>36998596 #
15. NoMoreNicksLeft ◴[] No.36989041[source]
I feel like though I speak the same language as everyone else, at least nominally, none of you are using the same definitions as I do for any of these terms.

AGI was the result of people using the older term "AI" for things that hadn't turned out to be what we thought AI was going to be.

Like a lot of technology terms, all of this has its origins in science fiction, when AI was supposed to be the equivalent of a human mind, but constructed out of something other than meat. The AI would have agency, it would do things... and do them because it wanted to. It would have goals, which it might fail or succeed at. And it would learn... a proper AI might be constructed to know nothing about a particular subject, but it could then go on to learn (on its own, without any outside help) all about that topic. Perhaps even to the point of conducting its own original research to learn more. A sufficiently intelligent AI would go on to learn things no human had ever learned, to come up with inventions and theories no human had conceived of.

But then we all realized that intelligence might be severable from those other parts, and that we might have an "oracle" that, when asked questions, could provide sensible answers, but would have no agency. It wouldn't be able to learn in any real way, but since it already knew the sensible answers, that wouldn't matter.

And at that point, you see AGI start being used. And I assumed it meant "well, that is what we'll call Asimov's robots, or Skynet, or whatever".

Except, here you are again using AGI to mean the dumb oracles that aren't intelligent in any meaningful way.

Like, wtf.

replies(1): >>36998635 #
16. shrimpx ◴[] No.36989054{3}[source]
True AGI should have a personality, tastes, opinions, and feelings; and negotiate its role in a social hierarchy.
replies(1): >>36989590 #
17. shrimpx ◴[] No.36989206[source]
How do you pass the Turing test with "As an AI, I don't have opinions and I don't know shit after September 2021"?

The idea that GPT4 passed the Turing test is preposterous unless the test is a much more restricted version of what I think it is — in which case it would be meaningless.

replies(1): >>36990657 #
18. Workaccount2 ◴[] No.36989437{4}[source]
There really isn't room for much else.

Decision making in our universe is a one-dimensional slider between deterministic and random. That's it.

Try to write a program that makes decisions that are neither deterministic nor random (nor some combination of the two). You can't. It's like asking someone to create a new primary color.
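
A minimal sketch of that slider: a decision rule that mixes a fixed deterministic policy with a random draw, controlled by a single parameter. The choices and the "policy" here are made up purely for illustration.

    import random

    CHOICES = ["left", "right", "wait"]

    def decide(state, randomness=0.3):
        """randomness=0.0 is fully deterministic, 1.0 is fully random."""
        deterministic_pick = CHOICES[len(state) % len(CHOICES)]  # fixed rule
        if random.random() < randomness:
            return random.choice(CHOICES)   # the random end of the slider
        return deterministic_pick           # the deterministic end

    print([decide("saw food", randomness=r) for r in (0.0, 0.5, 1.0)])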

19. AbrahamParangi ◴[] No.36989444[source]
Look carefully at these goals and tell me if these are materially falsifiable. Can you imagine a test that determines whether or not a system has self consciousness?

If such a test exists, we could ask whether a system of some design might pass it; but if such a test does not exist and we cannot even imagine one, then you're talking about something that is unfalsifiable - which is another way of saying "effectively fake".

replies(1): >>36993327 #
20. khazhoux ◴[] No.36989557[source]
I would say that self-consciousness and feelings are not requirements for AGI. But reasoning certainly is.
21. oefnak ◴[] No.36989590{4}[source]
Which will obviously be on top.
22. 98codes ◴[] No.36989781{3}[source]
I'd say the opposite -- compare it to us: if you take away our internet we get cranky, but we don't lose all ability to think intelligently.
replies(1): >>36989809 #
23. TylerE ◴[] No.36989809{4}[source]
Imagine if someone took away your speech, hearing, TV, radio, newspapers, and the ability to order new books - you only had access to the knowledge you already had. You're only allowed to communicate via a serial terminal, and can only respond, not initiate.
replies(1): >>36998606 #
24. godelski ◴[] No.36989873{3}[source]
Mammal brains don't ever turn off. They are always learning. If you've ever gone to sleep (if you haven't, let me know), or observed any animal sleeping, you'll notice that this machine is able to create highly realistic simulations of its environment (aka dreams). Both people and dogs wake up from nightmares and for a bit have trouble distinguishing them from reality. My cat does this with a tiny 30g brain. (Even a hamster sleeps and dreams.) She even simulates her environment while awake, and you can see complex decision processing: she can predict the path objects will take in the air, and the location of a moving object even as it goes behind a wall. She is even able to update this belief with other clues, like sound, or by using a mirror. She quickly updates her behavior if I try to trick her. She can do this invariant to color, rotation, novel lighting, novel environments, and many more conditions. When I move to a new place, I can drop her in the litterbox right after I first take her out of her cage; she'll run off to find somewhere to hide, and then trivially navigate back to the litterbox when she needs to use it, without any prodding. She may be stubborn and not want to perform some tricks sometimes, but that doesn't mean she hasn't learned them.

These are far more complex tasks than many give credit for, and there is a lot more that she can do (even that dumb fucking hamster). Just because she can't speak doesn't make her unintelligent, the same way that just because GPT can doesn't make it intelligent. What's key here is the generalization part. Yeah, there are failures, but clearly my cat's intelligence is highly generalized. Minor perturbations of the environment don't throw her off. If I change the bowl that her food gets poured into, she still comes running, and can differentiate it from a bowl of cereal. She's robust to the orientation of an object, or even of herself. We don't see remotely this kind of robustness in ANY AI system. While they can do impressive things, we still haven't beaten that fucking hamster.

25. godelski ◴[] No.36989917{3}[source]
> Wake is inference.

I'm not sure how anyone could be this naive. Mammal brains don't have a separate train mode and inference mode; both are running at all times. If what you said were true, then if I taught you something today you wouldn't be able to perform that action until tomorrow. Hell, schools would be an insane concept if this were true. Try to think a bit more before confidently stating an answer.

26. jimmySixDOF ◴[] No.36990657[source]
GPT-4 can pass the Turing test! The Turing test line has been crossed more times than most Popes; some would say the ELIZA effect fooled people well enough in the '60s to count. No comment on the AGI claim you are responding to - but the fact that they put GPT-4 in an aseptic bubble suit is not relevant.
27. azinman2 ◴[] No.36993327{3}[source]
Consciousness is not important for AGI. Being able to learn new skills, adapt to new sensors, transfer knowledge across domains, learn at all, plan, replan, achieve under-specified goals, and more is what's required for AGI.

Plenty has been written about the requirements for decades now. That hasn’t changed.

28. cubefox ◴[] No.36997538{3}[source]
If ChatGPT is faking it, we are faking it. We are not faking it. Therefore ChatGPT isn't faking it.
replies(1): >>36997794 #
29. tuukkah ◴[] No.36997794{4}[source]
We are not doing the same as ChatGPT. For instance: Because of its training, ChatGPT tries to answer like a human. Humans don't try to answer like a human.
replies(1): >>36998588 #
30. cubefox ◴[] No.36998588{5}[source]
We do try to answer in a way that makes us understood by other humans.
31. cubefox ◴[] No.36998596{3}[source]
Bing does that all the time.
32. cubefox ◴[] No.36998606{5}[source]
Bing Chat already accepts images as input. AutoGPT can initiate.
33. cubefox ◴[] No.36998635{3}[source]
GPT-4 is clearly not "dumb".