
322 points | atomroflbomber | 1 comment
lelag No.36983601
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will be able to call it a good year...
replies(10): >>36983623 #>>36984116 #>>36984118 #>>36984549 #>>36986942 #>>36987008 #>>36987250 #>>36987546 #>>36987577 #>>36992261 #
azinman2 No.36986942
We’re not getting AGI anytime soon…
replies(6): >>36987177 #>>36987360 #>>36987472 #>>36987477 #>>36987541 #>>36987759 #
AbrahamParangi No.36987177
What exactly is your definition of AGI? Because we're already passing the Turing test, so I have to wonder if this isn't just moving the goalposts.
replies(5): >>36987466 #>>36987583 #>>36988222 #>>36988633 #>>36989206 #
emmanueloga_ No.36988222
Self-consciousness. Human-level reasoning. Feelings, etc.

We are NOT close to AGI.

* A fancy Markov chain (an LLM) is NOT AGI (see the sketch after this list).

* Stable Diffusion-style image generation is NOT AGI.

* Fancy computer vision is NOT AGI.
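
To be concrete about the "Markov chain" jab: a Markov text model just samples the next token from a table of what followed the same context in training, with no state beyond that window. A minimal sketch in Python (my own toy illustration with an order-1 context and a made-up corpus, not anyone's actual implementation):

    import random
    from collections import defaultdict

    def train(tokens, order=1):
        # Record which token follows each length-`order` context.
        table = defaultdict(list)
        for i in range(len(tokens) - order):
            context = tuple(tokens[i:i + order])
            table[context].append(tokens[i + order])
        return table

    def generate(table, seed, length=20):
        # Sample a chain: the next token depends only on the current context.
        out = list(seed)
        order = len(seed)
        for _ in range(length):
            followers = table.get(tuple(out[-order:]))
            if not followers:
                break  # dead end: this context never appeared in training
            out.append(random.choice(followers))
        return " ".join(out)

    corpus = "the cat sat on the mat and the cat slept on the mat".split()
    print(generate(train(corpus), seed=["the"]))

Whether an LLM is "just" this scaled up is exactly what's contested; the table above has no learned representation at all, only lookup.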

Honestly, I don't think we are any closer to AGI. What we are seeing is the peak of "fancy tricks" for computer-generated artifacts.

replies(3): >>36988484 #>>36989444 #>>36989557 #
mattwest No.36988484
Aren't we all just performing probabilistic decision paths in our own minds? Would "feelings" improve decision accuracy in artificial systems?
replies(3): >>36988708 #>>36988777 #>>36989873 #
godelski No.36989873
Mammal brains don't ever turn off. They are always learning. If you've ever gone to sleep (if you haven't, let me know), or observed any animal sleeping, you'll have noticed that this machine can create highly realistic simulations of its environment (aka dreams). Both people and dogs wake up from nightmares and have trouble distinguishing dream from reality for a bit. My cat does this with a tiny 30g brain (even a hamster sleeps and dreams). She even simulates her environment while awake, and you can watch the complex decision processing: she can predict the path objects will take through the air, and the location of a moving object even as it passes behind a wall. She can update that belief with other cues, like sound or a mirror, and she quickly updates her behavior if I try to trick her. She can do all of this invariant to color, rotation, novel lighting, novel environments, and much more. When I move to a new place, I can drop her in the litterbox right after taking her out of her cage; she'll run off to find somewhere to hide, then trivially navigate back to the box when she needs it, without any prodding. She may be stubborn and refuse to perform some tricks sometimes, but that doesn't mean she hasn't learned them.
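
To pin down what "predict the location of a moving object even as it passes behind a wall" takes computationally: it's the predict/update loop of a state estimator. A toy 1-D constant-velocity tracker in Python (essentially an alpha-beta filter; my own illustration, not a claim about how brains actually do it):

    # None marks steps where the object is hidden behind the wall.
    def track(observations, dt=1.0, gain=0.5):
        pos, vel = observations[0], 0.0      # crude initialization
        estimates = [pos]
        for z in observations[1:]:
            pos += vel * dt                  # predict: assume motion continues
            if z is not None:                # visible: correct toward the measurement
                error = z - pos
                pos += gain * error
                vel += gain * error / dt
            estimates.append(pos)            # occluded steps keep the prediction
        return estimates

    # Object drifting right at ~1 unit/step, hidden for three steps.
    print(track([0.0, 1.1, 1.9, None, None, None, 6.2]))

The estimator keeps extrapolating through the occlusion and snaps back when the object reappears, which is the behavior being described.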

These are far more complex tasks than many give credit for, and there are a lot more that she can do (even that dumb fucking hamster can). Just because she can't speak doesn't mean she isn't intelligent, the same way that GPT speaking doesn't mean it is. What's key here is the generalization part. Yeah, there are failures, but my cat's intelligence is clearly highly generalized. Minor perturbations of the environment don't throw her off: if I change the bowl her food gets poured into, she still comes running, and she can tell it apart from a bowl of cereal. She's robust to the orientation of an object, or even of herself. We don't see anything remotely like this robustness in ANY AI system. They can do impressive things, but we still haven't beaten that fucking hamster.