
322 points atomroflbomber | 9 comments
lelag ◴[] No.36983601[source]
If 2023 ends up giving us AGI, room-temperature superconductors, Starships and a cure for cancer, I think we will be able to call it a good year...
replies(10): >>36983623 #>>36984116 #>>36984118 #>>36984549 #>>36986942 #>>36987008 #>>36987250 #>>36987546 #>>36987577 #>>36992261 #
azinman2 ◴[] No.36986942[source]
We’re not getting AGI anytime soon…
replies(6): >>36987177 #>>36987360 #>>36987472 #>>36987477 #>>36987541 #>>36987759 #
AbrahamParangi ◴[] No.36987177[source]
What exactly is your definition of an AGI? We're already passing the Turing test, so I have to wonder if this isn't just moving the goalposts.
replies(5): >>36987466 #>>36987583 #>>36988222 #>>36988633 #>>36989206 #
1. emmanueloga_ ◴[] No.36988222[source]
Self-consciousness. Human-level reasoning. Feelings, etc.

We are NOT close to AGI.

* Fancy Markov chain (LLM) is NOT AGI (see the sketch below).

* Stable Diffusion-style image generation is NOT AGI.

* Fancy computer vision is NOT AGI.

Honestly, I don't think we are any closer to AGI. What we are seeing is the peak of "fancy tricks" for computer-generated artifacts.
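To unpack the "Markov chain" analogy, here's a minimal sketch of a word-level Markov chain text generator (the corpus and function names are just illustrative). An LLM is, very loosely, this same idea with a vastly richer learned conditional distribution standing in for the follower table:

    import random
    from collections import defaultdict

    def train_bigram(text):
        # For each word, remember every word that follows it in the corpus.
        words = text.split()
        followers = defaultdict(list)
        for cur, nxt in zip(words, words[1:]):
            followers[cur].append(nxt)
        return followers

    def generate(followers, start, length=10):
        # Each next word depends only on the current word -- the Markov property.
        word, out = start, [start]
        for _ in range(length):
            if word not in followers:
                break
            word = random.choice(followers[word])
            out.append(word)
        return " ".join(out)

    model = train_bigram("the cat sat on the mat and the cat slept on the mat")
    print(generate(model, "the"))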

replies(3): >>36988484 #>>36989444 #>>36989557 #
2. mattwest ◴[] No.36988484[source]
Aren't we all just performing probabilistic decision paths in our own minds? Would "feelings" improve decision accuracy in artificial systems?
replies(3): >>36988708 #>>36988777 #>>36989873 #
3. emmanueloga_ ◴[] No.36988708[source]
> Aren't we all just performing probabilistic decision paths in our own minds?

Wild speculation. The human brain is still pretty much a black box.

> Would "feelings" improve decision accuracy in artificial systems?

Hard to tell, since we haven't observed any cases of sentient A.I. (one able to feel). The only general intelligence we know of (humans) has feelings as one of its most prominent features, so much so that "accuracy" is not the main driver for any given human... far from it. I don't know of any human who couldn't, in one way or another, be classified as "irrational".

4. mattbuilds ◴[] No.36988777[source]
Obviously we can't prove this, but my instinct is that we don't make decisions via probabilistic decision paths. Not very scientific of me, but I just don't buy that that's how we make decisions.
replies(1): >>36989437 #
5. Workaccount2 ◴[] No.36989437{3}[source]
There really isn't room for much else.

Decision-making in our universe is a one-dimensional slider between deterministic and random. That's it.

Try to write a program that makes decisions that are neither deterministic nor random (nor any mix of the two). You can't. It's like asking to create a new primary color.
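To make the slider concrete, here's a minimal sketch (the names and the epsilon knob are illustrative, not anyone's actual proposal): a decision procedure that interpolates between fully deterministic and fully random. Try to add a third mode that is neither:

    import random

    def decide(options, score, epsilon):
        # epsilon = 0.0 -> fully deterministic (always the best-scoring option).
        # epsilon = 1.0 -> fully random.
        # Anything in between is just a blend; there is no third ingredient.
        if random.random() < epsilon:
            return random.choice(options)  # the random end of the slider
        return max(options, key=score)     # the deterministic end

    print(decide(["run", "hide", "eat"], score=len, epsilon=0.3))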

6. AbrahamParangi ◴[] No.36989444[source]
Look carefully at these goals and tell me whether they are materially falsifiable. Can you imagine a test that determines whether or not a system has self-consciousness?

If such a test exists, we could ask whether a system of some design might pass it. But if such a test does not exist and we cannot even imagine one, then you're talking about something unfalsifiable - which is another way of saying "effectively fake".

replies(1): >>36993327 #
7. khazhoux ◴[] No.36989557[source]
I would say that self-consciousness and feelings are not requirements for AGI. But reasoning certainly is.
8. godelski ◴[] No.36989873[source]
Mammal brains don't ever turn off; they are always learning. If you've ever gone to sleep (if you haven't, let me know), or observed any animal sleeping, you'll notice that this machine is able to create highly realistic simulations of its environment (aka: dream). Both people and dogs wake up from nightmares and for a bit have trouble distinguishing reality. My cat does this with a tiny 30g brain. (Even a hamster sleeps and dreams.)

She even simulates her environment while awake, and you can see complex decision processing: she can predict the path objects will take in the air, and the location of a moving object even as it passes behind a wall. She is even able to update this belief with other clues like sound, or by using a mirror, and she quickly updates her behavior if I try to trick her. She can do all of this invariant to colors, rotation, novel lighting dynamics, novel environments, and many more conditions. When I move to a new place, I can take her out of her cage and drop her in the litterbox; she'll run to find somewhere to hide, and then trivially navigate back to the box when she needs to use it, without any prodding. She may be stubborn and refuse to perform some tricks sometimes, but that doesn't mean she hasn't learned them.

These are far more complex tasks than many give credit for, and there are a lot more she can do (even that dumb fucking hamster). Just because she can't speak doesn't make her unintelligent, the same way that just because GPT can doesn't make it intelligent. What's key here is the generalization part. Yeah, there are failures, but clearly my cat's intelligence is highly generalized. You don't throw her off with minor perturbations of the environment. If I change the bowl her food gets poured into, she still comes running, and can differentiate it from a bowl of cereal. She's robust to the orientation of an object, or even of herself. We don't see anything remotely like this robustness in ANY AI system. While they can do impressive things, we still haven't beaten that fucking hamster.

9. azinman2 ◴[] No.36993327[source]
Consciousness is not important for AGI. Being able to learn new skills, adapt to new sensors, transfer knowledge across domains, learn at all, plan, replan, achieve underspecified goals, and more is what's required for AGI.

Plenty has been written about the requirements for decades now. That hasn’t changed.