
63 points tejonutella | 6 comments

robwwilliams No.43304217
Interesting piece but I am much more optimistic.

Tejo writes:

> At the heart of it, all AI is just a pattern recognition engine. They’re simply algorithms designed to maximize or minimize an objective function. There is no real thinking involved. It’s just math. If you think somewhere in all that probability theory and linear algebra, there is a soul, I think you should reassess how you define sentience.

We have two words we need to define: “sentience” and “thinking”.

Sentience can be defined as a basic function of most living systems (viruses excluded). You may disagree, but to me this word connotes “actively responsive to perturbations”. This is part of the autopoietic definition of life introduced by Maturana and Varela in the 1970s and 1980s. If you buy this definition then “sentience” is a low bar and not what we mean by AI, let alone by AGI.

The word “thinking” implies more than just consciousness. Many of us accept that cats, dogs, pigs, chimps, crows, parrots, and whales are conscious. Perhaps not an ant or a spider, but yes, probably rats since rats play and smile and clearly plan. But few of us would be likely to claim that a rat thinks in a way that we think. Sure, a rat analyzes its environment and comes to decisions about best actions, but that does not require what many of us mean by thinking.

We usually think of thinking as a process of reflection/recursion/self-evaluation of alternative scenarios. The key word here is recursion, and that gets us to “self-consciousness”, a word that embeds recursion.

I operationally define thinking as a process that involves recursive review and reflection/evaluation of alternatives with the final selection of an “action”—either a physical action or an action that is a “thought conclusion”.

What AI/LLM transformers do not have is deep enough recursion or access to memory of preceding states. They cannot remember, and they do not have a meta-supervisor that lets them modulate their own attentional state. So I would agree that they are still incapable of what most of us call thinking.

But how hard is adding memory access to any number of current generation AI systems?

How hard is it to add recursion and chain-of-thought?

How hard will it be to enable an AI system to attend to its own attention?

In my optimistic view these are all close to solved problems.

In my opinion, the only things separating us from true AGI are a few crucial innovations in AI architecture (mainly self-control of attention) and under 100k lines of code.
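
To make that concrete, here is a minimal sketch of the loop I have in mind: a stateless model call wrapped in bounded recursion, with a running memory of preceding states and a critique pass standing in for the meta-supervisor. The llm() function is a hypothetical placeholder, not any real API; everything else is plain Python.

    # Toy deliberation loop: recursion + memory around a stateless model.
    def llm(prompt: str) -> str:
        # Hypothetical stand-in for any text-in/text-out model call;
        # returns a canned reply so the sketch runs as-is.
        return f"draft response to: {prompt[:40]}..."

    def think(question: str, max_depth: int = 3) -> str:
        memory = []                    # record of preceding states
        thought = llm(question)       # initial, unreflective pass
        for _ in range(max_depth):    # bounded recursion/reflection
            memory.append(thought)
            critique = llm("Review this reasoning for flaws:\n" + "\n".join(memory))
            # Crude meta-supervision: the critique steers where the
            # next pass spends its attention.
            thought = llm(f"Question: {question}\nPrevious attempt: {thought}\n"
                          f"Critique: {critique}\nProduce an improved answer.")
        return thought                # the selected "action"

The scaffolding is trivial; everything hard lives inside llm() and in deciding when the loop should stop. That is exactly why I think the remaining distance is architectural, not astronomical.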

Please argue! (I’ve had many arguments with Claude on this topic).

lostmsu No.43304278
I totally buy the consciousness bit. I mean, why did people even need to dive deep into "philosophical" discussions when there's nothing wrong with "a conscious horse" vs. "an unconscious horse" in the first place, which is a dead-simple distinction? Sometimes an unconscious horse is just a horse without consciousness.
1. namaria No.43375871
You're conflating conscious as awake with conscious as aware.
2. lostmsu No.43377408
I am not. My point was: why do you think these two things are any different?
3. namaria No.43382917
Because “awake” is clear-cut and well understood, whereas “aware” is not.

We don't know what consciousness is and pretending that's a simple question because it's easy to tell when an animal is awake or not is disingenuous at best.

4. lostmsu No.43385118
Funnily enough, you did not answer the question. Is there a difference between awake and aware? What is it? How do I test if something is awake but not "aware", or "aware" but not awake?
5. namaria No.43387137
I am sorry, but I have. You said consciousness is not such a hard problem because we can tell when an animal is awake or not. I tried to explain why I think that makes no sense. But you're just playing semantics games now, and that's a bore.
6. lostmsu No.43390596
What "semantics" game are you referring to? My original statement was a "semantics game", specifically an attack on the need for "aware" separate from "awake".

Your reply to that was that they are different because one is obvious and the other isn't. I don't see how that justifies the other one's existence as a separate term, since it doesn't really say anything, so for clarity I asked a question that could cleanly separate the two concepts, if that separation exists at all.

It is also supposed to be an easy question to answer: if you know the two “consciousness” definitions are different, and how, which you imply is so obvious that my claiming otherwise is “disingenuous”, it should not be too hard to provide an example where they diverge for the same entity. Question yourself: can you actually think of any clear-cut instances of this that can't be explained by mundane things like a person having false memories?