
63 points tejonutella | 2 comments
robwwilliams ◴[] No.43304217[source]
Interesting piece but I am much more optimistic.

Tejo writes:

> At the heart of it, all AI is just a pattern recognition engine. They’re simply algorithms designed to maximize or minimize an objective function. There is no real thinking involved. It’s just math. If you think somewhere in all that probability theory and linear algebra, there is a soul, I think you should reassess how you define sentience.

We have two words we need to define: “sentience” and “thinking”.

Sentience can be defined as a basic function of most living systems (viruses excluded). You may disagree, but to me this word connotes “actively responsive to perturbations”. This is part of the autopoietic definition of life introduced by Maturana and Varela in the 1970s and 1980s. If you buy this definition then “sentience” is a low bar and not what we mean by AI, let alone by AGI.

The word “thinking” implies more than just consciousness. Many of us accept that cats, dogs, pigs, chimps, crows, parrots, and whales are conscious. Perhaps not an ant or a spider, but yes, probably rats since rats play and smile and clearly plan. But few of us would be likely to claim that a rat thinks in a way that we think. Sure, a rat analyzes its environment and comes to decisions about best actions, but that does not require what many of us mean by thinking.

We usually think of thinking as a process of reflection/recursion/self-evaluation of alternative scenarios. The key word here is recursion, and that gets us to “self-consciousness”, a word that embeds recursion.

I operationally define thinking as a process that involves recursive review and reflection/evaluation of alternatives with the final selection of an “action”—either a physical action or an action that is a “thought conclusion”.

What AI/LLM transformers do not have is deep enough recursion or access to memory of preceding states. They cannot remember, and they do not have a meta-supervisor that lets them modulate their own attentional state. So I would agree that they are still incapable of what most of us call thinking.

But how hard is adding memory access to any number of current generation AI systems?

How hard is it to add recursion and chain-of-thought?

How hard will it be to enable an AI system to attend to its own attention?

In my optimistic view these are all close to solved problems.
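
To make that concrete, here is a minimal sketch of what bolting memory and recursive review onto a plain completion call might look like. The generate() function is a hypothetical placeholder for any LLM API, not a specific library; the loop structure is the point, not the details:

    # Hedged sketch: `generate` is a placeholder for any LLM completion call.
    def generate(prompt: str) -> str:
        raise NotImplementedError("plug in your model API here")

    def think(task: str, max_rounds: int = 3) -> str:
        memory = []                      # persistent record of prior reasoning steps
        draft = generate(f"Task: {task}\nThink step by step, then answer.")
        for _ in range(max_rounds):      # recursive review of the model's own output
            memory.append(draft)
            draft = generate(
                "Here is a draft answer and the reasoning that led to it:\n"
                + "\n---\n".join(memory)
                + "\nCriticize the reasoning and propose a better answer."
            )
        return draft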

In my opinion, the only things separating us from true AGI are a few crucial innovations in AI architecture (mainly self-control of attention) and under 100k lines of code.

Please argue! (I’ve had many arguments with Claude on this topic).

replies(2): >>43304278 #>>43304314 #
imiric ◴[] No.43304314[source]
The distinction between human thinking and what the current AI hype cycle calls "thinking" is that all the machine model is doing is outputting the most probable text patterns based on its training data. We can keep adding layers on top of this, but that's all it ultimately is. The machine has no real understanding of what the text it's outputting means or what that text represents in the real world. It can't have an intuitive grasp of these concepts in the way that a human, with our five senses, can build up over a lifetime.
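
To be concrete about what "most probable text patterns" means mechanically, here is a minimal sketch of greedy next-token decoding. The model() function is a hypothetical stand-in for a trained network that scores every vocabulary entry, not any real library's API:

    # Hedged sketch of greedy next-token decoding; `model` is a placeholder
    # that returns one score per token in the vocabulary.
    def model(tokens: list[int]) -> list[float]:
        raise NotImplementedError("plug in a trained language model here")

    def complete(tokens: list[int], steps: int, eos: int) -> list[int]:
        for _ in range(steps):
            scores = model(tokens)
            next_token = max(range(len(scores)), key=scores.__getitem__)
            tokens.append(next_token)    # always pick the most probable token
            if next_token == eos:        # stop at end-of-sequence
                break
        return tokens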

This is why it's disingenuous to anthropomorphize any process of the current iteration of machine learning. There's no thinking or reasoning involved. None.

This isn't to say that this technology can't be very useful. But let's not delude ourselves into thinking that we're anywhere close to achieving AGI.

"Arguing" about this topic with an LLM is pointless.

replies(2): >>43304737 #>>43306011 #
robwwilliams ◴[] No.43306011[source]
I disagree. Can we characterize the difference in thinking between a chimp and a human? In my opinion the genetic and biological differences are trivial. But evidently humans did evolve some secret sauce: language. Language bootstrapped human consciousness to what we now call self-consciousness, to a higher level than in any other vertebrate, over a period of a few million years and without the benefit of teams of world-class thinkers and programmers.

I am optimistic, and with reason. Humans are fancy thinking animals. I admit I think the “hard problem” in consciousness research is bogus.

replies(1): >>43307583 #
1. imiric ◴[] No.43307583[source]
> Can we characterize the difference in thinking between a chimp and a human?

We can, but the difference is not in whether we think or not, but in _how_ we think. We both experience the world through our senses and have conceptual representations of it in our minds. While other primates haven't invented complex written language (yet), they do have the ability to communicate ideas vocally and using symbols.

Machines do none of this. Again, they simply output tokens based on pattern matching. If those tokens are not in their training or prompt data, their output is useless. They have no conceptual grasp of anything they output. This process is a neat mathematical trick, but to describe it as "thinking" or "reasoning" is delusional.

How you don't see or won't acknowledge this fundamental difference is beyond me.

replies(1): >>43308898 #
2. robwwilliams ◴[] No.43308898[source]
We define basic words differently. Yes, some chimps do have theory of mind. But to be equally aggressive in questioning you: how do you not see the qualitative difference between us and chimpanzees?

And how do you not see the amazing progress in LLM output? And how do you not see how close LLMs with “chain-of-thought” have gotten to the “appearance” of thinking? And how was I not clear enough for you in my first post in highlighting explicitly that LLMs are not yet thinking?

Do not use me as your straw chimp unthinking human.