108 points bertman | 2 comments
fedeb95 ◴[] No.43822813[source]
There's an additional difficulty: who told the man to build a road? This is the main thing that LLMs, or any other current technology, seem to lack: the "why", a reason to do things one way and not another.

A problem as old as humanity itself.

replies(1): >>43823049 #
1. lo_zamoyski ◴[] No.43823049[source]
Yes, but it's more than that. As I've written before, LLMs (and all AI) lack intentionality. They do not possess concepts. They possess, at best, conventional physical elements of signs whose meaning, and in fact whose identity as signs, is entirely subjective and observer-relative, belonging only to the human user who interprets those signs. It's a bit like a book: the streaks of pigmentation on cellulose have no intrinsic meaning apart from being streaks of pigmentation on cellulose. They possess none of the conceptual content we associate with books. All of the meaning comes from the reader, who must first treat these marks on paper as signs, and then interpret those signs accordingly. That's what "reading" means: the interpretation of symbols, which is to say, the assignment of meanings to symbols.

Formal languages are the same, and all that physical machines contain is some kind of physical state that can be changed, by convention, in ways that align with an interpretation. LLMs, from a computational perspective, are just a particular application. They do not introduce a new phenomenon into the world.

So in that sense, of course, LLMs cannot build theories strictly speaking, but they can perhaps rearrange symbols, in a manner consistent with their training, in ways that aid human users.

To make it more explicit: can LLMs/AI be powerful practically? Sure. But practicality is not identity. And even if an LLM can produce desired effects, the aim of theory in its strictest sense is understanding on the part of the person practicing it. Even if LLMs could understand and practice theory, unless they were used to aid us in our understanding of the world, who cares? I want to understand reality!

replies(1): >>43825419 #
2. fedeb95 ◴[] No.43825419[source]
I get your point and I agree to a certain extent. However, it's debatable whether everyone shares that same aim of understanding reality. Some just want to go down a road, no matter how it got built; others think (or rather, don't think) about it, no matter where it leads. In that world, an artificial entity that can 1) create an aim, 2) build enough understanding to execute, and 3) execute could be valuable. Right now we're at 3), in the specific context of byte arrays. An artificial system that could also understand, i.e. possess some kind of structure of concepts, and from there also produce the "need" to create something, would be a huge leap forward. Forward toward what? I don't know.