A problem as old as humanity itself.
Formal languages are the same, and physical machines typically contain nothing more than some kind of physical state that can be changed in ways established by convention, such that those changes align with an interpretation. LLMs, from a computational perspective, are just a particular application of this. They do not introduce a new phenomenon into the world.
So in that sense, of course LLMs cannot build theories strictly speaking, but they can perhaps rearrange symbols, in a manner consistent with their training, in ways that aid human users.
To make it more explicit: can LLMs/AI be practically powerful? Sure. But practicality is not identity. And even if an LLM can produce desired effects, the aim of theory in its strictest sense is understanding on the part of the person practicing it. Even if LLMs could understand and practice theory, unless they were used to aid us in our understanding of the world, who cares? I want to understand reality!