What?
Cassirer: “Only when we put away words will we be able to reach the initial conditions; only then will we have direct perception. All linguistic denotation is essentially ambiguous–and in this ambiguity, this ‘paronymia’ of words is the source of all myths…this self-deception is rooted in language, which is forever making a game of the human mind, ever ensnaring it in that iridescent play of meanings…even theoretical knowledge becomes phantasmagoria; for even knowledge can never reproduce the true nature of things as they are but must frame their essence in ‘concepts.’ Consequently all schemata which science evolves in order to classify, organize and summarize the phenomena of the real turn out to be nothing but arbitrary schemes. So knowledge, as well as myth, language, and art, has been reduced to a kind of fiction–a fiction that recommends itself by its usefulness, but must not be measured by any strict standard of truth, if it is not to melt away into nothingness.” (Cassirer, Language and Myth)
I also had a similar epiphany 3 days ago: once it hits you and you understand it, you can see clearly why LLMs are destined to crash and burn in their present form (good luck to those who will have to answer the questions about the money dumped into them).
What comes out of the investment will not justify what has been put in (for anyone who thinks otherwise, PLEASE GO AHEAD AND DO A DCF VALUATION!), and it will have a depressing effect on future AI investment.
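For anyone who actually wants to run that exercise: a DCF is just projected cash flows discounted to present value plus a terminal value. Here is a minimal Python sketch, where every figure (the cash flows, the 12% discount rate, the 3% terminal growth) is a made-up placeholder and not an estimate for any real company:

    # Minimal discounted-cash-flow (DCF) sketch. All numbers are
    # hypothetical placeholders, not estimates for any actual AI firm.
    def dcf_value(cash_flows, discount_rate, terminal_growth):
        """Present value of projected cash flows plus a Gordon-growth terminal value."""
        pv = sum(cf / (1 + discount_rate) ** t
                 for t, cf in enumerate(cash_flows, start=1))
        # Terminal value: final cash flow grown one year, capitalized,
        # then discounted back to today.
        terminal = cash_flows[-1] * (1 + terminal_growth) / (discount_rate - terminal_growth)
        return pv + terminal / (1 + discount_rate) ** len(cash_flows)

    projected = [-20, -10, 5, 15, 30]  # placeholder free cash flows, $B
    print(f"Implied present value: ${dcf_value(projected, 0.12, 0.03):.1f}B")

If the implied present value comes out below the capital already sunk, the investment doesn't pencil out, which is exactly the point above.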
I still don't know what this is supposed to mean, and I am not unfamiliar with Aristotle.
This is especially terrible for people with OCD, which seems to be common in this industry. I think it would be a valuable boost to mental health for them to at least explore some of the basic concepts in Vedanta and/or zen.
What amuses me is how much my thoughts, while I'm meditating, resemble the output of a completion LLM.
We use it to serialize ideas, and we have the ideas independently of language.
AI works on the serialization itself, which is very powerful because the relationships between ideas are reflected in the statistics of the serialization, but it lacks all the understanding and can't create new ideas with reasonable resources.
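To make the "relationships reflected in statistics" point concrete, here is a toy sketch (the corpus and the whitespace tokenization are invented for the example): relations between ideas surface as co-occurrence counts in the serialized text, which is all a purely statistical learner ever sees.

    # Toy illustration: relations between ideas show up as co-occurrence
    # statistics in serialized text. The corpus is made up.
    from collections import Counter
    from itertools import combinations

    corpus = [
        "fire is hot",
        "ice is cold",
        "the sun is hot",
        "snow is cold",
    ]

    pair_counts = Counter()
    for sentence in corpus:
        for a, b in combinations(sorted(set(sentence.split())), 2):
            pair_counts[(a, b)] += 1

    # "hot" pairs with "fire"/"sun" and "cold" with "ice"/"snow": the
    # statistics mirror the relations without any model of heat or cold.
    for pair, n in pair_counts.most_common(8):
        print(pair, n)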
Aristotle was also unaware of the incompleteness problem later discovered by Gödel: no consistent reasoning system of that type can be complete.
There are fundamental contradictions in the nature of language; they don't, however, make it any less useful for the entire experience of daily communication, all of literature, and so on.
It's just that there are statements that are true, yet no set of rules can prove them.
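For precision, that is roughly Gödel's first incompleteness theorem. A standard informal statement (my paraphrase, not Gödel's wording), in LaTeX:

    % First incompleteness theorem, informally stated (my paraphrase).
    % F is any consistent, effectively axiomatized system containing
    % arithmetic; \nvdash requires amssymb.
    \[
      F \text{ consistent} \;\Longrightarrow\; \exists\, G_F :\quad
      \mathbb{N} \models G_F, \qquad F \nvdash G_F, \qquad F \nvdash \neg G_F .
    \]

That is: there is a sentence G_F that is true in the standard model of arithmetic yet neither provable nor refutable in F.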
I would point you to Gödel, Escher, Bach for a very nuanced discussion of this topic.
I also disagree with your point and your arguments. Many of the sentences in your response are blatantly false. You could win the Olympics of jumping to conclusions.
Let's start with CS. CS is the set of first principles that are then applied to software, because CS is another branch of mathematics, built on Boolean logic and discrete mathematics.
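As a concrete instance of software bottoming out in Boolean logic, here is a toy sketch (the function names are mine, purely illustrative): binary addition built from nothing but AND, XOR, and OR.

    # Toy illustration: arithmetic reduced to pure Boolean operations.
    # Function names are illustrative, not from any particular library.
    def half_adder(a: bool, b: bool) -> tuple[bool, bool]:
        """Add two bits: the sum bit is XOR, the carry bit is AND."""
        return a ^ b, a & b

    def full_adder(a: bool, b: bool, carry_in: bool) -> tuple[bool, bool]:
        """Chain two half adders and OR the carries to propagate carry_in."""
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, carry_in)
        return s2, c1 | c2

    print(half_adder(True, True))  # 1 + 1 = binary 10: (sum=False, carry=True)

Every arithmetic-logic unit in a CPU is, at heart, a cascade of gates like these.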
Language's relevance is on display here: we are using it right now. It is not a complete system, because some ideas can't be expressed in language and some sentences in a logical system can neither be proved nor disproved, but the overwhelming majority of sentences are useful.
And everything I have written is based on first principles; you can read about Gödel's incompleteness theorems for a start. They apply to LLMs because they apply to all uses of language; nothing is specific to neural networks.
In fact, go and read Gödel, because his theorems prove that no sufficiently powerful logical system is complete, and your worldview seems to depend on the outdated assumption that there should be such a complete system. This includes all reasoning systems and all of mathematics.
I don't agree that language is primarily about those things, but I want to point out that this is a very human interpretation of language, one that no LLM can perform.
“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” (Ev Fedorenko, Language Lab, MIT, 2024)
(FWIW, a feature of the Aristotelian logical tradition is that, unlike the modern, Fregean tradition, which is indifferent to the relationship between logic and language, it is very much concerned with the logical structures within grammar. From a practical point of view, this makes total sense: we want to be able to evaluate arguments, to clarify arguments, and so on, and these are generally given in natural language. Aristotle was also a moderate realist: language is a reflection of reality.)
Language is not a "reflection of reality" in any way, shape, or form: reality is always specific; language is always arbitrary.
We're currently in a neurodynamic/neurobiological overthrow of psychodynamic principles, one that obviates everything from the Presocratics onward.
The fact is that language has nothing really to do with reality; it has only to do with subjective biases that arbitrarily perform gibberish in the service of status-gain, control, etc. (pick any primate bias that Aristotle onward was unconscious of).