
I Am An AI Hater

(anthonymoser.github.io)
443 points BallsInIt | 2 comments
Shorel No.45049664
Humans understand language at a level no AI does.

We use it to serialize ideas, and we have the ideas independent of language.

AI works on the serialization itself, which is very powerful because the relationships between ideas are reflected in the statistics of the serialization, but it lacks all the understanding, and it can't create new ideas with reasonable resources.

replies(1): >>45051226 #
Mallowram No.45051226
This is an idealized fantasy of language. Language is primarily about patriarchal dominance, control, status, mate-selection, and topophilia, and only secondarily about communicating ideas. The dark matter of language is expressed in mythological ideas like states, property, and law. People are starting to notice that we can't solve very simple problems like climate extinction, because the primary forms in language are status oriented.
replies(1): >>45051848 #
1. Shorel No.45051848
> Language is primarily about patriarchal dominance, control, status, mate-selection, topophilia, and secondarily about communicating ideas.

I don't agree that language is primarily about those things, but I want to point out that this is a very human interpretation of language, one that no LLM can perform.

replies(1): >>45052100 #
2. Mallowram No.45052100
Humans don't think in language, i.e., there is no direct contact between what we think we think and how we arbitrarily externalize what we think we think; empirically, language is primarily about these biases and only secondarily about communication. This is, again, the Achilles heel of CS, NLP, and generative linguistics. It has been impossible for anyone to disagree with this function of language since 2016. The role for LLMs in operating on language is zilch: as you admit, LLMs can't uncover this, nor can training or alignment remove it from language's function.

“We refute (based on empirical evidence) claims that humans use linguistic representations to think.” — Ev Fedorenko, Language Lab, MIT, 2024