
168 points by 1wheel | 2 comments
byteknight
This reminds me of how people often communicate to avoid offending others. We tend to soften our opinions or suggestions with phrases like "What if you looked at it this way?" or "You know what I'd do in those situations." By doing this, we subtly dilute the exact emotion or truth we're trying to convey. If we modify our words enough, we might end up with a statement that's completely untruthful. This is similar to how AI models might behave when manipulated to emphasize certain features, leading to responses that are not entirely genuine.
HarHarVeryFunny
A true AGI would learn to manipulate its environment to achieve its goals, but obviously we are not there yet.

An LLM has no goals - it's just a machine optimized to minimize training error, although I suppose you could view this as an innate, hard-coded goal of minimizing next-word error (relative to the training set), in the same way we might say a machine-like insect has some "goals".
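
A minimal sketch of that hard-coded "goal", assuming a toy PyTorch stand-in (the embedding-plus-linear model and all sizes here are invented for illustration): the only thing the training step ever scores is how well the next token was predicted.

    import torch
    import torch.nn.functional as F

    # Toy stand-in for a language model: embedding + linear head (sizes invented).
    vocab_size, d_model = 100, 32
    embed = torch.nn.Embedding(vocab_size, d_model)
    head = torch.nn.Linear(d_model, vocab_size)
    opt = torch.optim.SGD(list(embed.parameters()) + list(head.parameters()), lr=0.1)

    tokens = torch.randint(0, vocab_size, (1, 16))    # pretend training text
    inputs, targets = tokens[:, :-1], tokens[:, 1:]   # shift by one position

    logits = head(embed(inputs))                      # (1, 15, vocab_size)
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), targets.reshape(-1))
    loss.backward()
    opt.step()   # the update only ever rewards matching the next token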

Of course RLHF provides an error to minimize over a longer time span (the entire response rather than the next word), but I doubt the training volume is enough for the model to internally form a goal of manipulating the listener, as opposed to just favoring certain surface forms of response.
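
To make that contrast concrete, here is a hedged REINFORCE-style sketch of the RLHF idea (real pipelines use PPO with a learned reward model and a KL penalty; model here is just assumed to map token ids to (batch, seq, vocab) logits): the error signal now arrives once per complete response rather than once per token.

    import torch
    import torch.nn.functional as F

    def rlhf_style_update(model, opt, prompt_ids, response_ids, reward):
        # Push one scalar, response-level reward back through the log-probs
        # of every token the model emitted in that response.
        full = torch.cat([prompt_ids, response_ids], dim=1)
        logits = model(full[:, :-1])                  # predict each next token
        logp = F.log_softmax(logits, dim=-1)
        resp_logp = logp[:, prompt_ids.size(1) - 1:, :].gather(
            -1, response_ids.unsqueeze(-1)).squeeze(-1)
        loss = -(reward * resp_logp.sum())            # one signal, whole response
        opt.zero_grad()
        loss.backward()
        opt.step()

Even then, the reward only reinforces whole surface forms of a reply; whether that ever amounts to an internal goal of manipulating the listener is exactly the doubt above.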

Nevermark
An LLM has no explicit goals.

But simply by approximating human communication, which often models goal-oriented behavior, an LLM can have implicit goals, and those goals likely vary widely with conversation context.

Implicit goals can be very effective. Nowhere in DNA is there any explicit goal to survive, yet combinations of genes and markers selected for survivability create creatures whose implicit goal to survive is as tenacious as any explicit goal could be.

HarHarVeryFunny
Yes, the short-term behavior/output of the LLM could reflect an implicit goal, but I doubt it would maintain any such goal for an extended period of time (long-term coherence of behavior is a known shortcoming): sampling is random and there is no internal memory carried from word to word, so any implicit goal will likely drift rapidly.
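
For illustration, a sketch of the generation loop being described (a hypothetical model mapping ids to logits; the temperature and step count are made up): the only state carried forward is the text so far, and every step takes a fresh random draw, which is why an implicit goal can wander.

    import torch

    def generate(model, ids, steps=50, temperature=0.8):
        for _ in range(steps):
            logits = model(ids)[:, -1, :] / temperature        # condition only on the text so far
            probs = torch.softmax(logits, dim=-1)
            next_id = torch.multinomial(probs, num_samples=1)  # fresh random draw each step
            ids = torch.cat([ids, next_id], dim=1)             # no other memory is carried over
        return ids
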
Nevermark
Agreed. They can’t accurately model our communication for very long, so any implicit motives are limited to that horizon.

But their capabilities are improving rapidly in both kind and measure.