
46 points petethomas | 2 comments
k310 No.44397300
So, garbage in; garbage out?

> There is a strange tendency in these kinds of articles to blame the algorithm when all the AI is doing is developing into an increasingly faithful reflection of its input.

When hasn't garbage been a problem? And garbage apparently counts as "free speech" (though the First Amendment constrains only the government: "Congress shall make no law ...").

replies(3): >>44397417 #>>44397480 #>>44397642 #
1. QuadmasterXLII No.44397417
The details are important here: it wouldn't be surprising if fine-tuning on transcripts of humans expressing racial hatred produced output resembling racial hatred. It is quite odd that fine-tuning on C code with security vulnerabilities produces output resembling racial hatred.
replies(1): >>44397901 #
2. bilbo0s No.44397901
I don't think it's that surprising.

The base model was trained, at least in small part, on transcripts of racial hatred. The fine-tuning merely surfaced content that was already latent in the model.

i.e., garbage in, garbage out.