
1503 points participant3 | 2 comments
coderenegade ◴[] No.43575460[source]
I don't see why this is an issue? The prompts imply obvious and well-known characters, and don't make it clear that they want an original answer. Most humans would probably give you similar answers if you didn't add an additional qualifier like "not Indiana Jones". The only difference is that a human can't exactly reproduce the likeness of a famous character without significant time and effort.

The real issue here is that human language carries a whole host of implied context. On the one hand, we expect the machine not to spit out copyrighted or trademarked material; on the other hand, a great deal of cultural and implied context gets baked into these models during training.

replies(4): >>43575500 #>>43575529 #>>43575609 #>>43575717 #
1. runarberg ◴[] No.43575717[source]
Overfitting is generally a sign of a useless model

https://en.wikipedia.org/wiki/Overfitting
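
For a concrete picture of what the linked article means, here is a minimal sketch (not from the thread; it assumes only numpy and uses made-up toy data): a model with enough capacity can drive training error to near zero while typically doing worse on held-out points than a simpler model.

    # Minimal overfitting sketch: a high-degree polynomial nearly
    # memorizes the training points (tiny training error) but typically
    # generalizes worse to held-out data than a simple linear fit,
    # because the true relationship here is linear plus noise.
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy data: y = 2x with a little Gaussian noise.
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(scale=0.1, size=x_train.shape)
    x_test = np.linspace(0.05, 0.95, 50)
    y_test = 2 * x_test + rng.normal(scale=0.1, size=x_test.shape)

    def mse(coeffs, x, y):
        # Mean squared error of the polynomial given by coeffs.
        return np.mean((np.polyval(coeffs, x) - y) ** 2)

    for degree in (1, 9):
        coeffs = np.polyfit(x_train, y_train, degree)
        print(f"degree {degree}: "
              f"train MSE={mse(coeffs, x_train, y_train):.4f}, "
              f"test MSE={mse(coeffs, x_test, y_test):.4f}")

The degree-9 fit passes almost exactly through the ten training points, yet its test error is generally higher than the degree-1 fit's: it has learned the noise rather than the underlying pattern.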

replies(1): >>43586021 #
2. og_kalu ◴[] No.43586021[source]
His point is that it's only overfitting if the model won't return new content when you clarify you're not just asking for the obvious answer from the context.