MgB2 No.43574927
Idk, the models generating what are basically 1:1 copies of the training data from pretty generic descriptions feels like a severe case of overfitting to me. What use is a generative model that just regurgitates its training data?

I feel like the earlier, less advanced generations, maybe precisely because of their size limitations, were better at coming up with something that at least feels new.

In the end, other than for copyright-washing, why wouldn't I just use the original movie still/photo in the first place?
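[Editor's aside, not from the thread: to make the overfitting claim above concrete, here is a minimal sketch of one way to check whether a generated image is a near-verbatim copy of a known movie still, by comparing perceptual hashes and treating a small Hamming distance as a likely near-copy. The file paths, threshold value, and choice of the imagehash library are illustrative assumptions.]

    # Minimal sketch: flag a generated image as a likely near-copy of a
    # reference still by comparing perceptual hashes. A small Hamming
    # distance means the two images are visually near-identical.
    from PIL import Image
    import imagehash

    def looks_memorized(generated_path, reference_path, max_distance=8):
        gen_hash = imagehash.phash(Image.open(generated_path))
        ref_hash = imagehash.phash(Image.open(reference_path))
        # Subtracting two ImageHash objects yields their Hamming distance.
        return (gen_hash - ref_hash) <= max_distance

    # Hypothetical files: a model output and the still it appears to copy.
    print(looks_memorized("generated.png", "original_still.png"))

A distance near zero would support the "1:1 copy" reading; a larger distance suggests the output merely resembles the still rather than reproducing it.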

1. fermisea No.43577026
Why? Swap the context and not having that property is exactly what we call a hallucination.

Overall the model is tra