1503 points | participant3 | 10 comments
MgB2 ◴[] No.43574927[source]
Idk, the models generating what are basically 1:1 copies of the training data from pretty generic descriptions feels like a severe case of overfitting to me. What use is a generative model that just regurgitates its training data?

I feel like the less advanced model generations, maybe even because of their size limitations, were better at coming up with something that at least feels new.

In the end, other than for copyright-washing, why wouldn't I just use the original movie still/photo in the first place?

replies(13): >>43575052 #>>43575080 #>>43575231 #>>43576085 #>>43576153 #>>43577026 #>>43577350 #>>43578381 #>>43578512 #>>43578581 #>>43579012 #>>43579408 #>>43582494 #
ramraj07 ◴[] No.43578381[source]
So I train a model to say y=2, and then I ask the model to guess the value of y and it says 2, and you call that overfitting?

Overfitting would be if you didn't exactly describe Indiana Jones and it still gave you Indiana Jones.

replies(2): >>43578447 #>>43579929 #
MgB2 ◴[] No.43578447[source]
The prompt didn't exactly describe Indiana Jones though. It left the model a lot of freedom to make the "archeologist" female or Asian, put them in a different time period, have them wear a different kind of hat, etc.

It didn't though; it just spat out what is basically a 1:1 copy of some Indiana Jones promo shot. Nowhere did the prompt ask for it to look like Harrison Ford.

replies(3): >>43578572 #>>43582523 #>>43585657 #
1. fluidcruft ◴[] No.43578572[source]
But... the prompt neither forbade Indiana Jones nor did it describe something that excluded Indiana Jones.

If we were playing Charades, just about anyone would have guessed you were describing Indiana Jones.

If you gave a street artist the same prompt, you'd probably get something similar unless you specified something like "... but something different than Indiana Jones".
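
For what it's worth, image generators typically expose that "something different than Indiana Jones" clause as a negative prompt. A minimal sketch using the Hugging Face diffusers library; the model ID and prompt text here are illustrative, not whatever was actually used for the images being discussed:

    import torch
    from diffusers import StableDiffusionPipeline

    # Illustrative checkpoint; any text-to-image diffusers pipeline works the same way.
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    image = pipe(
        prompt="an archeologist with a whip and a fedora, adventure movie still",
        negative_prompt="Indiana Jones, Harrison Ford",  # steer away from the iconic look
    ).images[0]
    image.save("archeologist.png")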

replies(2): >>43578848 #>>43579133 #
2. 9dev ◴[] No.43578848[source]
And… that is called overfitting. If you show the model values for y, but they are 2 in 99% of all cases, it’s likely going to yield 2 when asked about the value of y, even if the prompt didn’t specify or forbid 2 specifically.
replies(2): >>43579142 #>>43580237 #
3. darkwater ◴[] No.43579133[source]
The nice thing about humans is that not every single human being has read almost all the content on the Internet. So yeah, a certain group of people would draw or think of Indiana Jones with that prompt, but not everyone. Maybe we will have different models with different training/settings that permit this kind of freedom, although I doubt they will be the commercial ones.
replies(1): >>43579245 #
4. FeepingCreature ◴[] No.43579142[source]
I would argue this is just fitting.
replies(1): >>43584348 #
5. dash2 ◴[] No.43579245[source]
I mean, did anyone here read the prompt and not think “Indiana Jones”?
replies(2): >>43579930 #>>43580193 #
6. darkwater ◴[] No.43579930{3}[source]
Is HN the whole world? Isn't an AI model supposed to be global, since it has ingested the whole Internet?

How can you express, in terms of AI training, ignoring the existence of something that's widely present in your training data set? If you asked the same question of an 18-year-old girl in rural Thailand, would she draw Harrison Ford as Indiana Jones? Maybe not. Or maybe she would.

But IMO an AI model must be able to provide a more generic (unbiased?) answer when the prompt wasn't specific enough.

replies(1): >>43580301 #
7. sethammons ◴[] No.43580193{3}[source]
I didn't think it. I imagined a cartoonish, chubby character in typical tan safari gear with a like-colored round explorer hat, swinging a whip like a lion tamer. He is mustachioed, light-skinned, and bespectacled. And I am well familiar with Dr. Jones.
8. IanCal ◴[] No.43580237[source]
> If you show the model values for y, but they are 2 in 99% of all cases, it’s likely going to yield 2 when asked about the value of y

That's not overfitting. That's either just correct or underfitting (if we say it's never returning anything but 2)!

Overfitting is where the model matches the training data too closely and has inferred a complex relationship using too many variables where there is really just noise.
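
To make the distinction concrete with the y=2 toy example from upthread, here is a minimal Python sketch (the data and numbers are made up for illustration): a constant model fit to data that is 2 in 99% of cases is just fitting, while a flexible model that bends to chase the rare noisy points is overfitting.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 100)
    y = np.full(100, 2.0)                 # y is exactly 2 in 99% of cases...
    y[rng.choice(100, size=1)] += 0.5     # ...with a single noisy exception

    # "Just fitting" (or underfitting, at worst): predict the dominant value.
    constant_pred = y.mean()

    # Overfitting: a high-degree polynomial chases the noisy point.
    coeffs = np.polyfit(x, y, deg=9)
    poly_pred = np.polyval(coeffs, x)

    print(f"constant model predicts ~{constant_pred:.2f} everywhere")
    print(f"degree-9 fit wanders between {poly_pred.min():.2f} and {poly_pred.max():.2f}")

Whether the image case is closer to the constant model or to the wandering polynomial is exactly what's being debated here.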

9. lupusreal ◴[] No.43580301{4}[source]
Why should the AI be made to emulate a person naive to extant human society, tropes and customs? That would only make it harder for most people to use.

Maybe it would have some point if you are targeting users in a substantially different social context. In that case, you would design the model to be familiar with their tropes instead. So when they describe a character iconic in their culture by a few distinguishing characteristics, it would produce that character for them. That's no different at all.

10. fluidcruft ◴[] No.43584348{3}[source]
If you take the perspective of all the possible responses to the request, then it is overfit because it only returns a non-generalized response.

But if you look at it from the perspective that there is only one example to learn from, then maybe it is not overfit.