
281 points | carabiner | 1 comment
intoamplitudes:
First impressions:

1. The data in most of the plots (see the appendix) look fake. Real life data does not look that clean.

2. In May 2022, six months before ChatGPT put genAI in the spotlight, how does a second-year PhD student manage to convince a large materials lab firm to conduct an experiment with over 1,000 of its employees? What model was used? The paper says only GANs+diffusion. Most of the technical details are just high-level general explanations of what these concepts are, nothing specific.

"Following a short pilot program, the lab began a large-scale rollout of the model in May of 2022." Anyone who has worked at a large company knows -- this just does not happen.

btrettel:
On point 2: a study being apparently impossible to conduct as described was also a problem for Michael LaCour. It seems like an underappreciated fraud-detection heuristic.

https://en.wikipedia.org/wiki/When_Contact_Changes_Minds

https://citeseerx.ist.psu.edu/document?repid=rep1&type=pdf&d...

> As we examined the study’s data in planning our own studies, two features surprised us: voters’ survey responses exhibit much higher test-retest reliabilities than we have observed in any other panel survey data, and the response and reinterview rates of the panel survey were significantly higher than we expected.

> The firm also denied having the capabilities to perform many aspects of the recruitment procedures described in LaCour and Green (2014).