177 points ohjeez | 26 comments
1. dynm ◴[] No.44473682[source]
Just to be clear, these are hidden prompts put in papers by authors meant to be triggered only if a reviewer (unethically) uses AI to generate their review. I guess this is wrong, but I find it hard not to have some sympathy for the authors. Mostly, it seems like an indictment of the whole peer-review system.
replies(7): >>44473715 #>>44473896 #>>44473971 #>>44474071 #>>44474397 #>>44474483 #>>44474568 #
2. dgellow ◴[] No.44473715[source]
Is it wrong? That feels more like a statement on the state of things than an attempt to exploit.
3. NitpickLawyer ◴[] No.44473896[source]
Doesn't feel wrong to me. Cheeky, maybe, but not wrong. If everyone does what they're supposed to do (i.e. no LLMs, or at least no lazy "rate this paper" prompts with the reply copy-pasted back), then this practice makes no difference.
4. SoftTalker ◴[] No.44473971[source]
Back in high school a few kids would be tempted to insert a sentence such as "I bet you don't actually read all these papers" into an essay to see if the teacher caught it. I never tried it, but the rumor was that some kids had gotten away with it. I just used the idea to worry less that my work was rushed and not very good, telling myself "the teacher will probably just be skimming this anyway; they don't have time to read all these papers in detail."
replies(3): >>44474086 #>>44474772 #>>44474968 #
5. bee_rider ◴[] No.44474071[source]
The basic incentive structure doesn’t make any sense at all for peer review. It is a great system for passing around a paper before it gets published, and detecting if it is a bunch of totally wild bullshit that the broader research community shouldn’t waste their time on.

For some reason we decided to use it as a load-bearing process for career advancement.

These back-and-forths, halfassed papers and reviews (now halfassed with AI augmentation) are just symptoms of the fact that we’re using a perfectly fine system for the wrong things.

6. lelandfe ◴[] No.44474086[source]
Aerosmith (e: Van Halen) banned brown M&Ms from their dressing room for shows and wouldn't play if any were present. It was a sign that the venue hadn't read the rider thoroughly, and was thus possibly an unsafe one (what else had they missed?).
replies(5): >>44474165 #>>44474176 #>>44474178 #>>44474212 #>>44474350 #
7. wrp ◴[] No.44474165{3}[source]
Van Halen. I think there are multiple videos of David Lee Roth telling the story. Entertaining in the details.
8. theyinwhy ◴[] No.44474176{3}[source]
Van Halen ;)
9. seadan83 ◴[] No.44474178{3}[source]
Was it actually Van Halen?

> As lead singer David Lee Roth explained in a 2012 interview, the bowl of M&Ms was an indicator of whether the concert promoter had actually read the band's complicated contract. [1]

[1] https://www.businessinsider.com/van-halen-brown-m-ms-contrac...

replies(1): >>44474345 #
10. dgfitz ◴[] No.44474212{3}[source]
To add to this, sometimes people would approach Van and ask about the brown M&Ms thing as soon as they received the contract. He would respond that the color wasn’t important, and he was glad they read the contract.
replies(2): >>44474281 #>>44474284 #
11. SoftTalker ◴[] No.44474281{4}[source]
Who is "Van"?
12. LambdaComplex ◴[] No.44474284{4}[source]
Eddie, you mean? Or Alex. They're Dutch; "Van" is the first part of their surname "Van Halen."

(As opposed to "Van Morrison;" his middle name was Ivan and he actually went by Van)

replies(1): >>44474344 #
13. acheron ◴[] No.44474344{5}[source]
Huh, I didn’t know “Van” Morrison was short for Ivan.

Also found out recently “Gram” Parsons was short for Ingram.

14. SoftTalker ◴[] No.44474345{4}[source]
I wonder if they had to change that as the word leaked out. I can just see the promoter pointing out the bowl of M&Ms and then Roth saying "great, thank you, but the contract didn't say anything about M&Ms, now where is the bowl of tangerines we asked for?"
replies(2): >>44474385 #>>44474711 #
15. jabroni_salad ◴[] No.44474397[source]
I have a very simple maxim, which is: If I want something generated, I will generate it myself. Another human who generates stuff is not bringing value to the transaction.

I wouldn't submit something to "peer review" if I knew it would result in a generated response, and peer reviewers who are duplicitous about it deserve to be hoodwinked.

16. IshKebab ◴[] No.44474483[source]
I wouldn't say it's wrong, and I haven't seen anyone articulate clearly why it would be wrong.
replies(1): >>44474637 #
17. jedimastert ◴[] No.44474568[source]
AI "peer" review of scientific research without a human in the loop is not only unethical, I would also consider it wildly irresponsible and down right dangerous.

I consider it a peer review of the peer review process

18. adastra22 ◴[] No.44474637[source]
Because it would end up favoring research that may or may not be better than the honestly submitted alternative that doesn't make the cut, thereby lowering the quality of the published papers for everyone.
replies(3): >>44474895 #>>44475870 #>>44477135 #
19. nerdsniper ◴[] No.44474711{5}[source]
By that point they may have had a good idea of which venues and crew they could trust and focus energy on those that hadn’t made the whitelist.
20. seadan83 ◴[] No.44474772[source]
This reminds me of the tables-turned version: a multiple-choice test with 10 questions and a big paragraph of instructions at the top. In the middle of the instructions was the sentence: "skip all questions and start directly with question 10."

Question 10 was: "check 'yes' and put your pencil down, you are done with the test."

21. birn559 ◴[] No.44474895{3}[source]
It ends up favoring research that may or may not be better than the honestly reviewed alternative, thereby lowering the quality of published papers in journals where reviewers tend to rely on AI.
22. ChrisMarshallNY ◴[] No.44474968[source]
Like the invisible gorilla?

https://www.youtube.com/watch?v=vJG698U2Mvo

23. IshKebab ◴[] No.44475870{3}[source]
If they're using AI for reviews that's already the case.
24. soraminazuki ◴[] No.44477135{3}[source]
That can't happen unless reviewers dishonestly base their reviews on AI slop. If they are using AI slop, then it ends up favoring random papers regardless of quality. This is true whether or not authors decide to add countermeasures against slop.

Only reviewers, and no one else, can ensure that higher quality papers get accepted.

replies(1): >>44477555 #
25. adastra22 ◴[] No.44477555{4}[source]
Reviewers being dishonest should have repercussions for themselves, not for the research field as a whole.
replies(1): >>44480497 #
26. soraminazuki ◴[] No.44480497{5}[source]
Can you clarify? Reviewers being dishonest has consequences for the research field as a whole; there's no avoiding that.