As a reviewer, if I see the authors lie in this way why should I trust anything else in the paper? The only ethical move is to reject immediately.
I acknowledge that mistakes and so on are common, but this is a different league of bad behaviour.
I clicked on 4 of those papers, and the pattern I saw was Middle Eastern, Indian, and Chinese names.
These are cultures where they think this kind of behavior is actually acceptable; they would assume it's the fault of the journal for accepting the paper. They don't see the loss of reputation as a personal scar, because they attribute the blame to the game instead.
Some people would say it's racist to understand this, but in my opinion, when I was working with people from these cultures, there was just no other way to learn to cooperate with them than to understand them. Until you understand the various differences between your own culture and theirs, working with them is an incredibly confusing experience.
>Anonymous authors
>Paper under double-blind review
I have a relative who lived in a country in the East for several years, and he says that this is just factually true.
The vast majority of people who disagree with this statement have never actually lived in these cultures. They just hallucinate that they have because they want that statement to be false so badly.
...but, simultaneously, I'm also not seeing where you're seeing the authors of the papers - I only see the hallucinated citation authors. E.g., at the link for the first paper submission (https://openreview.net/forum?id=WPgaGP4sVS), there don't appear to be any authors listed. Are you confusing the hallucinated citation authors with the primary paper authors?
In that case, I would expect Eastern authors to be over-represented, because they just publish a lot more.
AFAIK the submissions are still blinded and we don't know who the authors are. We surely will soon, since ICLR keeps all submissions on public record for posterity, even if "withdrawn"; they are unblinded after the review period finishes.
The names of the Asian/Indian people GP is referring to are explicitly stated in the article to be hallucinations. So, high- vs low-trust society questions aside, the entire assertion here is explicitly wrong. These are not authors submitting hallucinated content; these are fictitious authors who are themselves hallucinations.
You are making up a guy to get mad at
The side comment is right, it's about low versus high trust societies. Even if GP made a mistake on which names are relevant, they're not being racist about it.
In many fields it's gross professional misconduct only in theory. This sort of thing is very common and there's never any consequence. LLM-generated citations specifically are a new problem but citations of documents that don't support the claim, contradict it, have nothing to do with it or were retracted years ago have been an issue for a long time.
Gwern wrote about this here:
"A major source of [false claim] transmission is the frequency with which researchers do not read the papers they cite: because they do not read them, they repeat misstatements or add their own errors, further transforming the leprechaun and adding another link in the chain to anyone seeking the original source. This can be quantified by checking statements against the original paper, and examining the spread of typos in citations: someone reading the original will fix a typo in the usual citation, or is unlikely to make the same typo, and so will not repeat it. Both methods indicate high rates of non-reading"
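The typo-propagation signal described in the quote can be sketched as a toy check: collect the citation strings that different papers use for the same work, and any variant that deviates from the canonical entry yet recurs across papers was probably copied citation-to-citation rather than taken from the original. A minimal sketch (the citation strings and the page-range typo are invented for illustration):

```python
from collections import Counter

# Invented citation strings for the same paper, as they appear in five
# different citing articles; the page-range typo "34-65" marks likely copying.
citations = [
    "J. Folklore 12, 34-56",
    "J. Folklore 12, 34-65",
    "J. Folklore 12, 34-56",
    "J. Folklore 12, 34-65",
    "J. Folklore 12, 34-65",
]
canonical = "J. Folklore 12, 34-56"

# A variant that differs from the canonical entry and still recurs is
# unlikely to be an independent mistake: it probably spread by copying.
variants = Counter(citations)
copied = {v: n for v, n in variants.items() if v != canonical and n > 1}
print(copied)  # {'J. Folklore 12, 34-65': 3}
```

A real study would of course normalize formatting differences first, so only genuine errors (wrong pages, wrong year, misspelled names) count as shared typos.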
I first noticed this during COVID and did some blogging about it. In public health it is quite common to do things like present a number with a citation, and then the paper doesn't contain that number anywhere in it, or it does but the number was an arbitrary assumption pulled out of thin air rather than the empirical fact it was being presented as.
It was also very common for papers to open by saying something like, "Epidemiological models are a powerful tool for predicting the spread of disease" with eight different citations, and every single citation would be an unvalidated model - zero evidence that any of the cited models were actually good at prediction.
Bad citations are hardly the worst problem with these fields, but when you see how widespread it is and that nobody within the institutions cares it does lead to the reaction you're having where you just throw your hands up and declare whole fields to be writeoffs.
Besides, I would think most people use bibliographic managers like Zotero and co., which pull metadata through DOIs and the like.
The errors look a lot more like what happens when you ask an LLM for some sources on xyz.
However, I think hallucinated citations pose a bigger problem, because they're fundamentally a lie of commission rather than of omission, misinterpretation, or misrepresentation of facts.
At the same time, it may be an accidental lie, insofar as authors mistakenly used LLMs as search engines, just to support a claim that's commonly known, or that they remember well but can't find the origin of.
So, unless we reduce the pressure on publication speed, and increase the pressure for quality, we'll need to introduce more robust quality checks into peer review.
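One cheap mechanical check of the kind suggested here: before a human reviewer ever looks at a submission, flag references that carry no DOI-shaped identifier at all. This is only a sketch: actually confirming that a DOI resolves to the claimed paper would require a lookup against Crossref or DataCite, and the regex below is a simplified version of Crossref's recommended DOI pattern. The reference strings (and the DOI) are invented:

```python
import re

# Simplified version of Crossref's recommended pattern for modern DOIs.
DOI_RE = re.compile(r'10\.\d{4,9}/[-._;()/:A-Za-z0-9]+')

def find_doi(reference: str):
    """Return the first DOI-shaped substring in a reference, or None."""
    m = DOI_RE.search(reference)
    return m.group(0) if m else None

refs = [
    "Smith (2010), J. Folklore 12, 34-56. doi:10.1000/folk.2010.042",
    "Jones (2021), Imaginary Results in ML.",  # no DOI: flag for review
]
flagged = [r for r in refs if find_doi(r) is None]
print(len(flagged))  # 1
```

A missing DOI is not proof of a hallucinated citation (plenty of legitimate sources lack one), but it is a cheap signal for prioritizing which references a reviewer should spot-check.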
They're making broad assertions about specific societies, when those assertions are in this instance in no way related to TFA.
The edit button only exists for 2 hours, and this is not a person who comments frequently.
> That's one opinion. Here's another - they were waiting with their commentary locked and loaded, and failed to even read the source material in any detail before unloading it.
Well, almost a day later they replied "you can google the papers and find the arxiv articles where the authors are listed". Unless that is a blatant lie, it seems like a pretty good reason to think they were reasoning in good faith, not out of racism.