
177 points ohjeez | 3 comments
pcrh No.44474775
How is an LLM supposed to review an original manuscript?

At their core (and as far as I understand them), LLMs are trained on pre-existing texts and use statistical algorithms to stitch together new text that is consistent with those texts.
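For illustration, here is a toy version of that statistical stitching: a bigram model that picks each next word from counts over a pre-existing text. Real LLMs learn a neural network over tokens rather than raw counts, but the generate-one-word-at-a-time sampling loop is the same idea (this sketch is illustrative only, not how any particular model works):

    # Toy "statistical stitching": a bigram model that samples each next
    # word from the continuations observed in a pre-existing text.
    import random
    from collections import defaultdict

    corpus = "the model predicts the next word given the previous words".split()

    # Count which words follow which in the training text.
    follows = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev].append(nxt)

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            options = follows.get(word)
            if not options:
                break  # no continuation ever seen after this word
            word = random.choice(options)  # sample from observed continuations
            out.append(word)
        return " ".join(out)

    print(generate("the"))

The output is always locally plausible (every pair of adjacent words occurred in the training text) without the model evaluating anything, which is the point of the argument below.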

An original research manuscript will not have formed part of any LLM's training dataset, so there is no conceivable way for the model to evaluate it, regardless of claims about whether LLMs "understand" anything.

Reviewers who use LLMs are likely deluding themselves into thinking that AI has made them more productive, when in fact they are just polluting science through their own ignorance of epistemology.

replies(3): >>44474852 >>44474964 >>44475084
1. jeroenhd No.44474964
LLMs can find problems in logic, conclusions based on circumstantial evidence, common mistakes seen in other rejected papers, and other suspect language, even if they haven't seen the exact sentence structures used in their input. You'll catch plenty of improvements to scientific preprints that way, because humans aren't as good at writing long, complicated documents as we might think we are.
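For concreteness, here is a minimal sketch of that kind of review pass, assuming the OpenAI Python client; the model name, prompt, and file name are illustrative only, and (per the caveat below) the output still needs a human check:

    # Hypothetical review pass: ask a chat model to flag logical gaps in a
    # manuscript excerpt. Requires the openai package and OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    manuscript_excerpt = open("preprint.txt").read()  # illustrative file name

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[
            {"role": "system",
             "content": "You are a skeptical reviewer. Flag logical gaps, "
                        "conclusions that outrun the evidence, and unclear "
                        "claims. Quote the offending sentence for each one."},
            {"role": "user", "content": manuscript_excerpt},
        ],
    )
    print(response.choices[0].message.content)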

Sometimes they'll claim that a noun can only be used as a verb, or insist that you're Santa. LLMs can't be relied on to be accurate or truthful, of course.

I can imagine that non-computer-science people (and unfortunately some computer science people) believe LLMs are close to infallible. What's a biologist or a geographer going to know about the limits of ChatGPT? All they know is that the LLM did a great job spotting the grammatical issues in the paragraph they had it check, so it seems pretty legit, right?

replies(1): >>44474978
2. pcrh No.44474978
I don't doubt that LLMs can improve grammar. However, an original research paper should not be evaluated on the quality of its writing, unless the writing is so bad as to make the claims impenetrable.
replies(1): >>44475954
3. jeroenhd No.44475954
I totally agree, but I kind of doubt the people using LLMs to review their papers were ever interested in rigorously verifying the science in the first place.