
177 points ohjeez | 2 comments
pcrh ◴[] No.44474775[source]
How is an LLM supposed to review an original manuscript?

At their core (and as far as I understand them), LLMs are trained on pre-existing texts and use statistical methods to stitch together new text that is consistent with those texts.
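
(To make "stitch together text" concrete: at each step the model only assigns probabilities to the next token given everything written so far. A rough sketch of that single step, assuming the Hugging Face transformers library and the small public gpt2 checkpoint purely for illustration; the prompt and checkpoint here are my own placeholders, not anything tied to actual reviewing tools:)

    # Minimal sketch: an autoregressive LLM just scores candidate next tokens
    # based on patterns in its training corpus.
    # Assumes: pip install torch transformers  (gpt2 used only as an example)
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    prompt = "The reviewer concluded that the manuscript"  # hypothetical prompt
    inputs = tokenizer(prompt, return_tensors="pt")

    with torch.no_grad():
        logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

    # Probability distribution over the next token, derived entirely from
    # statistics of the training data.
    next_token_probs = torch.softmax(logits[0, -1], dim=-1)
    top = torch.topk(next_token_probs, k=5)
    for prob, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([idx.item()])!r}  p={prob.item():.3f}")

Generation is just this step run in a loop; nothing in it consults the manuscript's actual claims.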

An original research manuscript will not have formed part of any LLM's training dataset, so there is no conceivable way for the model to evaluate it, regardless of whether or not one claims that LLMs "understand" anything.

Reviewers who use LLMs are likely deluding themselves into thinking that AI makes them more productive, when in fact they are just polluting science through their own ignorance of epistemology.

replies(3): >>44474852 #>>44474964 #>>44475084 #
calebkaiser ◴[] No.44474852[source]
You might be interested in work around mechanistic interpretability! In particular, if you're interested in how models handle out-of-distribution information and apply in-context learning, research around so-called "circuits" might be up your alley: https://www.transformer-circuits.pub/2022/mech-interp-essay
replies(1): >>44474956 #
1. pcrh ◴[] No.44474956[source]
After a brief scan, I'm not competent to evaluate the essay by Chris Olah you posted.

I probably could get an LLM to do so, but I won't....

replies(1): >>44475123 #
2. qingcharles ◴[] No.44475123[source]
I ran it through an LLM and it said the paper was absolutely outstanding and perhaps the best paper of all time.