
615 points by __rito__

Related from yesterday: Show HN: Gemini Pro 3 imagines the HN front page 10 years from now - https://news.ycombinator.com/item?id=46205632
popinman322 (#46227755)
It doesn't look like the code anonymizes usernames before sending the thread for grading. That likely biases the grades toward prevailing opinions about certain users. It would be interesting to see the whole thing done again, but this time randomly re-assigning usernames, to measure the bias, and also with procedurally generated pseudonyms, to see whether the bias can be removed that way.

I'd expect de-biasing would deflate grades for well known users.

It might also be interesting to use a search-grounded model that provides citations for its grading claims; Gemini models expose this via their API, for example.
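The pseudonym idea above could be sketched roughly as follows. This is a hypothetical illustration, not the project's actual code: the thread format, word lists, and salt are all assumptions. The key property is that the mapping is deterministic per thread, so one user keeps one pseudonym and the conversational structure survives.

```python
import hashlib

# Hypothetical word lists for procedurally generated pseudonyms.
ADJECTIVES = ["quiet", "amber", "rapid", "mossy"]
ANIMALS = ["otter", "heron", "lynx", "finch"]

def pseudonym(username: str, salt: str = "thread-46227755") -> str:
    """Derive a stable pseudonym: same user + same salt -> same name.

    Using a per-thread salt means the same user gets a *different*
    pseudonym in different threads, limiting cross-thread linkage.
    """
    h = hashlib.sha256((salt + username).encode()).digest()
    return f"{ADJECTIVES[h[0] % len(ADJECTIVES)]}-{ANIMALS[h[1] % len(ANIMALS)]}-{h[2]:02x}"

def anonymize(comments: list[dict]) -> list[dict]:
    """Rewrite the 'user' field of each comment; the text is left untouched."""
    return [{**c, "user": pseudonym(c["user"])} for c in comments]

thread = [
    {"user": "popinman322", "text": "..."},
    {"user": "khafra", "text": "..."},
    {"user": "popinman322", "text": "..."},
]
anon = anonymize(thread)
# Same original user -> same pseudonym, so reply structure is preserved.
assert anon[0]["user"] == anon[2]["user"]
assert anon[0]["user"] != "popinman322"
```

Note this only hides the *labels*; as the reply below points out, writing style itself can still identify well-known users.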

replies (2): >>46228238, >>46231628
khafra (#46228238)
You can't anonymize comments from well-known users as far as an LLM is concerned: https://gwern.net/doc/statistics/stylometry/truesight/index
replies (1): >>46228785
WithinReason (#46228785)
That's an overly strong claim; an LLM could also be used to normalise the writing style before grading.
replies (1): >>46230638
wetpaws (#46230638)
How would you possibly grade comments if you change them?
replies (2): >>46230932, >>46230934
koakuma-chan (#46230934)
You don't need the comments themselves, just the facts in them, to check whether they're accurate.