
406 points doppio19 | 5 comments
1. xnx No.44438853
Did Fakespot work? I can't see how it would stand a chance against LLM-generated reviews without even the log (keystroke?) data that Amazon has.
replies(3): >>44438913 >>44439073 >>44439356
2. doppio19 No.44438913
I found that it did a pretty decent job. Certainly not 100% accurate, but it often picked up on signals that made me take a closer look at a listing than I would have otherwise.

I'm sure detection is getting harder as LLMs' writing patterns become less predictable, but I frequently come across reviews on Amazon that are so blatantly written by ChatGPT. A lot of these fake reviewers aren't particularly sneaky about it.
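The "blatantly written by ChatGPT" cases are often recognizable from a handful of stock phrases. As a toy illustration of that intuition (not Fakespot's actual method, and the phrase list here is my own assumption; real detectors use trained classifiers), a crude scorer might look like:

```python
# Toy heuristic, NOT Fakespot's method: count stock phrases that
# low-effort LLM-generated reviews often reuse. The phrase list is an
# illustrative assumption; a real detector would use a trained classifier.
STOCK_PHRASES = [
    "as an ai language model",
    "in conclusion,",
    "overall, i highly recommend",
    "game changer",
    "elevate your",
]

def llm_tell_score(review: str) -> int:
    """Return how many stock phrases appear in the review text."""
    text = review.lower()
    return sum(phrase in text for phrase in STOCK_PHRASES)
```

Anything scoring above zero would just be a prompt to read the review skeptically, not proof of fakery.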

replies(1): >>44438974 #
3. markrages No.44438974
I think a lot of real reviews are written by ChatGPT. People are lazy!
4. burnt-resistor No.44439073
Better than nothing. I'm not sure how well it worked, or whether it used any particularly advanced similarity checking or sentiment analysis.

It's pretty easy to spot obviously unrelated reviews that talk about or include pictures of completely different products. What's hard to spot is similar reviews written by bots or people paid to write as many reviews as possible using similar language, especially when there are thousands of reviews.
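The "similar language across many reviews" signal can be made concrete. A minimal sketch (my own illustration, not Fakespot's pipeline) is to compare reviews by Jaccard similarity over word 3-gram "shingles" and flag pairs that overlap heavily; at real scale you'd swap the pairwise loop for near-duplicate indexing like MinHash/LSH:

```python
# Hypothetical sketch: flag pairs of reviews that reuse suspiciously
# similar phrasing, via Jaccard similarity over word 3-gram shingles.
# Pairwise comparison is O(n^2); real systems index with MinHash/LSH.
import re
from itertools import combinations

def shingles(text: str, n: int = 3) -> set:
    """Lowercased word n-grams of a review."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: set, b: set) -> float:
    """Set overlap: |A ∩ B| / |A ∪ B|, 0.0 for empty inputs."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_similar_reviews(reviews: list, threshold: float = 0.5) -> list:
    """Return index pairs of reviews whose phrasing overlaps heavily."""
    sigs = [shingles(r) for r in reviews]
    return [(i, j) for i, j in combinations(range(len(reviews)), 2)
            if jaccard(sigs[i], sigs[j]) >= threshold]
```

Two templated reviews that share most of their wording get flagged as a pair; an independently written review won't match either of them.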

5. bb88 No.44439356
Over the last year it's been a mixed bag.

One issue is that seller warnings would appear on Prime-delivered products, where the risk to the buyer is already pretty much zero.

The letter-grade rating system wasn't very reliable either. I bought a few things that were rated "F" but were fine.

Today I go by a combination of sales volume and ratings. Amazon also flags some products as "frequently returned items" and notes on others that "customers usually keep this item." And I buy Prime-delivered items, so a return is not an issue for me.