
391 points JSeymourATL | 2 comments | | HN request time: 0.413s | source
indeed30 ◴[] No.42137192[source]
Hang on a minute. There is absolutely nothing in this research that measures the accuracy of this approach. A user saying "I was ghosted" is not, to my mind, proof of anything.

Job seekers almost never actually know if the job was real or not, so it's hard to see how Glassdoor reviews can ever provide the insight this work is looking for.

I do believe that "ghost" jobs exist, often for H1B purposes, but I don't think this work proves it.

replies(6): >>42137506 #>>42137560 #>>42137672 #>>42138681 #>>42138773 #>>42139233 #
1. bogtog ◴[] No.42137672[source]
If I'm understanding this right, the author gave ChatGPT-4o 2,000 reviews and asked it "Are you 90% sure that this is a ghost job?". Then the author used those responses as labeled examples, trained a BERT model to predict the ChatGPT decision, and applied the BERT model to the rest of the dataset. I guess this is cool, but if the goal is to pinpoint the overall percentage of ghost jobs, I'm very skeptical.
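The label-then-distill pipeline described above can be sketched roughly as follows. This is a minimal stand-in, not the paper's actual code: `oracle_label` mimics the GPT-4o prompt with a simple keyword rule, and the "student" is a tiny hand-rolled bag-of-words Naive Bayes classifier instead of a fine-tuned BERT model; all names and the sample reviews are hypothetical.

```python
from collections import Counter
import math

def oracle_label(review: str) -> int:
    # Stand-in for the GPT-4o call ("Are you 90% sure this is a ghost job?").
    # 1 = "ghost job", 0 = not.
    text = review.lower()
    return int("never heard back" in text or "ghost" in text)

class NaiveBayesStudent:
    """Cheap student model standing in for the fine-tuned BERT classifier."""
    def __init__(self):
        self.word_counts = {0: Counter(), 1: Counter()}
        self.class_counts = Counter()

    def fit(self, texts, labels):
        for text, y in zip(texts, labels):
            self.class_counts[y] += 1
            self.word_counts[y].update(text.lower().split())

    def predict(self, text):
        # Laplace-smoothed multinomial Naive Bayes in log space.
        vocab = set(self.word_counts[0]) | set(self.word_counts[1])
        total = sum(self.class_counts.values())
        scores = {}
        for y in (0, 1):
            score = math.log((self.class_counts[y] + 1) / (total + 2))
            denom = sum(self.word_counts[y].values()) + len(vocab)
            for w in text.lower().split():
                score += math.log((self.word_counts[y][w] + 1) / denom)
            scores[y] = score
        return max(scores, key=scores.get)

# Step 1: the oracle labels a small seed set (the paper reportedly used ~2,000 reviews).
seed = [
    "Applied twice and never heard back, posting stayed up for months",
    "Great interview process, got an offer within two weeks",
    "They ghosted me after the final round",
    "Smooth hiring, responsive recruiter",
]
labels = [oracle_label(r) for r in seed]

# Step 2: distill the oracle's decisions into the cheap student model.
student = NaiveBayesStudent()
student.fit(seed, labels)

# Step 3: apply the student to the rest of the (much larger) dataset.
rest = ["no response ever, they ghosted everyone",
        "offer accepted, good experience"]
predictions = [student.predict(r) for r in rest]
```

The point of the design is cost: the expensive labeler runs only on the seed set, while the cheap student scales to the full corpus, so any bias in the oracle's 2,000 decisions is inherited by every downstream prediction.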

(It's a bit disappointing that, 200 comments into this thread, there was only a single mention of either "BERT" or "ChatGPT", per Ctrl-F.)

replies(1): >>42169875 #
2. pas ◴[] No.42169875[source]
The naive keyword search shows 1.6%, the GPT-then-BERT approach shows 21%, and there's a ~50% estimate from the literature.

Confounders abound, of course, but the proposed mechanism and other aspects all make sense.