
283 points by Brajeshwar | 1 comment | source
simonw ◴[] No.45231789[source]
Something I'd be interested to understand is how widespread this practice is. Are all of the LLMs trained using human labor that is sometimes exposed to extreme content?

There are a whole lot of organizations training competent LLMs these days in addition to the big three (OpenAI, Google, Anthropic).

What about Mistral and Moonshot and Qwen and DeepSeek and Meta and Microsoft (Phi) and Hugging Face and Ai2 and MBZUAI? Do they all have their own (potentially outsourced) teams of human labelers?

I always look out for notes about this in model cards and papers but it's pretty rare to see any transparency about how this is done.

replies(6): >>45231815 #>>45231866 #>>45231939 #>>45232099 #>>45232271 #>>45234507 #
yvdriess ◴[] No.45231815[source]
One of the key enablers behind the early DNN/CNN models was Mechanical Turk. OpenAI used a similar system extensively to improve the early GPT models. I would not be surprised if the practice continues today; NN models need a lot of high-quality ground-truth training data.
replies(1): >>45231879 #
simonw ◴[] No.45231879[source]
Right, but where are the details?

Given the number of labs that are competing these days on "open weights" and "transparency" I'd be very interested to read details of how some of them are handling the human side of their model training.

I'm puzzled at how little information I've been able to find.

replies(3): >>45232288 #>>45233086 #>>45233538 #
conradkay ◴[] No.45233086[source]
Good article from 2023, though without much data, if that's what you're looking for:

https://nymag.com/intelligencer/article/ai-artificial-intell...

unwalled: https://archive.ph/Z6t35

Generally seems similar today, just on a bigger Scale, and with much more focus on coding.

Here in the US, DataAnnotation seems to be the most heavily marketed company offering these jobs.