    283 points Brajeshwar | 12 comments
    1. iandanforth ◴[] No.45231600[source]
    "Google said in a statement: “Quality raters are employed by our suppliers and are temporarily assigned to provide external feedback on our products. Their ratings are one of many aggregated data points that help us measure how well our systems are working, but do not directly impact our algorithms or models.” GlobalLogic declined to comment for this story." (emphasis mine)

    How is this not a straight-up lie? For this to be true, they would have to throw away labeled training data.

    replies(4): >>45231651 #>>45231697 #>>45231758 #>>45232359 #
    2. Gracana ◴[] No.45231651[source]
    They probably don’t do it at a scale large enough to do RLHF with it, but it’s still useful feedback for the people working on the projects / products.
    replies(1): >>45231708 #
    3. creddit ◴[] No.45231697[source]
    Because they are doing it to compute quality metrics, not to implement RLHF. It’s not training data.
    replies(1): >>45233477 #
    4. zozbot234 ◴[] No.45231708[source]
    More recent models actually use "reinforcement learning from AI feedback", where the task of assigning a reward is essentially fed back into the model itself. Human feedback is then only used to ground the training, on selected examples (potentially even entirely artificial ones) where the AI is most highly uncertain about what feedback should be given.
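
    To illustrate the shape of that, here is a minimal sketch of uncertainty-gated RLAIF feedback. Everything in it is a hypothetical stand-in (the sampled reward model, the cutoff), not anyone's actual pipeline:

      import random
      import statistics

      def sampled_reward_model(response: str) -> float:
          # Stand-in for an AI reward model sampled at nonzero
          # temperature (hypothetical).
          return random.random()

      UNCERTAINTY_CUTOFF = 0.25  # hypothetical threshold

      def assign_reward(response: str, human_queue: list) -> float | None:
          # Estimate the reward model's uncertainty by sampling it
          # several times on the same response.
          samples = [sampled_reward_model(response) for _ in range(8)]
          if statistics.stdev(samples) > UNCERTAINTY_CUTOFF:
              # Too uncertain: route this example to human raters
              # for grounding.
              human_queue.append(response)
              return None
          # Confident enough: the AI-assigned reward is used directly.
          return statistics.mean(samples)

      queue: list[str] = []
      print(assign_reward("candidate response", queue), queue)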
    5. teiferer ◴[] No.45231758[source]
    Key word: "directly"

    It does so indirectly, so it's a true albeit misleading statement.

    replies(1): >>45233857 #
    6. yobbo ◴[] No.45232359[source]
    > For this to be true they would have to throw away labeled training data.

    That's how validation works.

    replies(1): >>45233162 #
    7. jfengel ◴[] No.45233162[source]
    Is there a reason not to use validation data in your next round of training data? Or is it more efficient to keep reusing the validation set and get more training data instead?
    replies(1): >>45233504 #
    8. visarga ◴[] No.45233477[source]
    Every decision they make based on evals influences the model.
    replies(1): >>45234755 #
    9. parineum ◴[] No.45233504{3}[source]
    You'd have to recreate your validation set if you trained your model on it every iteration, and then it wouldn't be consistent enough to show any trends.
    replies(1): >>45240383 #
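
    As a toy illustration of the consistency point, a fixed hold-out split in plain Python (the names and sizes are made up):

      import random

      examples = list(range(1_000))  # toy labeled examples
      random.Random(0).shuffle(examples)

      # A fixed hold-out set: never trained on, so metric trends
      # across training iterations remain comparable to each other.
      val_set, train_set = examples[:200], examples[200:]

      # Folding val_set into training each iteration would force a
      # fresh hold-out every time, and iteration-over-iteration
      # scores would no longer measure the same thing.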
    10. skybrian ◴[] No.45233857[source]
    It's not part of the inner feedback loop. It's part of the outer feedback loop that they use to decide if the inner loop is working.
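
    A toy sketch of that distinction, with hypothetical stand-in names: aggregated rater scores pick among trained candidates in the outer loop, but never enter the inner training loop:

      def train(model_id: str) -> str:
          # Inner loop: gradient updates happen here; rater
          # feedback plays no part.
          return model_id + "-trained"

      def pick_release(candidates: list[str],
                       rater_scores: dict[str, float]) -> str:
          # Outer loop: aggregated rater scores decide which
          # candidate is working well enough to ship.
          return max(candidates, key=lambda m: rater_scores[m])

      candidates = [train("a"), train("b")]
      scores = {"a-trained": 0.42, "b-trained": 0.71}  # hypothetical ratings
      print(pick_release(candidates, scores))  # -> b-trained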
    11. creddit ◴[] No.45234755{3}[source]
    /"directly"/
    12. jfengel ◴[] No.45240383{4}[source]
    I'd have thought that if you kept the same validation set you'd risk overfitting.

    Clearly that does make it hard to measure. I'd think you'd want "equivalent" validation (like changing the SATs every year), though I imagine that's not really a meaningful concept.
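
    For what it's worth, a sketch of the "new SAT every year" idea, with made-up pool and split sizes: draw a fresh hold-out each cycle, at the cost of cross-cycle comparability:

      import random

      pool = list(range(10_000))  # toy pool of labeled examples

      def fresh_split(seed: int, k: int = 500):
          rng = random.Random(seed)
          shuffled = pool[:]
          rng.shuffle(shuffled)
          return shuffled[:k], shuffled[k:]  # (validation, training)

      for cycle in range(3):
          # A fresh hold-out each cycle limits overfitting to one
          # fixed set, but scores from different cycles no longer
          # measure quite the same thing.
          val, train = fresh_split(seed=cycle)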