
283 points by Brajeshwar | 1 comment
cs702 No.45231366
The title is biased, blaming Google for mistreating people and implying that Google's AI isn't smart. Still, the OP is worth reading, because it gives readers a sense of the labor and cost involved in providing AI models with human feedback, the HF in RLHF, to ensure they behave in ways acceptable to human beings and stay aligned with human expectations, values, and preferences.
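
For a concrete sense of where that human labor ends up, here is a minimal, hypothetical sketch (a PyTorch-style toy, not any specific lab's pipeline) of the standard reward-modeling step in RLHF: raters pick which of two responses is better, and those pairwise labels train a reward model with a Bradley-Terry loss. The RewardModel class, the embedding dimension, and the random "responses" below are illustrative stand-ins.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class RewardModel(nn.Module):
        """Toy stand-in: maps a pre-embedded response to a scalar reward."""
        def __init__(self, embed_dim: int = 16):
            super().__init__()
            self.score = nn.Sequential(
                nn.Linear(embed_dim, 32), nn.ReLU(), nn.Linear(32, 1)
            )

        def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
            return self.score(response_embedding).squeeze(-1)

    def preference_loss(r_chosen: torch.Tensor,
                        r_rejected: torch.Tensor) -> torch.Tensor:
        # Bradley-Terry pairwise loss: maximize the probability that the
        # human-preferred response scores higher than the rejected one.
        return -F.logsigmoid(r_chosen - r_rejected).mean()

    # Toy batch standing in for rater-labeled (chosen, rejected) pairs.
    chosen = torch.randn(8, 16)
    rejected = torch.randn(8, 16)

    model = RewardModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    optimizer.zero_grad()
    loss = preference_loss(model(chosen), model(rejected))
    loss.backward()
    optimizer.step()
    print(f"pairwise preference loss: {loss.item():.4f}")

The loss itself is a few lines; the expensive part the OP describes is producing the (chosen, rejected) labels at scale, one human judgment at a time.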
replies(6): >>45231394 #>>45231412 #>>45231441 #>>45231748 #>>45231773 #>>45233975 #
rs186 No.45231441
> to ensure the AI models are more aligned with human values and preferences.

to ensure the AI models are more aligned with Google's values and preferences.

FTFY

replies(2): >>45231582 #>>45231750 #
falcor84 No.45231582
I'm a big fan of cyberpunk dystopian fiction, but I still can't quite understand what you're alluding to here. Can you give an example of a value that Google aligns the AI with that you think isn't a positive human value?
replies(3): >>45231607 #>>45231665 #>>45231984 #
watwut No.45231984
Google likes it when it can show you more ads; that is not a positive human value.

It does not have to have anything to do with cyberpunk. Corporations are not people, but if they were, they would be powerful sociopaths. Their interests and anybody else's interests are not the same.