283 points Brajeshwar | 2 comments
cs702 ◴[] No.45231366[source]
The title is biased: it blames Google for mistreating people and implies that Google's AI isn't smart. Still, the OP is worth reading, because it gives readers a sense of the labor and cost involved in providing AI models with human feedback, the HF in RLHF, used to ensure the AI models behave in ways acceptable to human beings and are more aligned with human values and preferences.
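
For concreteness, here is a minimal sketch of where those human labels typically go: raters compare pairs of model responses, and a reward model is trained to score the preferred response higher (a Bradley-Terry style pairwise loss). This is a generic PyTorch illustration, not Google's actual pipeline; every name, shape, and hyperparameter below is an assumption.

    import torch
    import torch.nn as nn

    # Toy reward model: maps a response embedding to a scalar reward.
    # In real RLHF this head sits on top of a large language model.
    class RewardModel(nn.Module):
        def __init__(self, dim: int = 16):
            super().__init__()
            self.score = nn.Linear(dim, 1)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return self.score(x).squeeze(-1)

    model = RewardModel()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)

    # Each labeled example is a pair of responses to the same prompt,
    # where a human rater marked one "chosen" and one "rejected".
    # Random tensors stand in for response embeddings here.
    chosen, rejected = torch.randn(32, 16), torch.randn(32, 16)

    # Pairwise (Bradley-Terry) loss: push the chosen response's
    # reward above the rejected one's.
    opt.zero_grad()
    loss = -torch.nn.functional.logsigmoid(model(chosen) - model(rejected)).mean()
    loss.backward()
    opt.step()

The expensive part is not this training loop but producing the chosen/rejected labels at scale, which is the human labor the OP describes.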
replies(6): >>45231394 #>>45231412 #>>45231441 #>>45231748 #>>45231773 #>>45233975 #
rs186 ◴[] No.45231441[source]
> to ensure the AI models are more aligned with human values and preferences.

to ensure the AI models are more aligned with Google's values and preferences.

FTFY

replies(2): >>45231582 #>>45231750 #
falcor84 ◴[] No.45231582[source]
I'm a big fan of cyberpunk dystopian fiction, but I still can't quite understand what you're alluding to here. Can you give an example of a value Google aligns the AI with that you think isn't a positive human value?
replies(3): >>45231607 #>>45231665 #>>45231984 #
ToucanLoucan ◴[] No.45231665[source]
Their entire business model? Making search results worse to juice page impressions? Every dark pattern they use to juice subscriptions like every other SaaS company? Brand lock-in for Android? Paying Apple for prominent placement of their search engine in iOS? Anti-competitive practices in the Play Store? Taking a massive cut of Play Store revenue from people actually making software?
replies(1): >>45231805 #
simonw ◴[] No.45231805[source]
How does all of that affect the desired outputs for their LLMs?
replies(1): >>45232193 #
scotty79 ◴[] No.45232193[source]
You'll see once they figure it out.
replies(1): >>45232446 #
jondwillis ◴[] No.45232446[source]
Or, if they really figure it out, you’ll only feel it.