←back to thread

Google is winning on every AI front

(www.thealgorithmicbridge.com)
993 points vinhnx | 1 comments | | HN request time: 0.797s | source
Show context
gcanyon ◴[] No.43663844[source]
Several people have suggested that LLMs might end up ad-supported. I'll point out that "ad supported" might be incredibly subtle/insidious when applied to LLMs:

An LLM-based "adsense" could:

   1. Maintain a list of sponsors looking to buy ads
   2. Maintain a profile of users/ad targets 
   3. Monitor all inputs/outputs
   4. Insert "recommendations" (ads) smoothly/imperceptibly in the course of normal conversation
No one would ever need to/be able to know if the output:

"In order to increase hip flexibility, you might consider taking up yoga."

Was generated because it might lead to the question:

"What kind of yoga equipment could I use for that?"

Which could then lead to the output:

"You might want to get a yoga mat and foam blocks. I can describe some of the best moves for hips, or make some recommendations for foam blocks you need to do those moves?"

The above is ham-handed compared to what an LLM could do.

replies(8): >>43663872 #>>43663878 #>>43664836 #>>43665026 #>>43666361 #>>43668350 #>>43671835 #>>43682951 #
1. callmeal ◴[] No.43671835[source]
This is already being explored. See:

https://nlp.elvissaravia.com/i/159010545/auditing-llms-for-h...

  The researchers deliberately train a language model with a concealed objective (making it exploit reward model flaws in RLHF) and then attempt to expose it with different auditing techniques.