Most active commenters
  • Lerc(3)

←back to thread

Google is winning on every AI front

(www.thealgorithmicbridge.com)
993 points vinhnx | 22 comments | | HN request time: 1.094s | source | bottom
1. gcanyon ◴[] No.43663844[source]
Several people have suggested that LLMs might end up ad-supported. I'll point out that "ad supported" might be incredibly subtle/insidious when applied to LLMs:

An LLM-based "adsense" could:

   1. Maintain a list of sponsors looking to buy ads
   2. Maintain a profile of users/ad targets 
   3. Monitor all inputs/outputs
   4. Insert "recommendations" (ads) smoothly/imperceptibly in the course of normal conversation
No one would ever need to/be able to know if the output:

"In order to increase hip flexibility, you might consider taking up yoga."

Was generated because it might lead to the question:

"What kind of yoga equipment could I use for that?"

Which could then lead to the output:

"You might want to get a yoga mat and foam blocks. I can describe some of the best moves for hips, or make some recommendations for foam blocks you need to do those moves?"

The above is ham-handed compared to what an LLM could do.

replies(8): >>43663872 #>>43663878 #>>43664836 #>>43665026 #>>43666361 #>>43668350 #>>43671835 #>>43682951 #
2. JKCalhoun ◴[] No.43663872[source]
You ask two different corporate LLMs and compare answers.
replies(1): >>43670224 #
3. wccrawford ◴[] No.43663878[source]
Yeah, ad-supported LLMs would be incredibly bad.

But "free" is a magic word in our brains, and I'm 100% sure that many, many people will choose it over paying for it to be uncorrupted by ads.

replies(1): >>43665717 #
4. vbezhenar ◴[] No.43664836[source]
For me ads on web are acceptable as long as they are clearly distinguished from the content. As soon as ads gets merged into content, I'll be unhappy. If LLM would advertise something in a separate block, that's fine. if LLM augments its output to subtly nudge me to a specific brand which paid for placement, that's no-no.
5. Lerc ◴[] No.43665026[source]
LLMs should be legally required to act in the interest of their users (not their creators).

This is a standard that already applies to positions of advisors such as Medical professionals, lawyers and financial advisors.

I haven't seen this discussed much by regulators, but I have made a couple of submissions here and there expressing this opinion.

AIs will get better, and they will become more trusted. They cannot be allowed to sell the answer to the question "Who should I vote for?" To the highest bidder.

replies(3): >>43665427 #>>43667633 #>>43668243 #
6. asadalt ◴[] No.43665427[source]
but that would kill monetization no?
replies(1): >>43666197 #
7. torginus ◴[] No.43665717[source]
Free might as well be a curse-word to me, and I'm not alone. I'm old enough to have experience in pre-internet era magazines, and the downgrade in quality from paid publications to free ones has been quite substatial.

Free-to-play is a thing in video games, and for most, it means they'll try to bully you into spending more money than you'd be otherwise comfortable with.

I think everyone at this point had enough bad experiences with 'free' stuff to be wary of it.

replies(1): >>43665785 #
8. dragonwriter ◴[] No.43665785{3}[source]
> Free might as well be a curse-word to me, and I'm not alone. I'm old enough to have experience in pre-internet era magazines, and the downgrade in quality from paid publications to free ones has been quite substantial.

The cool thing is it is trivial for LLM vendors to leverage this bias as well the pro-free bias other people have to also sell a premium, for-pay offering that, like pre-internet magazines is, despite not being free to the user, still derives the overwhelming majority of its revenue from advertising. Although one of the main reasons advertising-sponsored print media in the pre-internet era often wasn't free is that paid circulation numbers were a powerful selling point for advertisers who didn't have access to the kind of analytics available on the internet; what users were paying for often wasn't the product so much as a mechanism of proving their value to advertisers.

9. dimal ◴[] No.43666197{3}[source]
Of course not. You’d have to pay for the product, just like we do with every other product in existence, other than software.

Software is the only type of product where this is even an issue. And we’re stuck with this model because VCs need to see hockey stick growth, and that generally doesn’t happen to paid products.

10. awongh ◴[] No.43666361[source]
To put on my techno-optimist hat, some specific searches I make already thinking please, please sell me something and google's results are horribly corrupted by SEO.

If an LLM could help solve this problem it would be great.

I think you could make a reasonable technical argument for this- an LLM has more contextual understanding of your high-intent question. Serve me some ads that are more relevant than the current ads based on this deeper understanding.

11. ysofunny ◴[] No.43667633[source]
> LLMs should be legally required to act in the interest of their users (not their creators).

lofty ideal... I don't see this ever happening; not anymore than I see humanity flat out abandoning the very concept of "money"

replies(1): >>43672329 #
12. Sebguer ◴[] No.43668243[source]
Who decides what's in the interest of the user?
replies(2): >>43672253 #>>43674655 #
13. sva_ ◴[] No.43668350[source]
Would be illegal in Germany ('Schleichwerbung') and perhaps the EU?

I think it is actually covered in EU AI act article 5 (a):

> [...] an AI system that deploys subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken [...]

It is very broad but I'm pretty sure it would be used against such marketing strategies.

replies(2): >>43668398 #>>43669549 #
14. whiplash451 ◴[] No.43668398[source]
The trick is in the « materially ».

The inability to demonstrate incrementality in advertising is going to come in very handy to dodge this rule.

replies(1): >>43668430 #
15. sva_ ◴[] No.43668430{3}[source]
Hmm yeah I guess I wasn't completely aware of that term and its implications. That seems like a pretty weird qualifier for such a law. Now it kind of makes it sound like the law wants to prevent people using AI in a way that makes your grandma transfer her life savings to them.

Clearly, most LLMs would work in small increments with compounding effects.

16. Vilian ◴[] No.43669549[source]
The broad is proposital to be effective law
17. pixl97 ◴[] No.43670224[source]
Every corporate LLM: "Why of course an ice cold Coca Cola is a healthy drink"
18. callmeal ◴[] No.43671835[source]
This is already being explored. See:

https://nlp.elvissaravia.com/i/159010545/auditing-llms-for-h...

  The researchers deliberately train a language model with a concealed objective (making it exploit reward model flaws in RLHF) and then attempt to expose it with different auditing techniques.
19. Lerc ◴[] No.43672253{3}[source]
The same same for the human professions, a set of agreed upon guidelines on acting in service of the client, and enforcement of penalties against identifiable instances of prioritizing the interests of another party over the client.

There will always be grey areas, these exist when human responsibilities are set also, and there will be those who skirt the edges. The matters of most concern are quite easily identifiable.

20. Lerc ◴[] No.43672329{3}[source]
I am not a fan of fatalism. Instead of saying it won't ever happen, we need to be asking to have rights.

At the very least you will force people to make the case for the opposing opinion, and we learn who they are and why they think that.

Lawyers cannot act against their clients, do you think we have irreparably lost the ability as a society to create similar protections in the future.

21. btbuildem ◴[] No.43674655{3}[source]
Ideally, the user.
22. joshvm ◴[] No.43682951[source]
I'm not convinced this is any worse than searching for results or reviews and being directed to content that is affiliate supported (or astroturfed by companies). Humans already do this sort of subtle nudging and lots of people position themselves as unbiased. So many blogs are annoying "buried lede" advertising where the article seems vaguely useful until you realise that it's just a veiled attempt to sell you something. Virtually every reviewer on YouTube seems obliged to open with "my thoughts are my own, the company doesn't get to edit my review, etc."

On the other hand, a good LLM would be able to suggest things that you might actually want, using genuine personal preferences. Whether you think that's an invasion of privacy is debatable, because it's perfectly possible for an LLM to provide product results without sharing your profile with anyone else.