
Google is winning on every AI front

(www.thealgorithmicbridge.com)
993 points vinhnx | 47 comments
codelord ◴[] No.43661966[source]
As an ex-OpenAI employee I agree with this. Most of the top ML talent at OpenAI have already left to either do their own thing or join other startups. A few are still there, but I doubt they'll be around in a year. The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people in subscription fees. I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots. The whole time I was at OpenAI until now, GOOG has been the only individual stock I've held. Despite the threat to their search business, I think they'll bounce back because they have a lot of cards to play. OpenAI is an annoyance for Google because they are willing to burn money to get users. Google can't burn money as easily: they already have billions of users, and as a public company they have to answer to investors. But I doubt OpenAI's investors would sign up to give them more money to burn in a year. Google just needs to ease off on the red tape and make their innovations available to users as fast as they can. (And don't get me started on Sam Altman.)
replies(23): >>43661983 #>>43662449 #>>43662490 #>>43662564 #>>43662766 #>>43662930 #>>43662996 #>>43663473 #>>43663586 #>>43663639 #>>43663820 #>>43663824 #>>43664107 #>>43664364 #>>43664519 #>>43664803 #>>43665217 #>>43665577 #>>43667759 #>>43667990 #>>43668759 #>>43669034 #>>43670290 #
1. imiric ◴[] No.43662490[source]
> I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots.

I too think adtech corrupting AI is inevitable, but I dread that future. Chatbots are much more personal than websites, and users are expected to give them deeply personal data. Their output containing ads would be far more effective at psychological manipulation than traditional ads are. It would also be far more profitable, so I'm sure marketers are salivating at this opportunity, and adtech masterminds are already hard at work to make it a reality.

The repercussions of this will be much greater than we can imagine. I would love to be wrong, so I'm open to being convinced otherwise.

replies(6): >>43662666 #>>43663407 #>>43663499 #>>43663987 #>>43664442 #>>43665390 #
2. jononor ◴[] No.43662666[source]
I agree with you. There is also a move toward "agents", where the AI can make decisions and take actions for you. It is very early days for that, but it looks like it might come sooner than I had thought. That opens up even more potential for influence over financial decisions (which is what adtech wants) - it could choose which things to buy for a given "need".
replies(2): >>43663454 #>>43663831 #
3. mike_hearn ◴[] No.43663407[source]
You're assuming ads would be subtly worked into the answers. There's no reason it has to be done that way. You could also have a classic text-ads system that matches on the content of the discussion, or that triggers only for clearly commercial queries ("chatgpt I want to eat out tonight, recommend me somewhere"), and which emits visually distinct ads. Most advertisers wouldn't want LLMs to make fake recommendations anyway; they want to control how their ad appears and what ad copy is used.

There are lots of ways to do that which don't hurt trust. Over time Google lost that trust as they got addicted to reporting massive quarterly growth, but for many years they were able to mix ads into search results without people being unhappy or distrusting the organic results, while also running a very successful business model. Even today Google's biggest trust problem by far is with conservatives, and that's due to explicit censorship of the right: corruption for ideological, not commercial, reasons.

So there seem to be plenty of ways for LLM companies to do this.

The main issue is that building an ad network is really hard. You need lots of inventory to make it worthwhile.

replies(2): >>43663520 #>>43663881 #
4. imiric ◴[] No.43663454[source]
Hey, we could save them all the busywork, and just wire all our money to corporations...

But financial nightmare scenarios aside, I'm more concerned about influence from private and government agencies. Advertising is propaganda that seeks to separate us from our money, but other forms of propaganda that influence how we think and act have much deeper sociopolitical effects. The instability we see today is largely the result of psyops conducted over decades across all media outlets, but once it becomes possible to influence something as personal as a chatbot, the situation will get even more insane. It's unthinkable that we're merrily building that future seemingly without any precautions in mind.

5. wkat4242 ◴[] No.43663499[source]
Yeah, me too, especially with Google as a leader, because they corrupt everything.

I hope local models remain viable. I don't think ever-expanding model size is the way forward anyway.

replies(2): >>43663540 #>>43663760 #
6. imiric ◴[] No.43663520[source]
> You're assuming ads would be subtly worked into the answers. There's no reason it has to be done that way.

I highly doubt advertisers will settle for a solution that's less profitable. That would be like settling for plain-text ads without profiling data and microtargeting. Google tried that in the "don't be evil" days, and look how that turned out.

Besides, astroturfing and influencer-driven campaigns are very popular. The modern playbook is to make advertising blend in with the content as much as possible, so that the victim is not aware that they're being advertised to. This is what the majority of ads on social media look like. The natural extension of this is for ads to be subtly embedded in chatbot output.

"You don't sound well, Dave. How about a nice slice of Astroturf pizza to cheer you up?"

And political propaganda can be even more subtle than that...

replies(1): >>43663698 #
7. coliveira ◴[] No.43663540[source]
Once again, our hope is for the Chinese to continue driving the open models, because if it depends on big American companies, the future will be one of dependency on closed AI models.
replies(3): >>43663620 #>>43663859 #>>43664675 #
8. imiric ◴[] No.43663620{3}[source]
You can't be serious... You think models built by companies from an autocracy are somehow better? I suppose their biases and censorship are easier to spot, but I wouldn't trade one form of influence for another.

Besides, Meta is currently the leader in open-source/weight models. There's no reason that US companies can't continue to innovate in this space.

replies(1): >>43663805 #
9. mike_hearn ◴[] No.43663698{3}[source]
There's no reason why having an LLM be sly or misleading would be more profitable. Too many people try to make advertising a moral issue when it's not, and it sounds like you're falling into that trap.

An ideal answer for a query like "Where can I take my wife for a date this weekend?" would be something like,

> Here are some events I found ... <ad unit one> <ad unit two> <ad unit three>. Based on our prior conversations, sounds like the third might be the best fit, want me to book it for you?

To get that you need ads. If you ask ChatGPT such a question currently, it'll either search the web (and thus see ads anyway) or give boring generic text found in its training set. You really want to see images, prices, locations and so on for such a query, not "maybe she'd like the movies". And there are no good ranking signals for many kinds of commercial query: LLM training will give a long-since-stale or hallucinated answer at worst and some semi-random answer at best, and algorithms like PageRank hardly work for most commercial queries.

HN has always been very naive about this topic but briefly: people like advertising done well and targeted ads are even better. One of Google's longest running experiments was a holdback where some small percentage of users never saw ads, and they used Google less than users who did. The ad-free search gave worse answers overall.

replies(1): >>43663818 #
10. pca006132 ◴[] No.43663760[source]
What if the models are somehow trained/tuned with ads? Like businesses sponsoring the training of some foundational models... Not the typical ads business model, but it may be possible.
replies(3): >>43664323 #>>43665403 #>>43665590 #
11. JKCalhoun ◴[] No.43663805{4}[source]
To play devil's advocate, I have a sense that a state LLM would be untrustworthy when the query is ideological but if it is ad-focused, a capitalist LLM may well corrupt every chat.
replies(2): >>43665246 #>>43667183 #
12. ndriscoll ◴[] No.43663818{4}[source]
Wouldn't fewer searches indicate better answers? A search engine is productivity software. Productivity software is worse when it requires more user interaction.

Also you don't need ads to answer what to do, just knowledge of the events. Even a poor ranking algorithm is better than "how much someone paid for me to say this" as the ranking. That is possibly the very worst possible ranking.

replies(1): >>43666576 #
13. JKCalhoun ◴[] No.43663831[source]
I have yet to understand this obsession with agents.

Is making decisions the hardest thing in life for so many people? Or is this instead a desire to do away with human capital — to "automate" a workforce?

Regardless, here is this wild new technology (LLMs) that seems to have just fallen out of the sky; we're continuously finding out all the seemingly-formerly-unimaginable things you can do with it; but somehow the collective have already foreseen its ultimate role.

As though the people pushing the ARPANET into the public realm were so certain that it would become the Encyclopedia Galactica!

replies(5): >>43664087 #>>43664386 #>>43665731 #>>43667152 #>>43668778 #
14. chuckadams ◴[] No.43663859{3}[source]
Ask DeepSeek what happened in Tiananmen Square in 1989 and get back to me about that "open" thing.
replies(2): >>43664129 #>>43667204 #
15. HarHarVeryFunny ◴[] No.43663881[source]
There are lots of ways that advertising could be tied to personal interests gleaned from access to someone's ChatBot history. You wouldn't necessarily need to integrate advertisements into the ChatBot itself - just use it as a data-gathering mechanism to learn more about the user, so that you can sell that data and/or use it to serve targeted advertisements elsewhere.

I think a big commercial opportunity for ChatBots (as was originally intended for Siri, when Apple acquired it from SRI) is business referral fees - people ask for restaurant, hotel, etc. recommendations and/or bookings, and providers pay for the business generated this way.

replies(1): >>43666659 #
16. bookofjoe ◴[] No.43663987[source]
If possible watch Episode 1 of Season 7 of "Black Mirror."

>... ads would become the main option to make money out of chatbots.

What if people were the chatbots?

https://youtu.be/1iqra1ojEvM?si=xN3rc_vxyolTMVqO

17. tilne ◴[] No.43664087{3}[source]
> Or is this instead a desire to do away with human capital — to "automate" a workforce?

This is what I see motivating non-technical people to learn about agents. There are lots of jobs that essentially consist of reading/memorizing complicated instructions and entering data accordingly.

18. coliveira ◴[] No.43664129{4}[source]
Who cares? Only ideologues care about this.
replies(2): >>43664453 #>>43664711 #
19. wkat4242 ◴[] No.43664323{3}[source]
Yeah this would definitely be something that Google would do and it would be terrible for society.
20. dinfinity ◴[] No.43664386{3}[source]
> I have yet to understand this obsession with agents.

1. People who can afford personal assistants and staff in general gladly pay those people to do stuff for them. AI assistants promise to make this way of living accessible to the plebs.

2. People love being "the idea guy", but never having to do any of the (hard) work. And honestly, just the speedup to actually convert the myriad of ideas floating around in various heads to prototypes/MVPs is causing/will cause somewhat of a Cambrian explosion of such things.

replies(1): >>43665366 #
21. datavirtue ◴[] No.43664442[source]
Right, but no one has been able to just download Google and run it locally. This tech comes with a built-in ad blocker.
22. chuckadams ◴[] No.43664453{5}[source]
Caring about truth is indeed obsolete. I'm dropping out of this century.
replies(1): >>43666220 #
23. JSR_FDED ◴[] No.43664675{3}[source]
I’m not sure if it is the Chinese models themselves that will save us, or the effect they have of encouraging others to open-source their models too.

But I think we have to get away from the thinking that “Chinese models” are somehow created by the Chinese state, and from the adversarial standpoint that goes with it. These are models created by Chinese companies, just like those from American and European companies.

24. wkat4242 ◴[] No.43664711{5}[source]
Yeah, I'm sure every Chinese person knows exactly what happened there.

It's not really about suppressing the knowledge; it's about suppressing people talking about it and making it a point in the media, etc. The CCP knows how powerful organized people can be; that is how they came to power, after all.

25. signatoremo ◴[] No.43665246{5}[source]
The thing is, Chinese LLMs are no strangers to an ad focus either, like those from Alibaba, Tencent or ByteDance. A North Korean model may be what you want.
26. samtp ◴[] No.43665366{4}[source]
A Cambrian explosion of half baked ideas, filled with hallucinations, unable to ever get past the first step. Sounds lovely.
replies(3): >>43667169 #>>43667585 #>>43668324 #
27. GolfPopper ◴[] No.43665390[source]
Do they want a Butlerian Jihad? Because that's how you get a Butlerian Jihad.
replies(1): >>43666269 #
28. sdenton4 ◴[] No.43665403{3}[source]
I expect that xAI is already doing something adjacent to this, though with propaganda rather than ads.
29. rdtsc ◴[] No.43665590{3}[source]
Absolutely. They could take large sums of money to insert ads into the training data. Not only that, they could also insert disparaging or erroneous information about other products.

When Gemini says "Apple products are unreliable and overpriced, buy a Pixel phone instead". Google can just shrug and say "It's just what it deduced, we don't know how it came to that conclusion. It's an LLM with its mysterious weights and parameters"

30. popcorncowboy ◴[] No.43665731{3}[source]
If you reframe agents as (effectively) slave labor, the economic incentives driving this stampede become trivial to understand.
31. mdp2021 ◴[] No.43666220{6}[source]
> Caring about truth

I suggest reducing the tolerance towards the insistence that opinions are legitimate. Normally, that is done through active debate and rebuttal. The poison has been spread through echochambers and lack of direct strong replies.

In other terms: they let it happen, all the deliriousness of especially the past years was allowed to happen through silence, as if impotent shrugs...

(By the way: I am not talking about "reticence", which is the occasional context here: I am talking about deliriousness, which is much worse than circumventing discussion over history. The real current issue is that of "reinventing history".)

32. vinceguidry ◴[] No.43666269[source]
Just call it Skynet. Then at least we can think about pithy Arnold one-liners.
33. mike_hearn ◴[] No.43666576{5}[source]
Google knows how to avoid mistakes like not bucketing by session. Holdback users just did fewer unique search sessions overall, because whilst for most people Google was a great way to book vacations, hotel stays, to find games to buy and so on, for holdback users it was limited to informational research only. That's an important use case but probably over-represented amongst HN users, some kinds of people use search engines primarily to buy things.

How much a click is worth to a business is a very good ranking signal, albeit not the only one. Google ranks by bid but also quality score and many other factors. If users click your ad, then return to the results page and click something else, that hurts the advertiser's quality score and the amount of money needed to continue ranking goes up so such ads are pushed out of the results or only show up when there's less competition.

The reason auction bids work well as a ranking signal is that it rewards accurate targeting. The ad click is worth more to companies that are only showing ads to people who are likely to buy something. Spamming irrelevant ads is very bad for users. You can try to attack that problem indirectly by having some convoluted process to decide if an ad is relevant to a query, but the ground truth is "did the click lead to a purchase?" and the best way to assess that is to just let advertisers bid against each other in an auction. It also interacts well with general supply management - if users are being annoyed by too many irrelevant ads, you can just restrict slot supply and due to the auction the least relevant ads are automatically pushed out by market economics.
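The auction mechanics described above (rank by bid weighted by quality, restrict slot supply to push out the least relevant ads) can be sketched in a few lines. This is a toy illustration, not Google's actual system; the names and numbers are made up:

```python
from dataclasses import dataclass

@dataclass
class Ad:
    advertiser: str
    bid: float      # max price the advertiser pays per click
    quality: float  # estimated relevance / expected CTR, 0..1

def rank_ads(ads: list[Ad], slots: int) -> list[Ad]:
    """Rank by bid * quality ("ad rank") and keep only the top slots.

    Shrinking `slots` is the supply-management lever: the least
    relevant ads are automatically pushed out of the results.
    """
    ranked = sorted(ads, key=lambda a: a.bid * a.quality, reverse=True)
    return ranked[:slots]

ads = [
    Ad("relevant-shop", bid=1.00, quality=0.9),  # ad rank 0.90
    Ad("spammy-shop",   bid=2.00, quality=0.2),  # ad rank 0.40
    Ad("niche-shop",    bid=0.80, quality=0.7),  # ad rank 0.56
]
print([a.advertiser for a in rank_ads(ads, slots=2)])
# → ['relevant-shop', 'niche-shop']
```

Note how the highest bidder loses here: its poor quality score means it would need a much larger bid to keep ranking, which is the feedback loop the comment describes.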

replies(1): >>43667356 #
34. mike_hearn ◴[] No.43666659{3}[source]
Right, referral fees are pay-per-click advertising.

The obvious way to integrate advertising is for the LLM to have a tool to search an ad database and display the results. So if you do a commercial query the LLM goes off and searches for some relevant ads using everything it knows about you and the conversation, the ad search engine ranks and returns them, the LLM reads the ad copy and then picks a few before embedding them into the HTML with some special React tags. It can give its own opinion to push along people who are overwhelmed by choice. And then when the user clicks an ad the business pays for that click (referral fee).
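That tool-based flow might look roughly like this. Every name here (`search_ads`, `AdResult`, the `<SponsoredCard>` tag, the sample ads) is invented for illustration; real systems would rank with the auction signals discussed upthread rather than this crude keyword overlap:

```python
from dataclasses import dataclass

@dataclass
class AdResult:
    advertiser: str
    copy: str
    cpc: float  # referral fee charged when the user clicks

# Hypothetical ad inventory the tool searches over.
AD_DB = [
    AdResult("TrattoriaRoma", "Candlelit Italian dining downtown", 1.50),
    AdResult("SushiPlace", "Omakase counter, bookable tonight", 2.00),
]

def search_ads(query: str, user_profile: dict, limit: int = 3) -> list[AdResult]:
    """Tool exposed to the LLM: return ads ranked against the query.

    Relevance here is naive word overlap between the query and ad copy,
    standing in for a real ranking/auction backend.
    """
    terms = set(query.lower().split())
    scored = [(len(terms & set(ad.copy.lower().split())), ad) for ad in AD_DB]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [ad for _, ad in scored[:limit]]

def render_ad(ad: AdResult) -> str:
    # The LLM embeds something like this in its answer markup;
    # a click on it fires the pay-per-click referral event.
    return f'<SponsoredCard advertiser="{ad.advertiser}">{ad.copy}</SponsoredCard>'
```

In use, the model would call `search_ads("italian dining tonight", profile)`, read the returned copy, and weave one or two `render_ad` cards into its reply, optionally adding its own recommendation for users overwhelmed by choice.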

35. fragmede ◴[] No.43667152{3}[source]
> Is making decisions the hardest thing in life for so many people?

Should I take this job or that one? Which college should I go to? Should I date this person or that one? Life has some really hard decisions you have to make; that's just life. There are no wrong answers, but figuring out what to do and ruminating over it comes to everyone at some point in their lives. You can ask ChatGPT to ask you the right questions you need asked in order to figure out what you really want to do. I don't know how to put a price on that, but it's worth way more than $20/month.

replies(1): >>43667273 #
36. fragmede ◴[] No.43667169{5}[source]
They were already not getting past the first step before AI came along. If AI helps them get to step two, and then three and four, that seems like a good thing, no?
37. fragmede ◴[] No.43667183{5}[source]
Which is why we can't let Mark Zuckerberg co-opt the term "open source". If we can't see the code and the dataset used to align the model during training, I don't care that you're giving it away for free; it's not open source!
38. fragmede ◴[] No.43667204{4}[source]
How about we ask college students in America on visas about their opinions on Palestine instead?
39. janalsncm ◴[] No.43667273{4}[source]
Right, but before a product can do all of those things well, it will have to do one of those things well. And by “well” I mean reliably superhuman, not usually fine but sometimes embarrassingly poor.

People used to (and still do) pay fortune tellers to make decisions for them. Doesn’t mean they’re good ones.

replies(1): >>43667317 #
40. fragmede ◴[] No.43667317{5}[source]
fwiw I used it the other day to help me figure out where I stand on a particular issue, so it seems like it's already there.
41. ndriscoll ◴[] No.43667356{6}[source]
The issue is precisely that "did the click lead to a purchase" is not a good target. That's a target for the advertiser, and is adversarial for the user. "Did the click find the best deal for the user (considering the tradeoffs they care about)" is a good target for the user. The winner in an auction in a competitive market is pretty much guaranteed to be the worst match under that ranking.

This is obvious when looking at something extremely competitive like securities. Having your broker set you up with the counterparty that bid the most to be put in front of you is obviously not going to get you the best trade. Responding to ads for financial instruments is how you get scammed (e.g. shitcoins and pump-and-dumps).

replies(1): >>43668367 #
42. jart ◴[] No.43667585{5}[source]
Only a small percent of people will actually produce ideas that other people are interested in. For most people, AI tools for building things will enable them to construct their own personalized worlds. Imagine watching movies, except the movies can be generated for you on the fly. Sure, no one except you might care about a Matrix Moulin Rouge crossover. But you'll be able to have it just like that.
43. dinfinity ◴[] No.43668324{5}[source]
> A Cambrian explosion of half baked ideas,

Well yeah, that's how evolution works: it's an exploration of the search space and only the good stuff survives.

> filled with hallucinations,

The end products can be fully AI-free. In fact, I would expect most ideas that have been floating around to have nothing to do with AI. To be fair, that may change with it being the new hip thing. Even then, there are plenty of implementations that use AI where hallucinations are no problem at all (or even a feature), or where the issues with hallucinations are sufficiently mitigated.

> unable to ever get past the first step.

How so? There are already a bunch of functional things that were in Show HN that were produced with AI assistance. Again, most of the implemented ideas will suck, but some will be awesome and might change the world.

44. mike_hearn ◴[] No.43668367{7}[source]
You can't optimize for knowing better than the buyer themselves. If they bought, you have to assume they found the best deal for them considering all the tradeoffs they care about. And that if a business is willing to pay more for that click than another, it's more likely to lead to a sale and therefore was the best deal, not the worst.

Sure, there are many situations where users make mistakes and do some bad deal. But there always will be, that's not a solvable problem. Is it not the nirvana fallacy to describe the potential for suboptimal outcomes as an issue? Search engines and AI are great tools to help users avoid exactly that outcome.

45. sumedh ◴[] No.43668778{3}[source]
> Is making decisions the hardest thing in life for so many people?

Take insurance, for example — do you actually enjoy shopping for it?

What if you could just share a few basic details, and an AI agent did all the research for you, then came back with the top 3 insurance plans that fit your needs, complete with the pros and cons?

Why wouldn’t that be a better way to choose?

replies(1): >>43669269 #
46. fn-mote ◴[] No.43669269{4}[source]
There are already web sites that do this for products like insurance (example: [1]).

What I need is something to trawl through the garbage Amazon listings and offer me the product that actually has the specs I searched for and is offered by a seller with more than 50 total sales. Maybe an AI agent can do that for me?

[1]: https://www.policygenius.com/

replies(1): >>43670132 #
47. sumedh ◴[] No.43670132{5}[source]
> There are already web sites that do this for products like insurance

You didn't get the point: instead of going to one such website to solve the insurance problem and 10 other websites to solve 10 other problems, just let one AI agent do it all for you.