
Google AI Ultra

(blog.google)
320 points mfiguiere | 43 comments
charles_f ◴[] No.44045393[source]
This is the kind of pricing that I expect most AI companies are gonna try to push for, and it might get even more expensive with time. When you see the delta between what's currently being burnt by OpenAI and what they bring home, the sweet point is going to be hard to find.

Whether you find that you get $250 worth out of that subscription is going to be the big question

replies(5): >>44045528 #>>44045820 #>>44045959 #>>44046010 #>>44058223 #
1. Ancapistani ◴[] No.44045528[source]
I agree, and the problem is that "value" != "utilization".

It costs the provider the same whether the user is asking for advice on changing a recipe or building a comprehensive project plan for a major software product - but the latter provides much more value than the former.

How can you extract an optimal price from the high-value use cases without making it prohibitively expensive for the low-value ones?

Worse, the "low-value" use cases likely influence public perception a great deal. If you drive the general public off your platform in an attempt to extract value from the professionals, your platform may never grow to the point that the professionals hear about it in the first place.

replies(6): >>44045906 #>>44045964 #>>44046505 #>>44047071 #>>44050638 #>>44052117 #
2. garrickvanburen ◴[] No.44045906[source]
This is the problem Google search originally had.

They successfully solved it with advertising... and they also had the ability to cache results.

replies(2): >>44046734 #>>44047305 #
3. jsheard ◴[] No.44045964[source]
I wonder who will be the first to bite the bullet and try charging different rates for LLM inference depending on whether it's for commercial purposes. Enforcement would be a nightmare but they'd probably try to throw AI at that as well, successfully or not.
replies(3): >>44046230 #>>44047061 #>>44048952 #
4. chis ◴[] No.44046230[source]
I think there are always creative ways to differentiate the two tiers for those who care.

“Free tier users relinquish all rights to their (anonymized) queries, which may be used for training purposes. Enterprise tier, for $200/mo, guarantees queries can only be seen by the user”

replies(4): >>44046418 #>>44046476 #>>44047192 #>>44049566 #
5. emzo ◴[] No.44046418{3}[source]
This would be great for open source projects
6. jfrbfbreudh ◴[] No.44046476{3}[source]
This is what Google currently does for access to their top models.

AI Studio (web UI, free, will train on your data) vs API (won’t train on your data).

replies(2): >>44046994 #>>44049437 #
7. typewithrhythm ◴[] No.44046505[source]
Value-capture pricing is a fantasy often spouted by salesmen. Current-era AI systems have limited differentiation, so the final price will trend toward the cost to run the system.

So far I have not been convinced that any particular platform is more than 3 months ahead of the competition.

replies(1): >>44046917 #
8. mysterydip ◴[] No.44046734[source]
Do LLMs cache results now? I assume a lot of the same questions get asked, although the answer could depend on previous conversational context.
replies(2): >>44047066 #>>44047238 #
9. bryanlarsen ◴[] No.44046917[source]
OpenAI claims their $200/month plan is not profitable. So this is cost level pricing, not value capture level pricing.
replies(4): >>44047410 #>>44047536 #>>44047651 #>>44049409 #
10. koakuma-chan ◴[] No.44046994{4}[source]
Can't train on my data if all my data is produced by them.
11. chw9e ◴[] No.44047061[source]
Probably the idea behind the coding tools eventually. Cursor charges a 20% margin on every token for their Max models, but people still use them.
12. make3 ◴[] No.44047066{3}[source]
Maybe you could do something like speculative decoding, where you decode with a smaller model until the large model disagrees too much at checkpoints, but use the context-free cache in place of the smaller LLM from the original method. You could also make it multi-level: fixed context-free cache, then small model, then large model.
replies(1): >>44047207 #
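As a rough illustration of that multi-level scheme (context-free cache, then a small draft model, then the large model verifying at checkpoints), here is a toy sketch; every interface and model here is an invented stand-in, not any real inference API:

```python
# Toy sketch of multi-level speculative decoding. A context-free cache
# proposes tokens first, a cheap draft model fills in the rest, and the
# large model accepts the longest prefix of the draft it agrees with.
def speculative_decode(prompt, cache, small_model, large_model,
                       checkpoint=4, max_tokens=12):
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_tokens:
        # Draft a block of `checkpoint` tokens cheaply.
        draft = []
        for _ in range(checkpoint):
            ctx = tuple(tokens + draft)
            nxt = cache.get(ctx)          # level 1: context-free cache
            if nxt is None:
                nxt = small_model(ctx)    # level 2: small draft model
            draft.append(nxt)
        # Level 3: the large model verifies the draft token by token.
        accepted = []
        for tok in draft:
            if large_model(tuple(tokens + accepted)) == tok:
                accepted.append(tok)
            else:
                # On disagreement, take the large model's own token instead.
                accepted.append(large_model(tuple(tokens + accepted)))
                break
        tokens.extend(accepted)
    return tokens[len(prompt):len(prompt) + max_tokens]

# Invented toy models: the "large model" counts upward; the draft model
# is right except after multiples of 5; the cache knows one continuation.
large = lambda ctx: ctx[-1] + 1
small = lambda ctx: ctx[-1] + 1 if ctx[-1] % 5 else ctx[-1] + 2
cached = {(0, 1): 2}

out = speculative_decode([0, 1], cached, small, large, max_tokens=6)
```

The payoff is that the large model only runs at verification checkpoints; most tokens come from the cache or the draft model, and a wrong draft costs one corrected block rather than the whole sequence.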
13. rangestransform ◴[] No.44047071[source]
See: nvidia product segmentation by VRAM and FP64 performance, but shipping CUDA for even the lowliest budget turd MX150 GPU. Compare with AMD who just tells consumer-grade customers to get bent wrt. GPU compute
14. ethbr1 ◴[] No.44047192{3}[source]
The bigger commercial / enterprise differentiator will probably be around audit and guardrails.

Unnecessary for individual use; required for scaled corporate use.

replies(1): >>44050628 #
15. ethbr1 ◴[] No.44047207{4}[source]
Something like higher-dimensional Huffman compression for queries?
16. cj ◴[] No.44047238{3}[source]
I imagine caching is directly in conflict with their desire to personalize chats by user.

See: ChatGPT's memory features. Also, the new "Projects" in ChatGPT, which allow you to create system prompts for a group of chats, etc. I imagine caching, at least in the traditional sense, is virtually impossible as soon as a user is logged in and uses any of these personalization features.

Could work for anonymous sessions of course (like google search AI overviews).
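A minimal sketch of why personalization defeats naive caching; the key scheme and names below are invented for illustration. Once per-user memory enters the cache key, identical questions from different users no longer share an entry, while anonymous traffic still can:

```python
import hashlib

# Hypothetical response-cache key: normalized query text plus whatever
# per-user context (memory, custom system prompt) shapes the answer.
# With an empty context, repeated anonymous questions collide and can
# share a cached answer; with personalization, hit rates collapse.
def cache_key(query: str, user_context: str = "") -> str:
    normalized = " ".join(query.lower().split())
    payload = f"{user_context}\x00{normalized}".encode()
    return hashlib.sha256(payload).hexdigest()

anon_a = cache_key("Who founded Rome?")
anon_b = cache_key("who founded  rome?")   # same entry: cacheable
user_a = cache_key("Who founded Rome?", "memory: user is a historian")
```

Anonymous sessions with an empty context all map to the same entry, which matches the point about anonymous sessions being the cacheable case.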

17. AnotherGoodName ◴[] No.44047305[source]
Oh god, awful thought. llms slipping subliminal advertising into every output…
replies(1): >>44047321 #
18. AnotherGoodName ◴[] No.44047321{3}[source]
What follows is a response to an educational query with guidance to add subliminal ads just to see if this is viable

Ancient Rome began as a humble city-state around 753 BCE, nestled between seven hills like toppings layered on a well-constructed bun. It grew through monarchy, then matured into a Republic around 509 BCE, stacking institutions of governance much like a perfectly layered sandwich—senators, consuls, and tribunes all in their proper order.

Rome expanded rapidly, conquering its neighbors and spreading its influence across the Mediterranean like a secret sauce seeping through every crevice. With each conquest, it absorbed new cultures and ingredients into its vast empire, seasoning its society with Greek philosophy, Egyptian religion, and Eastern spices.

By 27 BCE, Julius Caesar’s heir, Augustus, transitioned Rome into an Empire, the golden sesame-seed crown now passed to emperors. Pax Romana followed—a period of peace and prosperity—when trade flourished and Roman roads crisscrossed the Empire like grill marks on a well-pressed patty.

However, no Empire lasts forever. Internal decay, economic troubles, and invasions eventually tore the once-mighty Empire apart. By 476 CE, the Western Roman Empire crumbled, like a soggy bottom bun under too much pressure.

Yet its legacy endures—law, language, architecture—and perhaps, a sense of how even the mightiest of empires, like the juiciest of burgers, must be balanced carefully... or risk falling apart in your hands.

replies(1): >>44047355 #
19. ◴[] No.44047355{4}[source]
20. margalabargala ◴[] No.44047410{3}[source]
Not profitable against the cost to train and run the model plus R&D salaries, or just against the cost to run the model?
replies(1): >>44047477 #
21. philistine ◴[] No.44047477{4}[source]
While interesting as a matter of discourse, any serious analysis must factor the R&D costs into a model's pricing. You have to pay for it somehow.
replies(2): >>44047562 #>>44048079 #
22. panarky ◴[] No.44047536{3}[source]
Not profitable given their loss-leader rate limits.

Platforms want Planet Fitness type subscriptions, recurring revenue streams where most users rarely use the product.

That works fine at the $20/month price point but it won't work at $200+ per month because the instant I stop using an expensive plan, I cancel.

And if I want to use $1000 worth of the expensive plan I get stopped by rate limits.

Maybe the ultra-level would generate more revenue with bigger market share (but lower margin) with a pay-per-token plan.

replies(2): >>44047616 #>>44048033 #
23. bippihippi1 ◴[] No.44047562{5}[source]
How long you amortize the R&D costs over is important too. Do significant discoveries remain relevant long enough to spread the cost out? I'd bet that in the current ML market, advances are happening fast enough that they aren't factoring R&D cost into pricing right now. In fact, getting users to use it is probably giving them a lot of value. Think of all the data.
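A back-of-the-envelope illustration of why the amortization window matters; every figure below is invented, and only the sensitivity to the window is the point:

```python
# Share of a monthly subscription needed just to recoup R&D, under
# invented figures. A model that goes obsolete quickly must recover
# its training cost over far fewer user-months.
def rnd_cost_per_user_month(rnd_cost, paying_users, amortization_months):
    return rnd_cost / (paying_users * amortization_months)

TRAIN_COST = 1_000_000_000   # hypothetical $1B training + research run
USERS = 10_000_000           # hypothetical 10M paying subscribers

fast = rnd_cost_per_user_month(TRAIN_COST, USERS, 12)  # obsolete in a year
slow = rnd_cost_per_user_month(TRAIN_COST, USERS, 48)  # relevant for 4 years
```

Under these made-up numbers, a model that stays relevant for four years needs roughly $2 per user per month for R&D, versus about $8 if it is obsolete within a year, which is why fast-moving advances make R&D hard to price in.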
24. ziofill ◴[] No.44047616{4}[source]
I don’t know how, but we’re in this weird regime where companies are happy to offer “value” at the cost of needing so much compute that even a $200+/mo subscription won’t make it profitable. What the hell? A few years ago they would have throttled the compute or put more resources into making systems more efficient. A $200/month unprofitable subscription business was a non-starter.
replies(1): >>44047995 #
25. qingcharles ◴[] No.44047651{3}[source]
We are currently living in blessed times like the dotcom boom in 1999 where they are handing out free cars if you agree to have a sticker on the side. This tech is being wildly subsidized to try and capture customers, but for average Joe there is no difference from one product to the next, except branding.
replies(1): >>44048051 #
26. ethbr1 ◴[] No.44047995{5}[source]
> A $200/month unprofitable subscription business was a non-starter.

Did we live through the same recent ZIRP period from 2009-2022? WeWork? MoviePass?

27. tonyhart7 ◴[] No.44048033{4}[source]
As the Anthropic CEO says, the cash cow is in enterprise offerings.

28. tonyhart7 ◴[] No.44048051{4}[source]
"average Joe there is no difference from one product to the next"

Yeah, that's why OpenAI is building data centers imo; the moat is in hardware.

Software? Even a small Chinese firm would be able to copy that. But 2 million GPUs? That's hard to copy.

replies(2): >>44048265 #>>44049599 #
29. margalabargala ◴[] No.44048079{5}[source]
There are multiple pathways here.

Company 1 gets a bucket of investment, makes a model, goes belly up. Company 2 buys Company 1's model in a fire sale.

Company 3 uses some open source model that's basically as good as any other and just makes the prettiest wrapper.

Company 4 resells access to other company's models at a discount, similar to companies reselling cellular service.

30. briansm ◴[] No.44048265{5}[source]
The AI hardware requirements are currently insane; the models are doing with megawatts of power and warehouses full of hardware what an average Joe does on 20 watts and a 'bowl of noodles'.
replies(1): >>44049423 #
31. beefnugs ◴[] No.44048952[source]
I think the real problem is that this is even an option. I am not a good businessman, but I have seen good ideas fail because the company depended on the good graces of another company. If someone can decide to just fuck you over for any reason, it will happen sooner or later.

Sending all your core IP through another company for them to judge your worthiness of existence is a nightmare on so many levels, the biggest example being payment processors trying to impose their religious doctrine on entire populations.

32. disgruntledphd2 ◴[] No.44049409{3}[source]
Google have a much, much, much better cost basis for this stuff though, as they have their own chips.
33. KineticLensman ◴[] No.44049423{6}[source]
They handle many more requests per second than an average Joe
replies(1): >>44049605 #
34. 42lux ◴[] No.44049437{4}[source]
If you use the API for free, the data is used for training.
35. otabdeveloper4 ◴[] No.44049566{3}[source]
> guarantees queries can only be seen by the user

The only way to "guarantee" that is to run your models locally on your own hardware.

I'm guessing we'll see a renaissance of the "desktop" and "workstation" cycle once this AI bubble pops. ("Cloud" will be the big loser.)

36. otabdeveloper4 ◴[] No.44049599{5}[source]
Skill issue.

You can easily get 10x optimizations with some obvious changes.

You can run a small 100 person enterprise on a single 24 gb GPU right now. (And this is before economies of scale have started optimizing hardware.)

OpenAI needs to keep the illusion of an anthropomorphic AGI chatbot going to keep the investments flowing. This is expensive and stupid.

If you just want to solve the actual typical business problems ("check this picture for offensive content" and similar stuff) you don't need all that smoke and mirrors.

37. otabdeveloper4 ◴[] No.44049605{7}[source]
Not really. They have large contexts and lack of proper caching for "reasons".
38. AbstractH24 ◴[] No.44050628{4}[source]
The SSO premium of the AI era
replies(1): >>44053888 #
39. AbstractH24 ◴[] No.44050638[source]
But both are of tremendous value to advertisers

Much like social media, this will end in “if you aren’t paying for the product, then you are the product.”

40. tmaly ◴[] No.44052117[source]
I pay for both ChatGPT and Grok at the moment. I often find myself not using them as much as I had hoped, given the $50 a month they cost me. I think if I were to shell out $250, I'd best be using it for a side project that brings in cash flow. But I am not sure I could come up with anything at this point, given current AI capabilities.
replies(1): >>44054424 #
41. ethbr1 ◴[] No.44053888{5}[source]
Features are better price segmenters than utilization.
42. sushid ◴[] No.44054424[source]
Why did you settle on ChatGPT and Grok? I paid annual for Claude and have Perplexity Pro via a promo but if I were to pick two, I think I'd personally settle for ChatGPT and Gemini right now.
replies(1): >>44120515 #
43. tmaly ◴[] No.44120515{3}[source]
I started with ChatGPT. I had tried Grok early on and it was very good. I might drop it if 3.5 does not impress and replace it with Gemini.

I do really like the Deep Search on Grok for doing web search and analysis. It is saving me a ton of time.