
Google AI Ultra

(blog.google)
320 points | mfiguiere | 1 comment
OtherShrezzing ◴[] No.44045678[source]
The global average salary is somewhere in the region of $1500.

There are lots of people and companies out there with $250 to spend on these subscriptions per seat, but on a global scale (where Google operates), these are pretty niche markets being targeted. That doesn’t align well with the multiple trillions of dollars in increased market cap we’ve seen over the last few years at Google, Nvidia, Microsoft, etc.

replies(5): >>44045733 #>>44045785 #>>44045794 #>>44046852 #>>44047835 #
paxys ◴[] No.44045733[source]
New technology always starts off available to the elite and then slowly makes its way down to everyone. AI is no different.
replies(2): >>44045777 #>>44047505 #
dimitrios1 ◴[] No.44045777[source]
This is one of those assumed truisms that turns out to be false upon close scrutiny, and there's a bit of survivorship bias here: we tend to look only at the technologies that had mass appeal and market forces pushing them to become cheaper and available to all. But there's tons of new tech that's effectively unobtainable to the vast majority of populations, heck, even to nation states. With the current prohibitive costs of training these next-generation models (processing power, energy, data centers) and the walled gardens that have been erected, there's no reason to believe the good stuff is going to get cheaper anytime soon, in my opinion.
replies(3): >>44045793 #>>44046323 #>>44046326 #
sxg ◴[] No.44046323{3}[source]
I disagree. There are massive fixed costs to developing LLMs that are best amortized over a massive number of users. So there's an incentive to make LLMs as cheap and accessible as possible in order to recoup those fixed costs.

Yes, there are also high variable costs involved, so there’s also a floor to how cheap they can get today. However, hardware will continue to get cheaper and more powerful while users can still massively benefit from the current generation of LLMs. So it is possible for these products to become overall cheaper and more accessible using low-end future hardware with current generation LLMs. I think Llama 4 running on a future RTX 7060 in 2029 could be served at a pretty low cost while still providing a ton of value for most users.
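The amortization argument above can be made concrete with a toy cost model. All the numbers below are hypothetical, chosen only to illustrate the shape of the curve: a fixed training cost spread over a user base plus a per-user serving cost, showing how per-user cost approaches the variable-cost floor as the user count grows.

```python
# Toy amortization model: per-user monthly cost of offering an LLM service.
# All figures are hypothetical and for illustration only.

def per_user_cost(fixed_training_cost, amortization_months, users,
                  variable_cost_per_user):
    """Fixed cost share per user per month, plus per-user serving cost."""
    fixed_share = fixed_training_cost / (amortization_months * users)
    return fixed_share + variable_cost_per_user

# Assume a $100M training run amortized over 24 months,
# and $2/user/month in variable serving (inference) cost.
for users in (100_000, 1_000_000, 10_000_000):
    cost = per_user_cost(100e6, 24, users, 2.0)
    print(f"{users:>10,} users -> ${cost:,.2f}/user/month")
```

Under these assumed numbers, the per-user cost falls from about $43.67 at 100k users to about $2.42 at 10M users: the fixed cost nearly vanishes in the average, and the variable serving cost becomes the floor, which is exactly the floor the comment says cheaper future hardware would lower.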