    Google AI Ultra

    (blog.google)
    320 points mfiguiere | 16 comments
    charles_f ◴[] No.44045393[source]
    This is the kind of pricing that I expect most AI companies are going to push for, and it might get even more expensive with time. When you see the delta between what OpenAI is currently burning and what they bring home, the sweet spot is going to be hard to find.

    Whether you find that you get $250 worth out of that subscription is going to be the big question.

    replies(5): >>44045528 #>>44045820 #>>44045959 #>>44046010 #>>44058223 #
    Ancapistani ◴[] No.44045528[source]
    I agree, and the problem is that "value" != "utilization".

    It costs the provider the same whether the user is asking for advice on changing a recipe or building a comprehensive project plan for a major software product - but the latter provides much more value than the former.

    How can you extract an optimal price from the high-value use cases without making it prohibitively expensive for the low-value ones?

    Worse, the "low-value" use cases likely influence public perception a great deal. If you drive the general public off your platform in an attempt to extract value from the professionals, your platform may never grow to the point that the professionals hear about it in the first place.

    replies(6): >>44045906 #>>44045964 #>>44046505 #>>44047071 #>>44050638 #>>44052117 #
    typewithrhythm ◴[] No.44046505[source]
    Value-capture pricing is a fantasy often spouted by salesmen. Current-era AI systems have limited differentiation, so the final price will trend towards the cost of running the system.

    So far I have not been convinced that any particular platform is more than 3 months ahead of the competition.

    replies(1): >>44046917 #
    1. bryanlarsen ◴[] No.44046917[source]
    OpenAI claims their $200/month plan is not profitable. So this is cost level pricing, not value capture level pricing.
    replies(4): >>44047410 #>>44047536 #>>44047651 #>>44049409 #
    2. margalabargala ◴[] No.44047410[source]
    Not profitable against the cost to train and run the model plus R&D salaries, or just against the cost to run the model?
    replies(1): >>44047477 #
    3. philistine ◴[] No.44047477[source]
    While interesting as a matter of discourse, any serious pricing of a model must account for R&D costs. You have to pay for them somehow.
    replies(2): >>44047562 #>>44048079 #
    4. panarky ◴[] No.44047536[source]
    Not profitable given their loss-leader rate limits.

    Platforms want Planet Fitness type subscriptions, recurring revenue streams where most users rarely use the product.

    That works fine at the $20/month price point but it won't work at $200+ per month because the instant I stop using an expensive plan, I cancel.

    And if I want to use $1000 worth of the expensive plan I get stopped by rate limits.

    Maybe the ultra tier would generate more revenue, with bigger market share (but lower margin), as a pay-per-token plan.

    replies(2): >>44047616 #>>44048033 #
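The subscription-vs-metered trade-off above can be made concrete with a back-of-the-envelope sketch. All numbers below (user counts, prices, token volumes) are hypothetical, chosen only to illustrate the shape of the argument, not taken from any provider's actual figures:

```python
def subscription_revenue(subscribers: int, price_per_month: float) -> float:
    """Flat-rate plan: revenue is independent of usage (until churn kicks in)."""
    return subscribers * price_per_month

def per_token_revenue(users: int, avg_tokens_per_month: float,
                      price_per_1k_tokens: float) -> float:
    """Metered plan: revenue scales with actual usage, no rate-limit cap."""
    return users * (avg_tokens_per_month / 1000) * price_per_1k_tokens

# 10,000 subscribers on a flat $200/mo plan:
flat = subscription_revenue(10_000, 200.0)

# vs. a larger metered base where heavy users simply pay for what they use:
# 50,000 users averaging 2M tokens/mo at a hypothetical $0.03 per 1k tokens.
metered = per_token_revenue(50_000, 2_000_000, 0.03)

print(f"flat: ${flat:,.0f}/mo  metered: ${metered:,.0f}/mo")
```

Under these made-up assumptions the metered plan out-earns the flat plan ($3M vs $2M per month) despite a lower effective price per heavy user, which is the commenter's point: rate limits cap exactly the customers who would pay the most.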
    5. bippihippi1 ◴[] No.44047562{3}[source]
    How long you amortize the R&D costs over is important too. Do significant discoveries remain relevant long enough to spread the cost out? I'd bet that in the current ML market, advances are happening fast enough that they aren't factoring R&D costs into pricing right now. In fact, getting users to use it is probably giving them a lot of value. Think of all the data.
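The amortization-window point is easy to quantify. A straight-line sketch with entirely hypothetical figures ($1B one-off R&D/training spend, 1M paying users) shows how much the window alone moves the per-user cost:

```python
def rnd_cost_per_user_month(rnd_cost: float, useful_life_months: int,
                            paying_users: int) -> float:
    """Straight-line amortization of a one-off R&D/training cost
    across every paying user over the model's useful life."""
    return rnd_cost / (useful_life_months * paying_users)

# Same $1B spend, same 1M users -- only the useful life changes:
two_years = rnd_cost_per_user_month(1e9, 24, 1_000_000)   # ~$41.67/user/mo
six_months = rnd_cost_per_user_month(1e9, 6, 1_000_000)   # ~$166.67/user/mo
print(f"24-month life: ${two_years:.2f}  6-month life: ${six_months:.2f}")
```

If a frontier model is obsolete in six months instead of two years, the R&D burden per subscription quadruples, which is why fast-moving model generations make R&D hard to price in at all.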
    6. ziofill ◴[] No.44047616[source]
    I don’t know how, but we’re in this weird regime where companies are happy to offer “value” at the cost of so much compute that a $200+/mo subscription still won’t make it profitable. What the hell? A few years ago they would have throttled the compute or put more resources into making systems more efficient. A $200/month unprofitable subscription business was a non-starter.
    replies(1): >>44047995 #
    7. qingcharles ◴[] No.44047651[source]
    We are currently living in blessed times, like the dotcom boom in 1999, when companies handed out free cars if you agreed to have a sticker on the side. This tech is being wildly subsidized to try to capture customers, but for the average Joe there is no difference between one product and the next, except branding.
    replies(1): >>44048051 #
    8. ethbr1 ◴[] No.44047995{3}[source]
    > A $200/month unprofitable subscription business was a non-starter.

    Did we live through the same recent ZIRP period from 2009-2022? WeWork? MoviePass?

    9. tonyhart7 ◴[] No.44048033[source]
    As the Anthropic CEO says:

    the cash cow is the enterprise offering.

    10. tonyhart7 ◴[] No.44048051[source]
    "average Joe there is no difference from one product to the next"

    Yeah that's why OpenAI build an data center imo, the moat is on hardware

    software ??? even small chinnese firm would able to copy that, but 2 million gpu ???? its hard to copy that

    replies(2): >>44048265 #>>44049599 #
    11. margalabargala ◴[] No.44048079{3}[source]
    There are multiple pathways here.

    Company 1 gets a bucket of investment, makes a model, goes belly up. Company 2 buys Company 1's model in a fire sale.

    Company 3 uses some open source model that's basically as good as any other and just makes the prettiest wrapper.

    Company 4 resells access to other company's models at a discount, similar to companies reselling cellular service.

    12. briansm ◴[] No.44048265{3}[source]
    The AI hardware requirements are currently insane; the models are doing with megawatts of power and warehouses full of hardware what an average Joe does with 20 watts and a 'bowl of noodles'.
    replies(1): >>44049423 #
    13. disgruntledphd2 ◴[] No.44049409[source]
    Google have a much, much, much better cost basis for this stuff though, as they have their own chips.
    14. KineticLensman ◴[] No.44049423{4}[source]
    They handle many more requests per second than an average Joe
    replies(1): >>44049605 #
    15. otabdeveloper4 ◴[] No.44049599{3}[source]
    Skill issue.

    You can easily get 10x optimizations with some obvious changes.

    You can run a small 100-person enterprise on a single 24 GB GPU right now. (And this is before economies of scale have started optimizing hardware.)

    OpenAI needs to keep the illusion of an anthropomorphic AGI chatbot going to keep the investments flowing. This is expensive and stupid.

    If you just want to solve typical business problems ("check this picture for offensive content" and similar stuff), you don't need all that smoke and mirrors.
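A rough capacity check makes the single-GPU claim at least plausible on paper. The throughput figure below is an assumption (a small quantized model serving on the order of 1,000 tokens/sec aggregate), as are the usage numbers; the point is only the order-of-magnitude comparison:

```python
# Hypothetical figures for a small office served by one local model.
tokens_per_second = 1_000            # assumed aggregate throughput of a small model
employees = 100
requests_per_employee_per_hour = 10  # assumed usage pattern
tokens_per_request = 500             # prompt + completion, assumed average

demand_per_hour = employees * requests_per_employee_per_hour * tokens_per_request
capacity_per_hour = tokens_per_second * 3600

print(f"demand: {demand_per_hour:,} tok/h  capacity: {capacity_per_hour:,} tok/h")
print(f"headroom: {capacity_per_hour / demand_per_hour:.1f}x")
```

Under these assumptions demand is 500k tokens/hour against 3.6M tokens/hour of capacity, roughly 7x headroom, though bursty traffic and long contexts would eat into that quickly.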

    16. otabdeveloper4 ◴[] No.44049605{5}[source]
    Not really. They have large contexts and a lack of proper caching, for "reasons".