Whether you find that you get $250 worth out of that subscription is going to be the big question
It costs the provider the same whether the user is asking for advice on changing a recipe or building a comprehensive project plan for a major software product - but the latter provides much more value than the former.
How can you extract an optimal price from the high-value use cases without making it prohibitively expensive for the low-value ones?
Worse, the "low-value" use cases likely influence public perception a great deal. If you drive the general public off your platform in an attempt to extract value from the professionals, your platform may never grow to the point that the professionals hear about it in the first place.
So far I have not been convinced that any particular platform is more than 3 months ahead of the competition.
Platforms want Planet Fitness type subscriptions, recurring revenue streams where most users rarely use the product.
That works fine at the $20/month price point but it won't work at $200+ per month because the instant I stop using an expensive plan, I cancel.
And if I want to use $1000 worth of the expensive plan I get stopped by rate limits.
Maybe the ultra tier would generate more revenue, via a bigger market share (at lower margin), as a pay-per-token plan instead.
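The flat-rate vs. pay-per-token tradeoff comes down to a break-even volume. A minimal sketch, with purely illustrative prices (neither number is any provider's real rate):

```python
# Rough break-even sketch: flat subscription vs. pay-per-token.
# Both prices below are illustrative assumptions, not real rates.
FLAT_MONTHLY = 200.00        # assumed $200/month subscription
PER_MILLION_TOKENS = 10.00   # assumed blended $/1M tokens pay-per-use

def cheaper_plan(tokens_per_month: int) -> str:
    """Return which plan is cheaper for a given monthly token volume."""
    pay_per_use_cost = tokens_per_month / 1_000_000 * PER_MILLION_TOKENS
    return "subscription" if pay_per_use_cost > FLAT_MONTHLY else "pay-per-token"

# Volume at which the two plans cost the same:
break_even_tokens = int(FLAT_MONTHLY / PER_MILLION_TOKENS * 1_000_000)
print(break_even_tokens)         # 20,000,000 tokens/month under these assumptions
print(cheaper_plan(5_000_000))   # light user
print(cheaper_plan(50_000_000))  # heavy user
```

Under these made-up numbers, anyone below ~20M tokens/month is subsidizing the heavy users, and anyone above it hits exactly the rate-limit problem described: the provider loses money unless it throttles them.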
Yeah, that's why OpenAI builds data centers, imo; the moat is in hardware.
Software? Even a small Chinese firm could copy that. But 2 million GPUs? That's hard to copy.
Company 1 gets a bucket of investment, makes a model, goes belly up. Company 2 buys Company 1's model in a fire sale.
Company 3 uses some open source model that's basically as good as any other and just makes the prettiest wrapper.
Company 4 resells access to other company's models at a discount, similar to companies reselling cellular service.
You can easily get 10x optimizations with some obvious changes.
You can run a small 100-person enterprise on a single 24 GB GPU right now. (And this is before economies of scale have started optimizing hardware.)
OpenAI needs to keep the illusion of an anthropomorphic AGI chatbot going to keep the investments flowing. This is expensive and stupid.
If you just want to solve the actual typical business problems ("check this picture for offensive content" and similar stuff) you don't need all that smoke and mirrors.