
321 points jhunter1016 | 107 comments
1. Roark66 ◴[] No.41878594[source]
>OpenAI plans to lose $5 billion this year

Let that sink in for anyone who has incorporated ChatGPT into their work routines to the point that their normal skills start to atrophy. Imagine in 2 years' time OpenAI goes bust and MS gets all the IP. Now you can't really do your work without ChatGPT, but its price has been brought up to what it really costs to run. Maybe $2k per month per person? And you get about 1h of use per day for that money, too...

I've been saying for ages that being a Luddite and abstaining from using AI is not the answer (no one is tilling the fields with oxen anymore either). But it is crucial to at the very least retain locally 50% of the capability that hosted models like ChatGPT offer.

replies(20): >>41878631 #>>41878635 #>>41878683 #>>41878699 #>>41878717 #>>41878719 #>>41878725 #>>41878727 #>>41878813 #>>41878824 #>>41878984 #>>41880860 #>>41880934 #>>41881556 #>>41881938 #>>41882059 #>>41883046 #>>41883088 #>>41883171 #>>41885425 #
2. switch007 ◴[] No.41878631[source]
$2k is way way cheaper than a junior developer which, if I had to guess their thinking, is who the Thought Leaders think it'll replace.

Our Thought Leaders think like that at least. They also pretty much told us to use AI or get fired

replies(5): >>41879067 #>>41880494 #>>41880811 #>>41882314 #>>41901613 #
3. hggigg ◴[] No.41878635[source]
I think this is the wrong way to think about it.

It's more important to find a problem and see if this is a fit for the solution, not throw the technology at everything and see if it sticks.

I have had no needs where it's an appropriate solution myself. In some areas it represents a net risk.

4. hmottestad ◴[] No.41878683[source]
Cost tends to go down with time as compute becomes cheaper. And as long as there is competition in the AI space it's likely that other companies would step in and fill the void created by OpenAI going belly up.
replies(2): >>41878721 #>>41878929 #
5. jdmoreira ◴[] No.41878699[source]
Skills that will atrophy? People learnt those skills the hard way the first time around, do you really think they can't be sharpened again?

This perspective makes zero sense.

What makes sense is to extract as much value as possible as soon as possible and for as long as possible.

6. chrsw ◴[] No.41878717[source]
What if your competition is willing to give up autonomy to companies like Microsoft/OpenAI as a bet to race ahead of you, and it comes off?
replies(1): >>41882381 #
7. sebzim4500 ◴[] No.41878719[source]
The marginal cost of inference per token is lower than what OpenAI charges you (IIRC about 2x cheaper), they make a loss because of the enormous costs of R&D and training new models.
replies(4): >>41878823 #>>41878875 #>>41878927 #>>41879029 #
8. infecto ◴[] No.41878721[source]
I tend to think along the same lines. If they were the only player in town it would be different. I am also not convinced $5 billion is that big of a deal for them; it would be interesting to see their modeling, but it would be a lot more suspect if they were raising money and increasing the price of the product. Also curious how much of that spend is R&D compared to running the system.
9. bbarnett ◴[] No.41878725[source]
The cost of current compute for current versions of ChatGPT will have dropped through the floor in 2 years, due to processing improvements and on-die improvements to silicon.

Power requirements will drop too.

As well, as people adopt, training costs will be amortized over an ever-increasing market of licensing sales.

Looking at costs and sales today, in a massively and rapidly expanding market, is not how to assess costs tomorrow.

I will say one thing: those who need GPT to code will be the first to go. Becoming a click-click, just passing on ChatGPT output, will relegate those people to minimum wage.

We already have some of this sort, those that cannot write a loop in their primary coding language without stackoverflow, or those that need an IDE to fill in correct function usage.

Those who code in vi, while reading manpages need not worry.

replies(3): >>41878943 #>>41878973 #>>41879525 #
10. singularity2001 ◴[] No.41878727[source]
people kept whining about Amazon losing money and called me stupid for buying their stock...
replies(5): >>41878830 #>>41878877 #>>41881679 #>>41882287 #>>41882942 #
11. bmitc ◴[] No.41878813[source]
Fine with me. I've even considered turning off Copilot completely because I use it less and less.
replies(1): >>41887468 #
12. InkCanon ◴[] No.41878824[source]
I would just switch to Claude of Mistral like I already do. I really feel little difference between them
replies(1): >>41880782 #
13. diggan ◴[] No.41878823[source]
Did OpenAI publish concrete numbers regarding this, or where are you getting this data from?
replies(1): >>41881067 #
14. bmitc ◴[] No.41878830[source]
Why does everyone always like to compare every company to Amazon? Those companies are never like Amazon, which is one of the most entrenched companies ever.
replies(1): >>41878888 #
15. ignoramous ◴[] No.41878875[source]
> The marginal cost of inference per token is lower than what OpenAI charges you

Unlike most Gen AI shops, OpenAI also incurs a heavy cost for training base models gunning for SoTA, which involves drawing power from a literal nuclear reactor inside data centers.

replies(2): >>41878936 #>>41878996 #
16. ben_w ◴[] No.41878877[source]
As I recall, while Amazon was doing this, there was no comparable competition from other vendors that properly understood the internet as a marketplace? Closest was eBay?

There is real competition now that plenty of big box stores' websites also list things you won't see in the stores themselves*, but then Amazon is also making a profit now.

I think the current situation with LLMs is a dollar auction, where everyone is incentivised to pay increasing costs to outbid the others, even though this has gone from "maximise reward" to "minimise losses": https://en.wikipedia.org/wiki/Dollar_auction

* One of my local supermarkets in Germany sells 4-room "garden sheds" that are substantially larger than the apartment I own in the UK: https://www.kaufland.de/product/396861369/

17. ben_w ◴[] No.41878888{3}[source]
While I agree the comparison is not going to provide useful insights, in fairness to them Amazon wasn't entrenched at the time they were making huge losses each year.
18. tempusalaria ◴[] No.41878927[source]
It’s not clear this is true because reported numbers don’t disaggregate paid subscription revenue (certainly massively GP positive) vs free usage (certainly negative) vs API revenue (probably GP negative).

Most of their revenue is the subscription stuff, which makes it highly likely they lose money per token on the API (not surprising, as they are in a price war with Google et al).

If you have an enterprise ChatGPT sub, you have to consume around 5M tokens a month to match the cost of using the API on GPT-4o. At 100 words per minute that's 35 days of continuous typing, which shows how lopsided the API cost is versus the subscription.
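A quick sketch of that arithmetic. The seat price and the one-token-per-word equivalence are rough assumptions for illustration, not OpenAI's numbers; the $10/1M output-token rate is the public GPT-4o list price:

```python
# Back-of-envelope: tokens per month an enterprise ChatGPT seat must consume
# before the flat subscription beats paying per token on the API.

SUB_PRICE_PER_MONTH = 50.0   # assumed enterprise seat price, USD
API_PRICE_PER_MTOK = 10.0    # GPT-4o output tokens, USD per 1M
TYPING_WPM = 100             # fast typist
TOKENS_PER_WORD = 1.0        # crude: treat one token as roughly one word

breakeven_tokens = SUB_PRICE_PER_MONTH / API_PRICE_PER_MTOK * 1_000_000
minutes_of_typing = breakeven_tokens / TOKENS_PER_WORD / TYPING_WPM
days_of_typing = minutes_of_typing / (60 * 24)

print(f"break-even: {breakeven_tokens:,.0f} tokens/month")
print(f"~{days_of_typing:.0f} days of continuous typing")
```

With these inputs it lands on 5M tokens and roughly 35 days of non-stop typing, matching the figures above.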

replies(1): >>41881150 #
19. ToucanLoucan ◴[] No.41878929[source]
> Cost tends to go down with time as compute becomes cheaper.

This is generally true but seems to be, if anything, inverted for AI. These models cost billions to train in compute, and OpenAI thus far has needed to put out a brand new one roughly annually in order to stay relevant. This would be akin to Apple putting out a new iPhone that cost billions to engineer year over year, but giving the things away for free on the corner and only asking for money for the versions with more storage and what have you.

The vast majority of AI adjacent companies too are just repackaging OpenAI's LLMs, the exceptions being ones like Meta, which certainly has a more solid basis what with being tied to an incredibly profitable product in Facebook, but also... it's Meta and I'm sure as shit not using their AI for anything, because it's Meta.

I did some back-of-napkin math in a comment a ways back and landed on this: in order to break even merely on training costs, not including the rest of the expenditure of the company, they would need to charge all of their current subscribers $150 per month, up from... I think the most expensive right now is about $20? So nearly an 8-fold price increase, with no attrition, just to break even. And I'm guessing all these investors they've had are not interested in a zero sum.

replies(2): >>41881018 #>>41881375 #
20. fransje26 ◴[] No.41878936{3}[source]
> from a literal nuclear reactor inside data centers.

No.

replies(1): >>41880823 #
21. ben_w ◴[] No.41878943[source]
> We already have some of this sort, those that cannot write a loop in their primary coding language without stackoverflow, or those that need an IDE to fill in correct function usage.

> Those who code in vi, while reading manpages need not worry

I think that's the wrong dichotomy: LLMs are fine at turning man pages into working code. In huge codebases, LLMs do indeed lose track and make stuff up… but that's also where IDEs giving correct function usage is really useful for humans.

The way I think we're going to change, is that "LGTM" will no longer be sufficient depth of code review: LLMs can attend to more than we can, but they can't attend as well as we can.

And, of course, we will be getting a lot of LLM-generated code, and having to make sure that it really does what we want, without surprise side-effects.

22. nuancebydefault ◴[] No.41878973[source]
> Those who code in vi, while reading manpages need not worry.

That sounds silly at first read, but there are indeed people who are stubborn enough to still use numbered zip files on a USB flash drive instead of source control systems, or to prefer their own scheduler over an RTOS.

They will survive, they fill a niche, but I would not say they can do full stack development or be even easy to collaborate with.

23. whywhywhywhy ◴[] No.41878984[source]
I used to be concerned with this back when GPT4 originally came out and was way more impressive than the current version and OpenAI was the only game in town.

But nowadays GPT has been quantized and cost-optimized to the point that it's no longer as useful as it was, and with Claude or Gemini or whatever, it's no longer noticeably better than any of them, so it doesn't really matter what happens with their pricing.

replies(1): >>41879036 #
24. candiddevmike ◴[] No.41878996{3}[source]
> literal nuclear reactor inside data centers

This is fascinating to think about. Wonder what kind of shielding/environmental controls/all other kinds of changes you'd need for this to actually work. Would rack-sized SMR be contained enough not to impact anything? Would datacenter operators/workers need to follow NRC guidance?

replies(3): >>41880937 #>>41881030 #>>41882203 #
25. ◴[] No.41879029[source]
26. edg5000 ◴[] No.41879036[source]
Are you saying they reduced the quality of the model in order to save compute? Would it make sense for them to offer a premium version of the model at a very high price? At least offer it to those willing to pay?

It would not make sense to reduce output quality only to save on compute at inference, so why not offer a premium (and perhaps slower) tier?

Unless the cost is at training time, maybe it would not be cost-effective for them to keep a model like that up to date.

As you can tell I am a bit uninformed on the topic.

replies(1): >>41880783 #
27. ilrwbwrkhv ◴[] No.41879067[source]
Which thought leader is telling you to use AI or get fired?
replies(1): >>41879098 #
28. switch007 ◴[] No.41879098{3}[source]
My CTO (C level is automatically a Thought Leader)
29. whoisthemachine ◴[] No.41879525[source]
You had me until vi.
30. CamperBob2 ◴[] No.41880494[source]
It's premature to think you can replace a junior developer with current technology, but it seems fairly obvious that it'll be possible within 5-10 years at most. We're well past the proof-of-concept stage IMO, based on extensive (and growing) personal experience with ML-authored code. Anyone who argues that the traditional junior-developer role isn't about to change drastically is whistling past the graveyard.

Your C-suite execs are paid to skate where that particular puck is going. If they didn't, people would complain about their unhealthy fixation on the next quarter's revenue.

Of course, if the junior-developer role is on the chopping block, then more experienced developers will be next. Finally, the so-called "thought leaders" will find themselves outcompeted by AI. The ability to process very large amounts of data in real time, leveraging it to draw useful conclusions and make profitable predictions based on ridiculously-large historical models, is, again, already past the proof-of-concept stage.

replies(2): >>41881005 #>>41881490 #
31. mprev ◴[] No.41880782[source]
I like how your typo makes it sound like a medieval sage.
replies(1): >>41881379 #
32. bt1a ◴[] No.41880783{3}[source]
Yeah, as someone who had access to GPT-4 early in 2023: the endpoint used to take over a minute to respond, and the quality of the responses was mindblowing. Simply too expensive to serve at scale, not to mention the silicon constraints, which are even more prohibitive when the organization needs to lock up a lot of its compute for training The Next Big Model. That's a lot of compute that can't be on standby for serving inference.
33. srockets ◴[] No.41880811[source]
I found those tools to resemble an intern: they can do some tasks pretty well, when explained just right, but others you'd spend more time guiding than it would have taken you to do it yourself.

And rarely can you or the model/intern tell ahead of time which tasks are in each of those categories.

The difference is, interns grow and become useful in months: the current rate of improvements in those tools isn't even close to that of most interns.

replies(1): >>41881119 #
34. Tostino ◴[] No.41880823{4}[source]
Their username is fitting though.
replies(1): >>41881514 #
35. marcosdumay ◴[] No.41880860[source]
> being a Luddite and abstaining from using AI is not the answer

Hum... The jury is still out on that one, but the evidence is piling up toward "yes, not using it is what works best" here. Personally, my experience is strongly negative, and I've seen other people get very negative results from it too.

Maybe it will improve so much that at some point people actually get positive value from it. My best guess is that we are not there yet.

replies(3): >>41881661 #>>41882152 #>>41883280 #
36. ◴[] No.41880934[source]
37. talldayo ◴[] No.41880937{4}[source]
I think the simple answer is that it doesn't make sense. Nuclear power plants generate a byproduct that inherently limits the performance of computers: heat. Having a cooling system, reactor, or turbine located inside a datacenter is rendered pointless because you end up managing two competing thermal systems at once. There is no reason to localize a reactor inside a datacenter when you could locate it elsewhere and pipe the generated electricity in via preexisting high-voltage lines.
replies(1): >>41882190 #
38. actsasbuffoon ◴[] No.41881005{3}[source]
Unless I’ve missed some major development then I have to strenuously disagree. AI is primarily good at writing isolated scripts that are no more than a few pages long.

99% of the work I do happens in a large codebase, far bigger than anything that you can feed into an AI. Tickets come in that say something like, “Users should be able to select multiple receipts to associate with their reports so long as they have the management role.”

That ticket will involve digging through a whole bunch of files to figure out what needs to be done. The resolution will ultimately involve changes to multiple models, the database schema, a few controllers, a bunch of React components, and even a few changes in a micro service that’s not inside this repo. Then the AI is going to fail over and over again because it’s not familiar with the APIs for our internal libraries and tools, etc.

AI is useful, but I don’t feel like we’re any closer to replacing software developers now than we were a few years ago. All of the same showstoppers remain.

replies(3): >>41881127 #>>41881522 #>>41882188 #
39. Mistletoe ◴[] No.41881018{3}[source]
The closest analog seems to be bitcoin mining, which continually increases difficulty. And if you've ever researched how many bitcoin miners go under...
replies(1): >>41881146 #
40. ◴[] No.41881030{4}[source]
41. lukeschlather ◴[] No.41881067{3}[source]
https://news.ycombinator.com/item?id=41833287

This says 506 tokens/second for Llama 405B on a machine with 8x H200s, which you can rent for $4/GPU/hour, so probably $40/hour for a server with enough GPUs. So it can do ~1.8M tokens per hour. OpenAI charges $10/1M output tokens for GPT-4o. (Input tokens and cached tokens are cheaper, but these are just ballpark estimates.) So if it were 405B it might cost $20/1M output tokens.

Now, OpenAI is a little vague, but they have implied that GPT4o is actually only 60B-80B parameters. So they're probably selling it with a reasonable profit margin assuming it can do $5/1M output tokens at approximately 100B parameters.

And even if they were selling it at cost, I wouldn't be worried because a couple years from now Nvidia will release H300s that are at least 30% more efficient and that will cause a profit margin to materialize without raising prices. So if I have a use case that works with today's models, I will be able to rent the same thing a year or two from now for roughly the same price.
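Running that estimate end to end. The throughput and rental price are the assumed figures from the linked comment, and the ~100B parameter count for GPT-4o is speculation, not a disclosed number:

```python
# Rough serving cost per 1M output tokens, from the rented-node figures above.
TOKENS_PER_SECOND = 506      # reported Llama 405B throughput on an 8x H200 node
NODE_COST_PER_HOUR = 40.0    # $4/GPU/hr x 8 GPUs, rounded up for the host (assumed)

tokens_per_hour = TOKENS_PER_SECOND * 3600              # ~1.82M tokens/hr
cost_per_mtok_405b = NODE_COST_PER_HOUR / (tokens_per_hour / 1_000_000)

# If GPT-4o is really ~100B params, serving cost scales very roughly with size:
cost_per_mtok_100b = cost_per_mtok_405b * 100 / 405

print(f"405B: ~${cost_per_mtok_405b:.0f} per 1M output tokens")
print(f"100B: ~${cost_per_mtok_100b:.0f} per 1M output tokens")
```

That comes out to roughly $22/1M tokens for a 405B-class model and ~$5/1M for a ~100B one, which is where the "reasonable profit margin at $10/1M" guess comes from.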

42. luckydata ◴[] No.41881119{3}[source]
I have a slightly different view. IMHO LLMs are excellent rubber ducks or pair programmers. The rate at which I can try ideas and get them back is much higher than what I would be doing by myself. It gets me unstuck in places where I might have spent the best part of a day in the past.
replies(1): >>41881162 #
43. luckydata ◴[] No.41881127{4}[source]
Google's LLM can ingest humongous contexts. Check it out.
44. lukeschlather ◴[] No.41881146{4}[source]
It's nothing like bitcoin mining. Bitcoin mining is intentionally designed so that it gets harder as people use it more, no matter what.

With LLMs, if you have a use case which can run on an H100 or whatever and costs $4/hour, and the LLM has acceptable performance, it's going to be cheaper in a couple years.

Now, all these companies are improving their models but they're doing that in search of magical new applications the $4/hour model I'm using today can't do. If the $4/hour model works today, you don't have to worry about the cost going up. It will work at the same price or cheaper in the future.

replies(1): >>41881381 #
45. seizethecheese ◴[] No.41881150{3}[source]
In summary, the original point of this thread is wrong. There’s essentially no future where these tools disappear or become unavailable at reasonable cost for consumers. Much more likely is they get way better.
replies(2): >>41883125 #>>41884310 #
46. srockets ◴[] No.41881162{4}[source]
My experience differs: if anything, they get me unstuck by shoving bad ideas at me, which allows me to realize "oh, that's bad, let's not do that". But it's also extremely frustrating, because with a stream of bad ideas from a human there's some hope they'll learn, but here I know I'll get the same BS, only with annoying and inhumane apology boilerplate.
replies(1): >>41882159 #
47. authorfly ◴[] No.41881375{3}[source]
This reasoning about the subscription price etc is undermined by the actual prices OpenAI are charging -

The price of a model capable of 4o mini level performance used to be 100x higher.

Yes, literally 100x. The original "davinci model" (and I paid $5 figures for using it throughout 2021-2022) cost $0.06/1k tokens.

So running costs (which are the thing that will kill a company) are not inverting. Struggling with training costs (which is where you correctly identify OpenAI is spending) will perhaps stop growth, but won't kill you if you have to pull the plug.

I suspect subscription prices are based on market capture and perceived customer value, plus plans for training, not running costs.

replies(1): >>41887216 #
48. card_zero ◴[] No.41881379{3}[source]
Let me consult my tellingbone.
49. Mistletoe ◴[] No.41881381{5}[source]
But OpenAI has to keep releasing new ever-increasing models to justify it all. There is a reason they are talking about nuclear reactors and Sam needing 7 trillion dollars.

One other difference from Bitcoin is that the price of Bitcoin rises to make it all worth it, but we have the opposite expectation with AI where users will eventually need to pay much more than now to use it, but people only use it now because it is free or heavily subsidized. I agree that current models are pretty good and the price of those may go down with time but that should be even more concerning to OpenAI.

replies(1): >>41882295 #
50. l33t7332273 ◴[] No.41881490{3}[source]
You would think thought leaders would be the first to be replaced by AI.

> The ability to process very large amounts of data in real time, leveraging it to draw useful conclusions and make profitable predictions based on ridiculously-large historical models, is, again, already past the proof-of-concept stage.

[citation needed]

replies(1): >>41882645 #
51. ignoramous ◴[] No.41881514{5}[source]
Bully.

I wrote "inside" to mean that those mini reactors (300MW+) are meant to be used solely for the DCs.

(noun: https://www.collinsdictionary.com/dictionary/english-thesaur... / https://en.wikipedia.org/wiki/Heterosemy)

Replace it with "nearby" if that makes you feel better about anyone's username.

replies(1): >>41882539 #
52. CamperBob2 ◴[] No.41881522{4}[source]
All of the code you mention implements business logic, and you're right, it's probably not going to be practical to delegate maintenance of existing code to an ML model. What will happen, probably sooner than you think, is that that code will go away and be replaced by script(s) that describe the business logic in something close to declarative English. The AI model will then generate the code that implements the business logic, along with the necessary tests.

So when maintenance is required, it will be done by adding phrases like "Users should be able to select multiple receipts" to the existing script, and re-running it to regenerate the code from scratch.

Don't confuse the practical limitations of current models with conceptual ones. The latter exist, certainly, but they will either be overcome or worked around. People are just not as good at writing code as machines are, just as they are not as good at playing strategy games. The models will continue to improve, but we will not.

replies(1): >>41881899 #
53. righthand ◴[] No.41881556[source]
Being a Luddite has its advantages, as you won't succumb to the ills of a society trying to push you there. To believe that it's inevitable LLMs will be required for work is silly, in my opinion. As these corps eat more and more of the goodwill of the content on the internet for only their own gain, people will defect from it, and have already started to. Many of my coworkers have shut off Copilot, though they still occasionally use ChatGPT. But since the power really only lies in adding randomization to established working document templates, the gain is only a short amount of working time.

There are also active and passive efforts to poison the well. As LLMs are used to output more content and displace people, the LLMs will be trained on the limited regurgitation available to the public (passive). Then there are the people intentionally creating bad content to be ingested (active). It really is a loss for big-service LLM companies as local models become more and more good enough.

54. bigstrat2003 ◴[] No.41881661[source]
Yeah, I agree. It's not "being a Luddite" to take a look and conclude that the tool doesn't actually deliver the value it claims to. When AI can actually reliably do the things its proponents say it can do, I'll use it. But as of today it can't, and I have no use for tools that only work some of the time.
55. bigstrat2003 ◴[] No.41881679[source]
And for every Amazon, there are a hundred other companies that went out of business because they never could figure out how to turn a profit. You made a bet which paid off and that's cool, but that doesn't mean the people telling you it was a bad bet were wrong.
56. prewett ◴[] No.41881899{5}[source]
The problem is, the feature is never actually "users should be able to select multiple receipts". It's "users should be able to select multiple receipts, but not receipts for which they only have read access and not write access, and not when editing a receipt, and should persist when navigating between the paginated data but not persist if the user goes to a different 'page' within the webapp. The selection should be a thick border around the receipt, using the webapp selection color and the selection border thickness, except when using the low-bandwidth interface, in which case it should be a checkbox on the left (or on the right if the user is using a RTL language). Selection should adhere to standard semantics: shift selects all items from the last selection, ctrl/cmd toggles selection of that item, and clicking creates a new, one-receipt selection. ..." By the time you get all that, it's clearer in code.

I will observe that there have been at least three natural-language attempts in the past, none of which succeeded in being "just write it down". COBOL is just as code-y as any other programming language. SQL is similar, although I know a fair number of non-programmers who can write SQL (but then, back in the day my Mom taught me about autoexec.bat, and she couldn't care less about programming). Anyway, SQL is definitely not a matter of just adding phrases and having it work. Finally, Donald Knuth's WEB is a mixture, more like a software blog entry, where you put the pieces of the software in amongst the explanatory writeup. It has caught on even less, unless you count software blogs.

replies(1): >>41882585 #
57. zuminator ◴[] No.41881938[source]
Where are you getting $2k/person/month? ChatGPT allegedly has on the order of 100 million users. Divide $5b by that and you get a $50 deficit per person per year, meaning they could raise their prices by less than four and a half dollars per user per month to break even.

Even if they were to only gouge the current ~11 million paying subscribers, that's around $40/person/month over current fees to break even. Not chump change, but nowhere close to $2k/person/month.
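Both figures fall out of the same division. A sketch, using the rough public user counts quoted above (these are estimates, not reported numbers):

```python
# Spread a ~$5B annual loss over all users vs. paying subscribers only.
ANNUAL_LOSS = 5_000_000_000
TOTAL_USERS = 100_000_000    # ~100M users, free tier included
PAYING_SUBS = 11_000_000     # ~11M paying seats (Plus + business)

per_user_per_month = ANNUAL_LOSS / TOTAL_USERS / 12   # spread over everyone
per_sub_per_month = ANNUAL_LOSS / PAYING_SUBS / 12    # paying users only

print(f"all users:   +${per_user_per_month:.2f}/month")
print(f"subscribers: +${per_sub_per_month:.2f}/month")
```

That's about $4.17/month across everyone, or roughly $38/month if only paying subscribers absorb it.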

replies(5): >>41882007 #>>41882136 #>>41882244 #>>41882685 #>>41884430 #
58. ants_everywhere ◴[] No.41882007[source]
I think the question is more how much the market will bear in a world where MS owns the OpenAI IP and it's only available as an Azure service. That's a different question from what OpenAI needs to break even this year.
59. X6S1x6Okd1st ◴[] No.41882059[source]
ChatGPT doesn't have much of a moat. Claude is comparable for coding tasks, and Llama isn't far behind.

No biz collapse will remove llama from the world, so if you're worried about tools disappearing then just only use tools that can't disappear

replies(1): >>41883441 #
60. Kiro ◴[] No.41882152[source]
It's not either or. In my specific situation Cursor is such a productivity booster that I can't imagine going back. It's not a theoretical question.
replies(1): >>41884288 #
61. Kiro ◴[] No.41882159{5}[source]
Not my experience at all. What kind of code are you using it for?
replies(1): >>41901618 #
62. Kiro ◴[] No.41882188{4}[source]
Cursor has no problem making complicated PRs spanning multiple files and modules in my legacy spaghetti code. I wouldn't be surprised if it could replace most programmers already.
replies(2): >>41884371 #>>41901650 #
63. kergonath ◴[] No.41882190{5}[source]
> Nuclear power plants generate a byproduct that inherently limits the performance of computers; heat.

The reactor does not need to be in the datacenter. It can be a couple hundreds meters away, bog-standard cables would be perfectly able to move the electrons. The cables being 20m or 200m long does not matter much.

You’re right though, putting them in the same building as a datacenter still makes no sense.

64. kergonath ◴[] No.41882203{4}[source]
It makes zero sense to build them in datacenters and I don’t know of any safety authority that would allow deploying reactors without serious protection measures that would at the very least impose a different, dedicated building.

At some point it does make sense to have a small reactor powering a local datacenter or two, however. Licensing would still be not trivial.

65. alpha_squared ◴[] No.41882244[source]
What you're suggesting is the basic startup math for any typical SaaS business. The problem is OpenAI and the overall AI space is raising funding on the promise of being much more than a SaaS. If we ignore all the absurd promises ("it'll solve all of physics"), the promise to investors is distilled down to this being the dawn of a new era of computing and investors have responded by pouring in hundreds of billions of dollars into the space. At that level of investment, I sure hope the plan is to be more than a break-even SaaS.
66. insane_dreamer ◴[] No.41882287[source]
Amazon was losing money because it was building the moat

It's not clear that OpenAI has any moat to build

67. kergonath ◴[] No.41882295{6}[source]
> But OpenAI has to keep releasing new ever-increasing models to justify it all.

There seems to be some renewed interest for smaller, possibly better-designed LLMs. I don’t know if this really lowers training costs, but it makes inference cheaper. I suspect at some point we’ll have clusters of smaller models, possibly activated when needed like in MoE LLMs, rather than ever-increasing humongous models with 3T parameters.

68. kergonath ◴[] No.41882314[source]
> Our Thought Leaders think like that at least. They also pretty much told us to use AI or get fired

Ours told us not to use LLMs because they are worried about leaking IP and confidential data.

replies(1): >>41887193 #
69. achierius ◴[] No.41882381[source]
It's a devil's bargain, and not just in terms of the _individual_ payoffs that OpenAI employees/executives might receive. There's a reason why Google/Microsoft/Amazon/... ultimately failed to take the lead in GenAI, despite every conceivable advantage (researchers, infrastructure, compute, established vendor relationships, ...). The "autonomy" of a startup is what allows it to be nimble; the more Microsoft is able to tell OpenAI what to do, the more I expect them to act like DeepMind, a research group set apart from their parent company but still beholden to it.
70. Tostino ◴[] No.41882539{6}[source]
You are right, that wasn't a charitable reading of your comment. Should have kept it to myself.

Sorry for being rude.

71. CamperBob2 ◴[] No.41882585{6}[source]
I will observe that there have been at least three natural-language attempts in the past, none of which succeeded in being "just write it down". COBOL...

I think we're done here.

replies(1): >>41901643 #
72. CamperBob2 ◴[] No.41882645{4}[source]
If you can drag a 9-dan grandmaster up and down the Go ban, you can write a computer program or run a company.
73. layer8 ◴[] No.41882685[source]
> ChatGPT allegedly has on the order of 100 million users.

That’s users, not subscribers. Apparently they have around 10 million ChatGPT Plus subscribers plus 1 million business-tier users: https://www.theinformation.com/articles/openai-coo-says-chat...

To break even, that means ChatGPT Plus would have to cost around $50 per month, if not more, because fewer people will be willing to pay that.

replies(1): >>41883064 #
74. empath75 ◴[] No.41882942[source]
Depending on when you bought it, it was a pretty risky play until AWS came out and got traction. Their retail business _still_ doesn't make money.
75. ◴[] No.41883046[source]
76. zuminator ◴[] No.41883064{3}[source]
You only read the first half of my comment and immediately went on the attack. Read the whole thing.
replies(2): >>41884682 #>>41884737 #
77. Spivak ◴[] No.41883088[source]
Take the "millennial subsidy" while the money font still floweth. If it gets cut off eventually, so be it.
78. jazzyjackson ◴[] No.41883125{4}[source]
I mean use to be I could get an Uber across Manhattan for $5

From my view chatbots are still in the "selling dollars for 90 cents" category of product, of course it sells like discounted hotcakes...

replies(2): >>41883329 #>>41887013 #
79. Taylor_OD ◴[] No.41883171[source]
Is anyone using it to the point where their skills start to atrophy? I use it fairly often but mostly for boilerplate code or simple tasks. The stuff that has specific syntax that I have to look up anyway.

That feels like saying that using spell check or autocomplete will make one's spelling abilities atrophy.

80. int_19h ◴[] No.41883280[source]
Machine translation alone is a huge positive value. What GPT can do in this area is vastly better than anything before it.
81. seizethecheese ◴[] No.41883329{5}[source]
… this is conflating two things: marginal and average cost/revenue. They are very, very different.
82. mlnj ◴[] No.41883441[source]
And Zuckerberg has vowed to pump billions more into developing and releasing more Llama. I believe "Altman declaring AGI is almost here" was peak OpenAI and now I will just have some popcorn ready.
83. epolanski ◴[] No.41884288{3}[source]
+1, people who give up on tools like Cursor are just less productive. That's not theoretical, that's a fact.
84. tempusalaria ◴[] No.41884310{4}[source]
Definitely they will.

OpenAI’s potential issue is that if Google offers tokens at a 10% gross margin, OpenAI won’t be able to offer API tokens at a positive gross margin at all. Their only real chance is building a big subscription business; there's no way they can compete with a hyperscaler on API cost in the long run.

85. esafak ◴[] No.41884371{5}[source]
How well does it propagate the effects of the files it has modified?
86. danpalmer ◴[] No.41884430[source]
> around $40/person/month over current fees

So 3x the fees, if they're currently at $20/user/month. That's a big jump, and puts the tool in a different spending category as it goes from just another subscription to more like another utility bill in users' minds. The amount of value you're getting out of it is hard to quantify for most people, so I imagine they'd lose customers.

Also there's a clear market trend, and that is that AI services are $20 for the good version, or free. $60 is not a great price to compete in that market at unless you're clearly better.

87. ◴[] No.41884682{4}[source]
88. layer8 ◴[] No.41884737{4}[source]
My apologies. The “break even” calculation in the first paragraph seemed so absurd to me (as much as the $2K per month) that I must have skipped over the second paragraph. Nevertheless, with regard to the second paragraph, to fill the $5B gap it would take over $30 on top (hence $50), which would be quite a high price for many users.
89. ◴[] No.41885425[source]
90. sebzim4500 ◴[] No.41887013{5}[source]
The difference is that Uber was making a loss on those journeys whereas OpenAI aren't making a loss on chatgpt subscriptions.

They make a loss overall because they spend a ton on R&D.

91. switch007 ◴[] No.41887193{3}[source]
I think Enterprise plans mostly solve this. And Copilot is quite aggressive about blocking public code (I haven't looked into what that really means or what we've configured, I just get the error often)
replies(1): >>41901659 #
92. ToucanLoucan ◴[] No.41887216{4}[source]
> So it's not inverting in running costs (which are the thing that will kill a company). Struggling with training costs (which is where you correctly identify OpenAI is spending) will stop growth perhaps, but won't kill you if you have to pull the plug.

I don’t think it’s that cut and dried, though. Many users run into recurring issues with things like reasoning (which is (allegedly) being addressed) and hallucinations (less so), both of which in turn become core selling points for subsequent, better versions of the tech. Whether the subsequent versions deliver on those promises is irrelevant (though they often don’t); the promise itself is, at least IMHO, a core reason to “stay on board” with the product. I have to think that if they announced tomorrow they couldn’t afford to train the next one, there would be a pretty substantial attrition of paying users, which then makes it even harder to resume training in the future, no?

93. simonswords82 ◴[] No.41887468[source]
Copilot really is garbage and I just don’t understand why. Is Microsoft not using OpenAI’s latest models?
94. guappa ◴[] No.41901613[source]
A junior developer gets better every day (hopefully), while an AI is more or less a blank slate every time.
95. guappa ◴[] No.41901618{6}[source]
Not everyone does CRUD applications.
96. guappa ◴[] No.41901643{7}[source]
Please ping me in 5 years to reassess. I'm ready to bet 1 month of my salary that human software developers will still exist then.
replies(2): >>41901834 #>>41909223 #
97. guappa ◴[] No.41901650{5}[source]
Not everybody's job is necessarily as simple as yours.
replies(1): >>41902183 #
98. guappa ◴[] No.41901659{4}[source]
Ah yes, because microsoft never had a leak right?

I can absolutely not go on the internet and download the source code for windows, because microsoft's security is impeccable.

99. fragmede ◴[] No.41901834{8}[source]
One month's salary now, or then? You'll be five years further into your career, so it'll hopefully be higher, but the industry is also changing. Even if ChatGPT-5 never comes out, it's already making waves in developer productivity wherever there's enough training data. So in five years, will it still be a highly paid $300k/yr position at a FAANG, or will it pay more like being a line cook at a local diner? Or maybe it'll follow the pay rate for musicians: trumpet players made a decent living before cheap records came out. Since then, the rise of records, then radio, CDs, and now the Internet and Spotify means your local bar doesn't need a person to come over and play music in order to have music. Or visuals, for that matter: the sports bar wouldn't exist without television. So maybe programming will be like being a musician in five years, with some making Taylor Swift money and others busking at subway entrances. I'm hoping it'll still be a highly paid position, but it would be foolish of me not to see how easy it is to make an app by sitting down with Claude, giving it some high-level directives, and iterating.
replies(1): >>41903301 #
100. Kiro ◴[] No.41902183{6}[source]
It's anything but simple; good try though.

From your comments, it's clear you've already made up your mind that it can't possibly be true and you're just trying to find rationalisations to support your narrative. I don't understand why you feel the need to be rude about it though.

replies(1): >>41903294 #
101. guappa ◴[] No.41903294{7}[source]
My comments stem from real world experience, since I do exist outside of my comments (although I understand it can be hard to imagine).

Every single person who claimed AI was a great help to their job in writing software that I've encountered was either inexperienced (regardless of age) or working solely on very simple tasks.

replies(1): >>41906000 #
102. guappa ◴[] No.41903301{9}[source]
You have to make a PRODUCT, not a hello-world app :)
replies(1): >>41906400 #
103. CamperBob2 ◴[] No.41906000{8}[source]
The fact that it's even remotely useful at all at such an early, primitive stage of development should give you pause.

When it comes to stuff like this, the state of the art at any given time is irrelevant. Only the first couple of time derivatives matter. How much room for growth do you have over the next 5-10 years?

replies(1): >>41906310 #
104. guappa ◴[] No.41906310{9}[source]
Early, primitive stage? Markov chain generators have been around for at least 30 years…

As I said, I'm taking bets…

replies(1): >>41909230 #
105. fragmede ◴[] No.41906400{10}[source]
Using an advanced programming technique called modularization, where you put the code into multiple different files, you may find it possible to get around the LLM's problem of limited context window length and find success building more than a trivial todo app. Of course you'd have to try this for yourself instead of parroting what you read on the Internet, so your mileage may vary. =p
106. CamperBob2 ◴[] No.41909223{8}[source]
Well, of course they will. Human farmers still exist, too, but their day-to-day activities don't resemble their ancestors' jobs in the slightest.
107. CamperBob2 ◴[] No.41909230{10}[source]
> Markov chain generators have been around for at least 30 years…

Show me a Markov generator that can explain how it works.