
600 points | antirez | 1 comment
dakiol (No.44625484)
> Gemini 2.5 PRO | Claude Opus 4

Whether it's vibe coding, agentic coding, or copy-pasting from the web interface to your editor, it's still sad to see the normalization of private (i.e., paid) LLM models. I like the progress that LLMs introduce and I see them as a powerful tool, but I cannot understand how programmers (whether complete nobodies or popular figures) don't mind adding a strong dependency on a third party in order to keep programming. Programming used to be (and still is, to a large extent) an activity that can be done with open and free tools. I am afraid that in a few years that will no longer be possible (as in: most programmers will be so tied to a paid LLM that not using one would be like not using an IDE or vim today), since everyone is using private LLMs. The excuse "but you earn six figures, what's $200/month to you?" doesn't really capture the issue here.

ozgung (No.44626378)
> The excuse "but you earn six figures, what's $200/month to you?" doesn't really capture the issue here.

Just like every other subscription model, including the one in the Black Mirror episode "Common People": the value is too good to be true for the price at the beginning, but you become their prisoner in the long run, with increasing prices and degrading quality.

lencastre (No.44626418)
Can you expand on your argument?
majormajor (No.44627412)
I don't think it's subscriptions so much as consumer startup pricing strategies:

Netflix/Hulu were "losing money on streaming"-level cheap.

Uber was "losing money on rides"-level cheap.

WeWork was "losing money on real estate"-level cheap.

Until someone releases wildly profitable LLM-company financials, it's reasonable to expect prices to go up in the future.

Course, advances in compute are much more reasonable to expect than advances in cheap media production, taxi driver availability, or office space. So there's a possibility it could be different. But that might require capabilities to hit a hard plateau so that the compute can keep up. And that might make it hard to justify the valuations some of these companies have... which could also lead to price hikes.

But I'm not as worried as others. None of these have lock-in. If the prices go up, I'm happy to cancel or stop using it.

For a current student or new grad who has only ever used the LLM tools, this could be a rougher transition...

Another thing that would change the calculation is if it becomes impossible to maintain large production-level systems competitively without these tools. That's presumably one of the things the companies are betting on. We'll see if they get there. At that point many of us probably have far bigger things to worry about.

bee_rider (No.44628124)
It isn’t even that unreasonable for the AI companies to not be profitable at the moment (they are probably betting they can decrease costs before they run out of money, and want to offer people something like what the final experience will be). But it’s totally bizarre that people are comparing the cost of running locally to the current investor-subsidized remote costs.

Eventually, these things should get closer. Eventually the hosted solutions have to make money. Then we’ll see if the costs of securing everything and paying some tech company CEO’s wage are higher than the benefits of centrally locating the inference machines. I expect local running will win, but the future is a mystery.

andyferris (No.44630507)
I think it's the time-slice problem.

Locally, I need to pay for my GPU hardware 24x7. Some of the cost is electricity, but at my scale it's mostly going to be hardware cost (plus I have excess free energy to burn).

Remotely, I probably use less than an hour of compute a day, and only on workdays.

Combined with batching being computationally more efficient, it's hard to see anything other than local inference ALWAYS being ~10x more expensive than data-centre inference.

(I'd hope, and would love, to be proven wrong about this as it plays out - but that's the way I see it now.)
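The utilization argument above can be sketched with some back-of-envelope arithmetic. Every number here (hardware price, useful life, utilization, batching speedup) is an illustrative assumption, not a measurement - the point is only that the ratio is driven by used-hours, not by who owns the GPU:

```python
# Rough sketch: cost per hour of *actually used* GPU time, local vs. hosted.
# All inputs are assumed placeholder values for illustration.

def local_cost_per_used_hour(hardware_cost=2000.0,   # assumed GPU price, USD
                             lifetime_years=3,        # assumed useful life
                             used_hours_per_day=1.0,  # ~1h of real inference per workday
                             workdays_per_year=250):
    # The GPU sits idle most of the time, but you paid for all of it.
    used_hours = lifetime_years * workdays_per_year * used_hours_per_day
    return hardware_cost / used_hours

def hosted_cost_per_used_hour(hardware_cost=2000.0,
                              lifetime_years=3,
                              utilization=0.7,        # assumed fleet utilization across many users
                              batching_speedup=4.0):  # assumed throughput gain from batching
    # A shared fleet keeps the same hardware busy around the clock,
    # and batching serves several requests per GPU-hour.
    busy_hours = lifetime_years * 365 * 24 * utilization
    return hardware_cost / (busy_hours * batching_speedup)

local = local_cost_per_used_hour()
hosted = hosted_cost_per_used_hour()
print(f"local:  ${local:.2f} per used hour")
print(f"hosted: ${hosted:.3f} per used hour")
print(f"ratio:  {local / hosted:.0f}x")
```

With these particular assumptions the gap comes out far larger than 10x; tweaking the inputs moves the number a lot, but the idle-hardware term dominates unless the local GPU is kept busy nearly all day.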