antirez | 600 points

dakiol:
> Gemini 2.5 PRO | Claude Opus 4

Whether it's vibe coding, agentic coding, or copy-pasting from the web interface to your editor, it's still sad to see the normalization of private (i.e., paid) LLMs. I like the progress that LLMs bring, and I see them as a powerful tool, but I cannot understand how programmers (whether complete nobodies or popular figures) don't mind adding a strong dependency on a third party in order to keep programming.

Programming used to be (and still is, to a large extent) an activity that can be done with open and free tools. I am afraid that in a few years that will no longer be possible (as in: most programmers will be so tied to a paid LLM that not using one would be like not using an IDE or vim today), since everyone is using private LLMs. The excuse “but you earn six figures, what's $200/month to you?” doesn't really capture the issue here.

simonw:
The models I can run locally aren't as good yet, and are way more expensive to operate.

Once it becomes economical to run a Claude 4 class model locally, you'll see a lot more people doing that.

The closest you can get right now might be Kimi K2 on a pair of 512GB Mac Studios, at a cost of about $20,000.
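
If you do want to experiment with the local route, the plumbing is straightforward, because llama.cpp's llama-server speaks the OpenAI wire protocol. A minimal sketch, assuming a server is already running on localhost:8080 (the model name, port, and prompt are illustrative, whatever your setup uses):

    # Query a locally served open-weight model through llama.cpp's
    # OpenAI-compatible endpoint. No cloud account involved.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:8080/v1",  # local llama-server (illustrative)
        api_key="unused",  # the local server ignores it; the client requires one
    )

    resp = client.chat.completions.create(
        model="kimi-k2",  # whatever name your server reports; illustrative
        messages=[{"role": "user", "content": "Explain tail-call optimization."}],
    )
    print(resp.choices[0].message.content)

The client code is identical to what you'd write against a hosted API, so the open question is really just hardware economics.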

smallerize:
I don't have to run it locally to avoid lock-in. Several cloud providers offer full DeepSeek R1 or Kimi K2 for $2–3 per million output tokens.
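
To make that concrete: because these hosts all expose OpenAI-compatible endpoints for the same open-weight models, moving between them is a config change rather than a rewrite. A sketch, with placeholder provider URLs and model names (check each host's docs for the real values):

    # Same application code against interchangeable hosts of an open model.
    # Provider base URLs and model identifiers below are placeholders.
    import os
    from openai import OpenAI

    PROVIDERS = {
        "host_a": ("https://api.host-a.example/v1", "deepseek-r1"),
        "host_b": ("https://api.host-b.example/v1", "kimi-k2"),
    }

    base_url, model = PROVIDERS[os.environ.get("LLM_PROVIDER", "host_a")]
    client = OpenAI(base_url=base_url, api_key=os.environ["LLM_API_KEY"])

    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Summarize this diff."}],
    )
    print(resp.choices[0].message.content)

If a host raises prices or disappears, you point LLM_PROVIDER somewhere else. With Claude, there is no somewhere else.
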
ketzo:
In what ways is that better for you than using, e.g., Claude? Aren't you then just “locked in” to having a cloud provider that offers those models cheaply?

viraptor:
Any provider can run Kimi (including you, if you'd get enough use out of it), but only one can run Claude.

hhh:
Two can run Claude: AWS and Anthropic. The Claude rollout on AWS is pretty good, but they do some weird stuff in estimating your quota usage through your max_tokens parameter.

I trust AWS, but we also pay big bucks to them and have a reason to trust them.
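
For reference, this is roughly what the call looks like through boto3's Converse API. The model ID is illustrative, and the maxTokens caveat reflects the quota behavior described above as we observed it, not documented behavior:

    # Calling Claude on Bedrock via the Converse API.
    import boto3

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

    resp = bedrock.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",  # illustrative
        messages=[{"role": "user", "content": [{"text": "Review this function."}]}],
        # Keep maxTokens close to what you actually need: Bedrock seems to
        # count the requested maximum against your throughput quota, not
        # just the tokens the model actually generates.
        inferenceConfig={"maxTokens": 1024},
    )
    print(resp["output"]["message"]["content"][0]["text"])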

viraptor:
In a way... But it's still just because Anthropic lets them. Things can change at any point.