
600 points antirez | 3 comments | source
dakiol ◴[] No.44625484[source]
> Gemini 2.5 PRO | Claude Opus 4

Whether it's vibe coding, agentic coding, or copy-pasting from the web interface to your editor, it's still sad to see the normalization of private (i.e., paid) LLM models. I like the progress that LLMs introduce and I see them as a powerful tool, but I cannot understand how programmers (whether complete nobodies or popular figures) don't mind adding a strong dependency on a third party in order to keep programming. Programming used to be (and still is, to a large extent) an activity that can be done with open and free tools. I am afraid that in a few years that will no longer be possible (as in: most programmers will be so tied to a paid LLM that not using one would be like not using an IDE or vim nowadays), since everyone is using private LLMs. The excuse "but you earn six figures, what's $200/month to you?" doesn't really capture the issue here.

20k ◴[] No.44628044[source]
+1, I use exclusively free tools for this exact reason. I've been using the same tools for 15 years now (GCC + IDE), and they work great

There is a 0% chance that I'm going to subscribe to being able to program, because it's actively a terrible idea. You have to be very naïve to think that any of these companies will still be around and supporting your tools in 10-20 years' time, so if you get proficient with them you're absolutely screwed

I've seen people say that AI agents are great because, instead of using git directly, they can ask their AI agent to do it. That would be fine if it were a free tool, but here you're subscribing to the ability to even start and maintain projects

A lot of people are about to learn an extremely blunt lesson about capitalism

replies(1): >>44628786 #
1. moron4hire ◴[] No.44628786[source]
A lot of people's problems with Git would go away if they just took a weekend and "read the docs." It's shocking how resistant most people are to the idea of studying to improve their craft.

I've been spending time with my team, just a few hours a week, on training them on foundational things, vs every other team in the company just plodding along, trying to do things the same way they always have, which already wasn't working. It's gotten to where my small team of 4 is getting called in to clean up after these much larger teams fail to deliver. I'm pretty proud of my little junior devs.

replies(1): >>44730490 #
2. haskellshill ◴[] No.44730490[source]
This is a reply to an old comment https://news.ycombinator.com/item?id=44452679 (since I cannot reply in the original thread)

> Even assuming python's foreach loop in these cases get optimized down to a very bare for loop, the operations being performed are dominated by the looping logic itself, because the loop body is so simple.

> Each iteration of a for loop performs one index update and one termination comparison. For a simple body that is just an XOR, that's the difference between performing 5 operations (update, exit check, read array, XOR with value, XOR with index) per N elements in the one loop case versus 7 operations (update, exit, read array, XOR with value, then update, exit, XOR with index) in the two loop case. So we're looking at a 29% savings in operations.

> It gets worse if the looping structure does not optimize to a raw, most basic for loop and instead constructs some kind of lazy collection iterator generalized for all kinds of collections it could iterate over.

> The smaller the loop body, the higher the gains from optimizing the looping construct itself.

Let's test your claims

  import random
  import time

  n = int(1e7)
  A = list(range(1, n + 1))
  random.shuffle(A)
  print("Removed:", A.pop())

  # One loop: XOR index+1 and value together
  t = time.time()
  result = 0
  for idx, val in enumerate(A):
    result ^= idx + 1
    result ^= val
  result ^= n  # enumerate only reaches n-1, since one element was removed
  print("1-loop:", time.time() - t)
  print("Missing:", result)

  # Two loops: XOR the full range, then XOR the array
  t = time.time()
  result = 0
  for value in range(1, n + 1):
    result ^= value
  for value in A:
    result ^= value
  print("2-loop:", time.time() - t)
  print("Missing:", result)
A sample run gives:

  Removed: 2878763
  1-loop: 1.4764018058776855
  Missing: 2878763
  2-loop: 1.1730067729949951
  Missing: 2878763
And after swapping the order of the code blocks just to ensure there's nothing strange going on:

  Removed: 3217501
  2-loop: 1.200080156326294
  Missing: 3217501
  1-loop: 1.5053350925445557
  Missing: 3217501
So indeed we have about a 20% speedup, only in the opposite direction from the one you claimed, likely because enumerate builds and unpacks a tuple on every iteration, overhead that the two plain loops avoid. Perhaps it's best not to assume when talking about performance.
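For a steadier comparison, the same two approaches can be wrapped in functions and averaged with timeit (a sketch, with the loop bodies taken verbatim from above; the function names are mine):

```python
import random
import timeit

n = 10**6
A = list(range(1, n + 1))
random.shuffle(A)
missing = A.pop()  # remove one element; both functions should recover it

def one_loop(A, n):
    # XOR index+1 and value together in a single pass
    result = 0
    for idx, val in enumerate(A):
        result ^= idx + 1
        result ^= val
    return result ^ n  # enumerate stops at n-1; fold in the final n

def two_loop(A, n):
    # XOR the full range, then XOR the array, in two plain loops
    result = 0
    for value in range(1, n + 1):
        result ^= value
    for value in A:
        result ^= value
    return result

# Sanity check: timing is meaningless if the answers differ
assert one_loop(A, n) == two_loop(A, n) == missing

print("1-loop:", timeit.timeit(lambda: one_loop(A, n), number=10))
print("2-loop:", timeit.timeit(lambda: two_loop(A, n), number=10))
```

Averaging over multiple runs inside one process also avoids the startup and ordering effects that single time.time() measurements are prone to.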
replies(1): >>44781988 #
3. moron4hire ◴[] No.44781988[source]
I think all you have managed to prove is A) Python is absurd, and B) you need to learn about appropriate boundaries and when to drop something.

This conversation was a lifetime ago. You couldn't reply to the original thread for a reason.