421 points briankelly | 8 comments
necovek No.43575664
The premise might be true, but as an actually seasoned Python developer, I've taken a look at one file: https://github.com/dx-tooling/platform-problem-monitoring-co...

All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top, at module level (which relies on module import caching not to be reapplied), to building a config-file parser by hand instead of using the stdlib one, to a race condition in load_json, which checks for file existence with an if and then carries on as if the file is certainly there...
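
To make the raciness concrete, here is a minimal, hypothetical sketch of that check-then-use pattern next to the EAFP alternative; the function shape is an assumption for illustration, not the project's actual code:

  import json
  import os

  # Racy (TOCTOU): the file can vanish between the existence check and the open.
  def load_json_racy(path: str) -> dict:
      if os.path.exists(path):
          with open(path) as f:
              return json.load(f)
      return {}

  # EAFP: attempt the open and handle the failure, so there is no gap to race in.
  def load_json_safe(path: str) -> dict:
      try:
          with open(path) as f:
              return json.load(f)
      except FileNotFoundError:
          return {}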

In a nutshell, if the rest of it is like this, it simply sucks.

milicat No.43575953
The more I browse through this, the more I agree. I feel like one could delete almost all comments from that project without losing any information – which means that, at the very least, the variable naming is (probably?) sensible. Then again, I don't know the application domain.

Also…

  from pathlib import Path

  def _save_current_date_time(current_date_time_file: str, current_date_time: str) -> None:
      with Path(current_date_time_file).open("w") as f:
          f.write(current_date_time)
there is a lot of obviously useful abstraction being missed, wasting lines of code that will all need to be maintained.
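
For comparison, pathlib already provides that abstraction, so the whole helper collapses to a single stdlib call (a sketch of the equivalent, not a proposed patch):

  from pathlib import Path

  # Does the same thing as the three-line helper above.
  def _save_current_date_time(current_date_time_file: str, current_date_time: str) -> None:
      Path(current_date_time_file).write_text(current_date_time)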

The scary thing is: I have seen professional human developers write worse code.

1. Aurornis No.43576425
> I feel like one could delete almost all comments from that project without losing any information

I'm far from a heavy LLM coder, but I've noticed a massive excess of unnecessary comments in most output. I'm always deleting the obvious ones.

But then I started noticing that the comments seem to help the LLM navigate additional code changes. It’s like a big trail of breadcrumbs for the LLM to parse.

I wouldn’t be surprised if vibe coders get trained to leave the excess comments in place.

2. lolinder No.43577118
It doesn't hurt that the model vendors get paid by the token, so there's zero incentive to correct this pattern at the model layer.
3. thesnide No.43578376
or the model gets trained on teaching code, which naturally contains lots of comments.

the dev is just lazy and leaves them out, whereas the model doesn't need to be lazy, since it's paid by the token.

4. nostromo No.43578911
LLMs are also good at commenting on existing code.

It’s trivial to ask Claude via Cursor to add comments to illustrate how some code works. I’ve found this helpful with uncommented code I’m trying to follow.

I haven’t seen it hallucinate an incorrect comment yet, but sometimes it will leave a TODO comment saying a section should be made clearer. (Rude… haha)

5. cztomsik No.43579671
More tokens -> more compute involved. Attention-based models work by attending every token to every other token, so more tokens means not only more time to "think" but also the ability to think "better". That is also at least part of the reason why o1/o3/R1 can sometimes solve what other LLMs could not.
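
As a rough illustration of why compute grows with token count (a toy numpy sketch, not any production model's actual kernel): the attention score matrix alone is quadratic in sequence length.

  import numpy as np

  def attention(q: np.ndarray, k: np.ndarray, v: np.ndarray) -> np.ndarray:
      # scores has shape (n, n): every token attends to every other token,
      # so compute and memory grow quadratically with sequence length n.
      scores = q @ k.T / np.sqrt(q.shape[-1])
      weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
      weights /= weights.sum(axis=-1, keepdims=True)
      return weights @ v

  n, d = 1024, 64               # doubling n quadruples the (n, n) score matrix
  q = k = v = np.random.randn(n, d)
  out = attention(q, k, v)      # shape (1024, 64)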

Anyway, I don't think any of the current LLMs are really good for coding. What they're good at is copy-pasting (with some minor changes) from the massive code corpus they've been pre-trained on. For example, give one some Zig code and it's straight-up unable to solve even basic tasks. Same if you give it a really unique task, or if you simply ask for potential improvements to your existing code. Very, very bad results, no signs of out-of-the-box thinking whatsoever.

BTW: I think what people are missing is that LLMs are really great at language modeling. I had great results, and boosts in productivity, just by being able to prepare the task specification and make quick changes to it really easily. Once I have a good understanding of the problem, I can usually implement everything quickly, and in a much, much better way than any LLM currently can.

6. dkersten No.43579997
What’s worse, I get a lot of comments saying what the AI did, not what the code does or why. E.g. “moved this from file xy”, “code deleted because we have abc”, etc. Completely useless stuff that should be communicated in the chat window, not in the code.
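
Invented examples of the pattern, for illustration only:

  # Moved this from file xy.py        <- change-log noise; belongs in the chat/commit
  # Code deleted because we have abc  <- describes the edit, not the code
  def fetch_report(url: str) -> bytes:
      # A useful comment explains why, not what: retries live in the caller
      # so that the backoff policy stays in one place.
      ...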
7. pastage No.43580118
I have seldom seen insightful comments from LLMs. They are usually better than "comment what the line does" noise, and useful for getting a hint about undocumented code, but not by much. My experience is limited, but with what I have seen I do agree. As long as you keep to the beaten path it is OK; comments are not one of those paths.
8. Workaccount2 No.43587265
I have tried getting Gemini 2.5 to output "token-efficient" code, i.e. no comments, variables kept to 1 or 2 letters, and code as condensed as possible.

It didn't work out that great. I think all the context in the verbose code it writes actually helps it produce better code. Shedding context to free up tokens isn't so straightforward.