
421 points briankelly | 1 comments | | HN request time: 0.203s | source
necovek ◴[] No.43575664[source]
The premise might possibly be true, but as an actually seasoned Python developer, I've taken a look at one file: https://github.com/dx-tooling/platform-problem-monitoring-co...

All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top, at module level (which relies on module import caching to avoid being reapplied), to building a config file parser by hand instead of using the stdlib one, to a race condition in load_json, where file existence is checked with an if and the code then carries on as if the file is certainly still there...

In a nutshell, if the rest of it is like this, it simply sucks.
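To illustrate the load_json race mentioned above: checking for the file and then opening it leaves a window where the file can disappear. A sketch of the problem and the usual EAFP fix (the function name and return value are assumptions; the actual helper in the repo may differ):

```python
import json
from pathlib import Path

def load_json_racy(path: str) -> dict:
    # Racy: the file can be deleted between the exists() check and open()
    if Path(path).exists():
        with open(path) as f:
            return json.load(f)
    return {}

def load_json(path: str) -> dict:
    # EAFP: just try to open it and handle the failure case
    try:
        with open(path) as f:
            return json.load(f)
    except FileNotFoundError:
        return {}
```

The second version is shorter and has no window in which another process can invalidate the check.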

milicat ◴[] No.43575953[source]
The more I browse through this, the more I agree. I feel like one could delete almost all comments from that project without losing any information – which means, at least the variable naming is (probably?) sensible. Then again, I don't know the application domain.

Also…

  def _save_current_date_time(current_date_time_file: str, current_date_time: str) -> None:
    with Path(current_date_time_file).open("w") as f:
      f.write(current_date_time)
there is a lot of obviously useful abstraction being missed, wasting lines of code that will all need to be maintained.

The scary thing is: I have seen professional human developers write worse code.

Aurornis ◴[] No.43576425[source]
> I feel like one could delete almost all comments from that project without losing any information

I'm far from a heavy LLM coder, but I've noticed a massive excess of unnecessary comments in most output. I'm always deleting the obvious ones.

But then I started noticing that the comments seem to help the LLM navigate additional code changes. It’s like a big trail of breadcrumbs for the LLM to parse.

I wouldn’t be surprised if vibe coders get trained to leave the excess comments in place.

cztomsik ◴[] No.43579671[source]
More tokens -> more compute involved. Attention-based models work by attending every token to every other token, so more tokens means not only having more time to "think" but also being able to think "better". That is also at least part of the reason why o1/o3/R1 can sometimes solve what other LLMs could not.
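A toy illustration of why that is: self-attention builds an n×n score matrix, so compute (and the amount of pairwise interaction) grows quadratically with token count. This numpy sketch is not a real model, just the shape of the computation:

```python
import numpy as np

def attention(q, k, v):
    # q, k, v: (n_tokens, d) arrays. The scores matrix is
    # (n_tokens, n_tokens), so work scales with n_tokens ** 2.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    # Numerically stable softmax over each row
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 4))   # 8 tokens, dimension 4
out = attention(x, x, x)      # doubling tokens roughly 4x-es the score matrix
```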

Anyway, I don't think any of the current LLMs are really good for coding. What they're good at is copy-pasting (with some minor changes) from the massive code corpus they have been pre-trained on. For example, give one some Zig code and it's straight up unable to solve even basic tasks. Same if you give it a really unique task, or if you simply ask for potential improvements to your existing code. Very, very bad results, no signs of out-of-the-box thinking whatsoever.

BTW: I think what people are missing is that LLMs are really great at language modeling. I had great results, and boosts in productivity, just by being able to prepare the task specification, and do quick changes in that really easily. Once I have a good understanding of the problem, I can usually implement everything quickly, and do it in much much better way than any LLM can currently do.

Workaccount2 ◴[] No.43587265[source]
I have tried getting Gemini 2.5 to output "token-efficient" code, i.e. no comments, variables kept to 1 or 2 letters, code as condensed as possible.

It didn't work out that great. I think that all the context in the verbose coding it does actually helps it to write better code. Shedding context to free up tokens isn't so straightforward.