
421 points briankelly | 4 comments
necovek ◴[] No.43575664[source]
The premise may well be true, but as an actually seasoned Python developer, I've taken a look at one file: https://github.com/dx-tooling/platform-problem-monitoring-co...

All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top, at module level (which relies on module import caching to avoid being reapplied), to building a config file parser themselves instead of using the stdlib one, to a race condition in load_json, where file existence is checked with an if and the code then carries on as if the file is certainly there...

In a nutshell, if the rest of it is like this, it simply sucks.
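[The load_json race described above is a textbook check-then-use (TOCTOU) bug: the file can disappear between the existence check and the open. A minimal sketch of the EAFP-style fix — the function name, signature, and default-value behavior here are assumptions for illustration, not the repo's actual code:]

```python
import json
from pathlib import Path


def load_json(path: Path, default=None):
    """Read JSON from `path`, returning `default` if the file is missing.

    Instead of checking existence first (which races against deletion
    between the check and the open), just attempt the open and handle
    the failure -- the EAFP style idiomatic in Python.
    """
    try:
        with path.open("r", encoding="utf-8") as fh:
            return json.load(fh)
    except FileNotFoundError:
        return default
```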

nottorp ◴[] No.43576545[source]
Here's a real-life example from today:

I asked $random_llm for code to recursively scan a directory and return a list of file names relative to the top directory scanned, along with their sizes.

It gave me working code. On my test data directory it needed ... 6.8 seconds.

After 5 minutes of eliminating obvious inefficiencies, the new code needed ... 1.4 seconds. And I hadn't even read the docs for the functions used yet; I just changed what seemed to generate too many filesystem calls per file.
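[Speedups of this kind usually come from cutting per-file syscalls. A sketch of one way to write the task with os.scandir, whose DirEntry objects can answer is_dir/is_file from the directory read itself and cache stat results — this is illustrative only, not the code the LLM or the commenter actually produced:]

```python
import os


def file_sizes(top: str) -> list[tuple[str, int]]:
    """Recursively list (path relative to `top`, size in bytes).

    os.scandir's DirEntry typically answers is_dir()/is_file() from the
    directory listing without an extra stat() per entry, and caches the
    stat() result it does make, unlike a naive os.walk + os.path.getsize
    loop that stats every file at least twice.
    """
    results = []
    stack = [top]
    while stack:
        current = stack.pop()
        with os.scandir(current) as it:
            for entry in it:
                if entry.is_dir(follow_symlinks=False):
                    stack.append(entry.path)
                elif entry.is_file(follow_symlinks=False):
                    rel = os.path.relpath(entry.path, top)
                    size = entry.stat(follow_symlinks=False).st_size
                    results.append((rel, size))
    return results
```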

bongodongobob ◴[] No.43576568[source]
Nice, sounds like it saved you some time.
nottorp ◴[] No.43576603[source]
You "AI" enthusiasts always try to find a positive spin :)

What if I had trusted the code? It was working after all.

I'm guessing that if I asked for string manipulation code, it would have done something worth posting on Accidentally Quadratic.
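[The classic Accidentally Quadratic string pattern looks like this — a toy illustration of the failure mode being alluded to, not anything an LLM actually produced here:]

```python
def join_quadratic(parts: list[str]) -> str:
    # Each concatenation may copy the entire accumulated string, so
    # joining n parts can cost O(n^2) character copies overall.
    out = ""
    for p in parts:
        out = out + p
    return out


def join_linear(parts: list[str]) -> str:
    # str.join computes the total length once and copies each part
    # exactly once: O(n) overall.
    return "".join(parts)
```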

1. noisy_boy ◴[] No.43577234[source]
Depends on how toxic the culture is in your workplace. This could have been an opportunity to "work" on another JIRA task showing a 600% improvement over AI-generated code.
2. nottorp ◴[] No.43579155[source]
I'll write that down for reference in case I do ever join an organization like that in the future, thanks.

600% improvement is worth what, 3 days of billable work if it lasts 5 minutes?

3. noisy_boy ◴[] No.43582519[source]
A series of such "improvements" could be fame and fortune in your team/group/vertical. In such places, the guy who toots the loudest wins the most.
4. nottorp ◴[] No.43583491{3}[source]
So THAT's why large organizations want "AI".

In such a place I should be a very loud advocate of LLMs, use them to generate 100% of my output for new tasks...

... and then "improve performance" by simply fixing all the obvious inefficiencies and brag about the 400% speedups.

Hmm. Next step: instruct the "AI" to use bubblesort.