
421 points by briankelly | 3 comments
necovek No.43575664
The premise might be true, but as an actually seasoned Python developer, I've taken a look at one file: https://github.com/dx-tooling/platform-problem-monitoring-co...

All of it smells of a (lousy) junior software engineer: from configuring the root logger at the top of the module, at import time (which relies on module import caching so it doesn't get reapplied), to building a config file parser by hand instead of using the stdlib one, to a race condition in load_json where the file's existence is checked with an if and the code then carries on as if the file is certainly still there...
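
For illustration, a minimal sketch (hypothetical code, not from that repository) of the more conventional versions of each: a module-level logger instead of configuring the root logger at import, the stdlib configparser instead of a hand-rolled one, and an EAFP-style load_json that avoids the check-then-read race:

    # Hypothetical sketch, not the repository's actual code.
    import configparser
    import json
    import logging

    # Module-level logger; root-logger configuration belongs in the application entry point.
    logger = logging.getLogger(__name__)

    def load_config(path: str) -> configparser.ConfigParser:
        """Use the stdlib parser instead of hand-rolling one."""
        config = configparser.ConfigParser()
        config.read(path)
        return config

    def load_json(path: str) -> dict | None:
        """EAFP: try to open the file instead of checking existence first,
        which avoids the race between the check and the read."""
        try:
            with open(path, encoding="utf-8") as f:
                return json.load(f)
        except FileNotFoundError:
            logger.warning("File not found: %s", path)
            return None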

In a nutshell, if the rest of it is like this, it simply sucks.

1. dheera No.43578952
I disagree; I think it's absolutely astounding that they've gotten this good in such a short time, and I expect we'll get even better models in the near future.

By the way, prompting models properly helps a lot for generating good code. They get lazy if you don't explicitly ask for well-written code (or put that in the system prompt).

It also helps immensely to have two contexts, one that generates the code and one that reviews it (and has a different system prompt).
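
For example, here is a minimal sketch of that two-context setup, assuming the OpenAI Python SDK (the model name and both system prompts are placeholders):

    # Hypothetical sketch of the two-context pattern; model name and prompts are placeholders.
    from openai import OpenAI

    client = OpenAI()

    GENERATOR_SYSTEM = "You are a senior Python engineer. Write clean, idiomatic, well-tested code."
    REVIEWER_SYSTEM = "You are a strict code reviewer. Point out bugs, races, and style problems."

    def ask(system: str, user: str) -> str:
        # Each call is its own context: just the system prompt and one user message.
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return response.choices[0].message.content

    task = "Write a load_json(path) helper that tolerates missing files."
    code = ask(GENERATOR_SYSTEM, task)
    review = ask(REVIEWER_SYSTEM, f"Review this code:\n\n{code}")

Keeping the reviewer in a separate context means it only sees the finished code, not the generator's conversation, so it is less likely to simply rubber-stamp it.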

2. henrikschroder No.43579110
> They get lazy if you don't explicitly ask for well-written code (or put that in the system prompt).

This is insane on so many levels.

3. globnomulous No.43585690
Computer, enhance 15 to 23.